Awesome AI Safety

📚 A curated list of papers & technical articles on AI Quality & Safety

Figuring out how to make your AI safer? Wondering how to avoid ethical bias, errors, privacy leaks, or robustness issues in your AI models?

This repository contains a curated list of papers & technical articles on AI Quality & Safety that should help 📚

Table of Contents

You can browse papers by Machine Learning task category, and use hashtags like #robustness to explore AI risk types.

  1. General ML Testing
  2. Tabular Machine Learning
  3. Natural Language Processing
  4. Computer Vision
  5. Recommendation System
  6. Time Series

General ML Testing

AI Incident Databases

Tabular Machine Learning

Natural Language Processing

Large Language Models

Computer Vision

Recommendation System

Time Series

Contributions are welcome 💕
