This repository is primarily maintained by Omar Santos (@santosomar) and...
🐢 Open-Source Evaluation & Testing framework for LLMs and ML models
Build LLM apps safely and securely 🛡️
RuLES: a benchmark for evaluating rule-following in language models
A curated list of academic events on AI Security & Privacy
The official implementation of the CCS'23 paper, Narcissus clean-label b...
Train AI (Keras + TensorFlow) to defend apps with Django REST Framework ...
Code for "Adversarial attack by dropping information." (ICCV 2021)
Performing website vulnerability scanning using OpenAI technologies
PyTorch implementation of Parametric Noise Injection for adversarial def...
🚗 A repository for documenting and exploring the world of autonomous d...
[IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the ad...
Website Prompt Injection is a concept that allows for the injection of p...