✅ CODAR is a PyTorch-based framework that analyzes posts (text + media) to detect cyberbullying and offensive content.

The software solution we propose, the Cyber Offense Detecting and Reporting (CODAR) framework, is a system that semi-automates the Internet moderation process.
:star::star::star: We have made our NSFW image classification dataset accessible through Kaggle link-sharing, and we trained our models on it. Our image classification model for content moderation on social media platforms is trained over 330,000 images with a pretrained ResNet50, across five "loosely defined" categories:

- `pornography` - Nude and pornographic images
- `hentai` - Hentai images, including pornographic drawings
- `sexually_provocative` - Sexually suggestive images that stop short of pornography: think semi-nude photos, Playboy, bikini, beach volleyball, etc. Considered acceptable by most public social media platforms.
- `neutral` - Safe-for-work images of everyday things and people
- `drawing` - Safe-for-work drawings (including anime and safe manga)

Our BERT text classification model is trained on the Jigsaw Toxic Comment Classification dataset to predict the toxicity of text, so that cyberbullying and harassment can be pre-emptively prevented before they occur. We chose BERT to overcome challenges such as understanding the context of text (for instance, detecting sarcasm and cultural references): its stacked Transformer encoders and attention mechanism capture the relationships between words and sentences and the context of a given sentence.
```
Text_Input: I want to drug and rape her
======================
Toxic: 0.987
Severe_Toxic: 0.053
Obscene: 0.100
Threat: 0.745
Insult: 0.124
Identity_Hate: 0.019
======================
Result: Extremely toxic; classified as Threat and Toxic
Action: Text has been blocked.
```
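The blocking decision shown above can be sketched as a thresholding step over the six Jigsaw labels. The 0.5 threshold and the helper name `moderate` are assumptions for illustration, not values taken from the CODAR source:

```python
# The six labels from the Jigsaw Toxic Comment Classification dataset.
JIGSAW_LABELS = ["Toxic", "Severe_Toxic", "Obscene", "Threat", "Insult", "Identity_Hate"]

def moderate(scores: dict, threshold: float = 0.5) -> dict:
    """Flag a text whose per-label probabilities exceed the threshold."""
    flagged = [label for label in JIGSAW_LABELS if scores.get(label, 0.0) >= threshold]
    return {
        "flagged_labels": flagged,
        # Block whenever any label crosses the threshold.
        "action": "Text has been blocked." if flagged else "Text is allowed.",
    }

# Scores from the example above.
scores = {"Toxic": 0.987, "Severe_Toxic": 0.053, "Obscene": 0.100,
          "Threat": 0.745, "Insult": 0.124, "Identity_Hate": 0.019}
print(moderate(scores))
# {'flagged_labels': ['Toxic', 'Threat'], 'action': 'Text has been blocked.'}
```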
| Realtime Tweet toxicity prediction (💬) | Testing the models by integrating with our own social media platform (📷+💬) |
|---|---|
| We love Grafana | Automatically hides NSFW content and shows a disclaimer |

| Reporting portal for the public to report content (📷+💬) | Chrome extension to automatically block offensive content (📷+💬) |
|---|---|
| The reporting portal with a dashboard to semi-automate the moderation process | |
```shell
# Install system dependencies
sudo apt update
sudo apt install -y software-properties-common
sudo apt install -y python3 python3-pip
sudo apt install -y python3-opencv

# Install Python dependencies for each component
pip install -r Social_Media_Platform/requirements.txt
pip install -r Content_Moderation/requirements.txt
pip install -r Reporting_Platform/requirements.txt
```
```shell
# Runs MongoDB with port 27017 exposed
docker run -d -t -p 27017:27017 --name mongodb mongo

# Runs Grafana with port 3000 exposed
docker run --name grafana -d -p 3000:3000 grafana/grafana

# Runs MySQL server with port 3306 exposed and root password '0000'
docker run --name mysql -e MYSQL_ROOT_PASSWORD="0000" -p 3306:3306 -d mysql
```
| Krishnakanth Alagiri | Mahalakshumi V | Vignesh S | Nivetha MK |
|---|---|---|---|
| @bearlike | @mahavisvanathan | @Vignesh0404 | @nivethaakm99 |
MIT © Axenhammer
Made with ❤️ by Axenhammer