Detoxify Versions

Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ PyTorch Lightning and 🤗 Transformers. For access to our API, please email us at [email protected].
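A minimal usage sketch, assuming the `detoxify` package from this repo is installed (`pip install detoxify`); the `flag_toxic` helper is illustrative and not part of the library:

```python
# Minimal Detoxify usage sketch (requires `pip install detoxify`).
# The flag_toxic helper below is illustrative, not part of the library.

def flag_toxic(scores: dict, threshold: float = 0.5) -> list:
    """Return the category names whose predicted probability meets the threshold."""
    return sorted(name for name, p in scores.items() if p >= threshold)

if __name__ == "__main__":
    from detoxify import Detoxify  # model weights are downloaded on first use
    # 'original', 'unbiased', and 'multilingual' select the three trained models
    results = Detoxify("original").predict("example text")
    print(flag_toxic(results))
```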

v0.5.2

3 months ago

Full Changelog: https://github.com/unitaryai/detoxify/compare/v0.5.1...v0.5.2

v0.5.1

1 year ago

Full Changelog: https://github.com/unitaryai/detoxify/compare/v0.5.0...v0.5.1

v0.5.0

2 years ago

Full Changelog: https://github.com/unitaryai/detoxify/compare/v0.4.0...v0.5.0

v0.4.0

2 years ago
  • Updated the multilingual model weights used by Detoxify with a model trained on the translated data from the 2nd Jigsaw challenge (as well as the 1st). This model has also been trained to minimise bias and now returns the same categories as the unbiased model. New best AUC score on the test set: 92.11 (89.71 before).
  • All detoxify models now return consistent class names (e.g. "identity_attack" replaces "identity_hate" in the original model to match the unbiased classes).
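For code written against the pre-v0.4.0 class names, a small compatibility shim can rename legacy keys. Only the `identity_hate` → `identity_attack` rename is confirmed by these notes; any further entries in the mapping would be assumptions:

```python
# Rename legacy Detoxify class names to the unified (unbiased) names.
# Only the identity_hate -> identity_attack rename is confirmed by the
# release notes; extend the mapping if your code relied on other names.
LEGACY_TO_UNIFIED = {"identity_hate": "identity_attack"}

def normalize_classes(scores: dict) -> dict:
    """Return scores with legacy class names replaced by the current ones."""
    return {LEGACY_TO_UNIFIED.get(name, name): p for name, p in scores.items()}
```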

v0.4-alpha

2 years ago

New improved weights for the multilingual Detoxify model, trained on the translated data from the 2nd Jigsaw challenge as well as the 1st. Trained with the same labels as the unbiased model.

v0.3.0

2 years ago
  • New improved unbiased model and updated data loaders to replicate the results
  • Tests to check that torch.hub is loading the models
  • Script to convert saved weights to the Detoxify format
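The torch.hub route can be sketched as below; the entrypoint name `toxic_bert` is an assumption here, so check the repo's `hubconf.py` for the exact names exposed:

```python
# Sketch: loading a Detoxify checkpoint via torch.hub.
# The entrypoint name 'toxic_bert' is assumed; the repo's hubconf.py
# is the authoritative list of available entrypoints.

def load_toxic_model(entrypoint: str = "toxic_bert"):
    """Download and return a pretrained checkpoint from the unitaryai/detoxify repo."""
    import torch  # imported here so the helper can be defined without torch installed

    return torch.hub.load("unitaryai/detoxify", entrypoint)

if __name__ == "__main__":
    model = load_toxic_model()
```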

v0.3-alpha

2 years ago
  • New improved weights for the unbiased Detoxify model trained on the datasets provided by the 1st and 2nd challenges

v0.2.2

3 years ago

v0.2.0

3 years ago

v0.1.2

3 years ago

Added lightweight checkpoints trained with ALBERT for the original and unbiased models.