Tutorial Resources

Resources and tools for the Tutorial - "Hate speech detection, mitigation and beyond" presented at ICWSM 2021



Hate speech detection, mitigation and beyond

The resources and demos associated with the tutorial "Hate speech detection, mitigation and beyond", presented at ICWSM 2021 and AAAI 2022, are noted here.

Abstract :bookmark:

Social media sites such as Twitter and Facebook have connected billions of people and given users the opportunity to share their ideas and opinions instantly. That said, they have also brought several ill consequences, such as online harassment, trolling, cyber-bullying, fake news, and hate speech. Of these, hate speech presents a unique challenge, as it is deeply ingrained in our society and is often linked with offline violence. Social media platforms rely on human moderators to identify hate speech and take the necessary action, but with the prolific increase in such content on social media, many are turning toward automated hate speech detection and mitigation systems. This shift brings several challenges, and hence is an important avenue for the computational social science community to explore.

Contributions and achievements :tada: :tada:

Other Resources

  • A dataset resource created and maintained by Leon Derczynski and Bertie Vidgen. Click the link here
  • This resource collates all the resources and links used in this information hub, for both teachers and young people. Click the link here

A few demos :abacus:

We also provide some demos so that social scientists can use our open-source models. Please provide feedback in the issues.

  • Multilingual abuse predictor Open In Colab - This presents a suite of models that try to predict abuse in different languages, with each model built on a dataset in its respective language. You can upload a file in the specified format and get back the predictions of these models.
  • Rationale predictor demo Open In Colab - This is a model trained with both a rationale head and a classifier head. Along with predicting the abusive or non-abusive label, it can also predict the rationales, i.e., the parts of the text that are abusive according to the model.
  • Counter speech detection demo Open In Colab - These are some models that can detect counter speech. The models are simple in design. Link to the original GitHub repository
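
To give a flavor of what the rationale predictor returns, here is a minimal, self-contained sketch of turning per-token rationale scores into abusive text spans. This is an illustration only, not the tutorial's actual code: the tokens, scores, threshold, and the `extract_rationales` helper are all made up for the example.

```python
def extract_rationales(tokens, scores, threshold=0.5):
    """Return contiguous token spans whose rationale score meets the threshold.

    A rationale-head model assigns each token a score indicating how strongly
    it supports the "abusive" label; adjacent high-scoring tokens are merged
    into one span.
    """
    spans, current = [], []
    for tok, score in zip(tokens, scores):
        if score >= threshold:
            current.append(tok)
        elif current:
            spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans


# Toy example: scores are invented for illustration.
tokens = ["you", "are", "a", "terrible", "person", "honestly"]
scores = [0.10, 0.20, 0.60, 0.90, 0.80, 0.30]
print(extract_rationales(tokens, scores))  # → ['a terrible person']
```

The real demo works on model outputs rather than hand-written scores, but the span-merging step is the same idea: the classifier head gives the label, and the rationale head points at which words drove it.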

:rotating_light: Check the individual Colab demos to learn more about how to use these tools. These models may carry biases and should therefore be used with appropriate caution. :rotating_light:

:thumbsup: The repo is still in active development. Feel free to create an issue for the demos as well as the Notion page that we shared!! :thumbsup:

README Source: hate-alert/Tutorial-Resources