Introducing an on-chain AI model called "Redactor" that detects abusive language in text

Hi everyone,

Our team just deployed “Redactor”, a fully on-chain abusive-language detector. You can test it here: hpbsv-4aaaa-aaaao-a3kta-cai.icp0.io

  1. We use GPT-2’s tokenizer, hosted on-chain as a canister, to split input text into tokens.

  2. Each token is scored for its likelihood of being abusive, using word and position embeddings.

  3. The level of blur applied to each token is determined by its score, obscuring harmful content while leaving the rest of the text readable (see the sketch after this list).
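For anyone curious how the steps fit together, here is a minimal off-chain sketch in Python. It assumes the Hugging Face `transformers` GPT-2 tokenizer and a placeholder `score_fn` standing in for the on-chain scoring model; the linear score-to-blur mapping is our illustration, not necessarily Redactor's actual implementation:

```python
from transformers import GPT2TokenizerFast

# Hypothetical off-chain sketch; in Redactor the tokenizer and model
# run inside canisters on the Internet Computer.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def blur_levels(text, score_fn, max_blur_px=8):
    """Tokenize text and map each token's abuse score to a blur radius.

    score_fn is a placeholder for the scoring model: it takes a list of
    token ids and returns one score in [0, 1] per token.
    """
    ids = tokenizer.encode(text)
    tokens = tokenizer.convert_ids_to_tokens(ids)
    scores = score_fn(ids)
    # Assumption: blur radius scales linearly with the score; the real
    # mapping could be non-linear or thresholded.
    return [(tok, round(s * max_blur_px)) for tok, s in zip(tokens, scores)]

# Example with a dummy scorer that flags nothing:
print(blur_levels("hello world", lambda ids: [0.0] * len(ids)))
```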

The system uses a convolutional neural network to take the context of nearby words into account, which gives more accurate and nuanced detection than scoring each token in isolation. A minimal sketch of that idea is below.
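This PyTorch sketch shows the general shape: word and position embeddings feed a 1D convolution whose kernel spans neighbouring tokens, followed by a per-token sigmoid score. The dimensions, kernel size, and layer layout are illustrative assumptions, not Redactor's real architecture:

```python
import torch
import torch.nn as nn

class TokenAbuseScorer(nn.Module):
    # Hypothetical sketch: per-token scoring from word + position
    # embeddings, with a 1D convolution mixing in nearby tokens.
    def __init__(self, vocab_size=50257, max_len=512, dim=128, kernel=5):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.pos_emb = nn.Embedding(max_len, dim)
        # "same" padding keeps one score per input token
        self.conv = nn.Conv1d(dim, dim, kernel, padding=kernel // 2)
        self.head = nn.Linear(dim, 1)

    def forward(self, token_ids):  # token_ids: (batch, seq)
        pos = torch.arange(token_ids.size(1), device=token_ids.device)
        x = self.word_emb(token_ids) + self.pos_emb(pos)  # (batch, seq, dim)
        # Conv1d wants (batch, channels, seq); the kernel window is what
        # lets each token's score depend on its neighbours.
        x = torch.relu(self.conv(x.transpose(1, 2)).transpose(1, 2))
        return torch.sigmoid(self.head(x)).squeeze(-1)  # (batch, seq) in [0, 1]

# Example: score two tokens (untrained weights, so values are arbitrary)
scorer = TokenAbuseScorer()
print(scorer(torch.tensor([[31373, 995]])))
```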

Our lead AI engineer @jeshli explains how the model works in detail in the video posted on our Twitter: https://twitter.com/ModclubApp/status/1778400052848804088

Give it a try; we would love to hear the community’s feedback.

Thanks,
Modclub Team
