
Artificial Intelligence Tool Introduced to Detect and Remove Caste-Based Abuse from Social Media Platforms

Today, social media platforms have become the biggest hosts of a diverse range of social interactions. They carry a spectrum of information, from the personal and the mundane to the sharing of political opinions and the building of communities, and these online communities have in turn become fertile ground for groups organised around ethnicity or caste. From cyberbullying to fake news, the platforms have also seen a rampant rise in abusive behaviour. To keep such behaviour at bay, Social Media Matters has partnered with Spectrum Labs to launch a behaviour identification model that can detect caste discrimination within online communities.

The model, available through Spectrum’s behaviour identification solution, is designed to be flexible and to fit into any workflow. Spectrum’s deployment options include a SaaS offering or an on-premise binary. The model is currently trained to detect caste discrimination within all forms of text data, including status updates, messages, tweets, and comments. Both deployments support a streaming API or a batch mode for data processing.
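The article does not document Spectrum’s actual API, so the following Python sketch is purely illustrative: the endpoint URL, authentication scheme, field names, and response shape are all assumptions. It shows the two integration patterns described above, a streaming-style call for scoring individual messages as they arrive and a batch call for processing a backlog in bulk.

```python
import requests

# Hypothetical endpoint and schema, for illustration only; Spectrum's
# real API surface is not described in this article.
API_URL = "https://api.example-spectrum.com/v1/classify"
API_KEY = "YOUR_API_KEY"


def classify_text(text: str) -> dict:
    """Streaming-style call: score one piece of text (status update,
    message, tweet, comment) in real time."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"text": text, "behaviours": ["caste_discrimination"]},
        timeout=5,
    )
    resp.raise_for_status()
    # Assumed response shape, e.g. {"caste_discrimination": {"flagged": true}}
    return resp.json()


def classify_batch(texts: list[str]) -> list[dict]:
    """Batch mode: score many items in a single request."""
    resp = requests.post(
        API_URL + "/batch",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"items": [{"id": str(i), "text": t} for i, t in enumerate(texts)]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]
```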

Spectrum’s behaviour models are all designed to be updated continually over time. Spectrum works with customers to iterate on the baseline model on a regular cadence (typically monthly) to ensure that the model flags the content each customer needs. This process ensures that the results are customized for each customer, so they can trust them.

Detecting caste discrimination

Real-time: Recognize and respond to toxicity immediately, before it evolves into an even bigger problem.

Multi-language: Spectrum Labs has a patent-pending approach to international language that allows it to scale across regions.

Secure deployment: Offering the power to understand the community while maintaining data privacy requirements.

The model’s results are surfaced in a way that customers can plug into existing moderation efforts. This includes webhooks into internal systems to manage users (warn, suspend, ban, etc.), manage content (remove a post, limit who can see it, etc.), send alerts, route content for moderator review, and even pipe into analytics platforms to see trends over time.
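As an illustration of how such flagged results might be consumed downstream, here is a minimal webhook handler sketched in Python with Flask. The payload shape, score names, and thresholds are assumptions rather than Spectrum’s documented format, and the moderation actions are stubs standing in for a platform’s own internal systems.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Illustrative thresholds; a real deployment would tune these per community.
REVIEW_THRESHOLD = 0.6
REMOVE_THRESHOLD = 0.9


@app.route("/webhooks/moderation", methods=["POST"])
def handle_flagged_content():
    """Receive a (hypothetical) flagged-content event and route it."""
    event = request.get_json()
    score = event["scores"].get("caste_discrimination", 0.0)

    if score >= REMOVE_THRESHOLD:
        remove_post(event["content_id"])       # manage content
        warn_user(event["user_id"])            # manage users
    elif score >= REVIEW_THRESHOLD:
        queue_for_review(event["content_id"])  # send for moderator review
    log_event(event)                           # pipe into analytics for trends

    return jsonify({"status": "ok"})


# Stubs standing in for a platform's internal systems.
def remove_post(content_id): ...
def warn_user(user_id): ...
def queue_for_review(content_id): ...
def log_event(event): ...
```

The routing logic itself is the point here: a single webhook endpoint can fan out to every action the article lists, from user management to analytics, without the moderation pipeline needing to know how the model works.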

Speaking on the launch, Amitabh Kumar, Founder of Social Media Matters, said, “Caste discrimination is one of the oldest evils still existing in Indian society. Sadly, it is also reflected in cyber spaces. Together with Spectrum, we have created an Artificial Intelligence tool that will help social media platforms, such as Facebook, TikTok, Twitter, and Instagram, detect and remove caste-based abuse from their platforms. It will decrease the time taken for detection and reduce the stress human moderators go through in constantly dealing with abuse. Initially, the model is trained to work with several languages: English, Hindi, and a Hindi-English mix. We’ll continue to upgrade it further.”

Spectrum also offers a set of moderator tools through a UI called Guardian. This UI includes four main features: Moderation Queue, Automation Builder, Retraining, and Analytics. Results can either be piped into this Spectrum offering or plugged into existing moderation efforts.

Speaking on the collaboration, Justin Davis, CEO of Spectrum Labs, said, “The hardest part of building an AI model that can effectively detect caste discrimination online is really defining and understanding not only what caste discrimination is, but also what it is not. Ami and his brilliant team at Social Media Matters have dedicated themselves to raising awareness about injustice and discrimination in many forms, so we could not have asked for better partners: their insights and expertise helped us navigate the nuances, history, and politics of caste discrimination, to build a tool that can combat it effectively and inclusively. We were humbled and honored to work with them.”
