Social media companies are under pressure to foster safer environments for their communities, and many are turning to artificial intelligence for help. Instagram, for example, used machine learning to build an enhanced comment filter aimed at curbing uncivil behavior on the platform.

“The whole idea of machine learning is that it’s far better about understanding those nuances than any algorithm has in the past, or than any single human being could,” Instagram co-founder and then-CEO Kevin Systrom told Wired in 2017.

The Verge reports that Instagram will extend this feature to photos, captions and live videos, using it to “proactively detect bullying” and flag content before it reaches human moderators. The rollout coincides with National Bullying Prevention Month in October in the United States and arrives just before Anti-Bullying Week in the United Kingdom.

This product announcement marks the first under Adam Mosseri, the new Instagram chief who took the reins following co-founders Kevin Systrom and Mike Krieger’s swift exit in September. Their departure was clouded by rumors of a strained relationship between the duo and parent company Facebook over differences in product vision.

Instagram is widely regarded as the darling of Facebook’s suite of products, particularly amid souring public sentiment toward its parent company. Maintaining that positive reputation is a goal well served by AI’s ability to sift out unsavory content.

While using machine learning in this context is cost-effective, it is no panacea for offensive content. Human moderators are still necessary because context and nuance matter. These bullying filters take some of the load off, but Instagram is wise to give human moderators the final say.
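For readers curious how a "model flags, human decides" workflow might be wired together, here is a minimal, purely illustrative Python sketch. The word-list scorer, threshold, and every name in it are stand-ins invented for this example; Instagram's actual classifier, thresholds, and review tooling are not public.

# Illustrative sketch of a "model flags, human decides" moderation flow.
# All names, the scorer, and the threshold are hypothetical stand-ins.

from dataclasses import dataclass

BLOCK_LIST = {"loser", "ugly", "worthless"}  # toy stand-in vocabulary


def bullying_score(comment: str) -> float:
    """Toy scorer: fraction of words found in a small block list."""
    words = [w.strip(".,!?").lower() for w in comment.split()]
    if not words:
        return 0.0
    return sum(w in BLOCK_LIST for w in words) / len(words)


@dataclass
class Triage:
    comment: str
    score: float
    flagged: bool  # True means "queued for human review", not "removed"


def triage(comment: str, threshold: float = 0.2) -> Triage:
    """The automated filter only flags; it never makes the final call."""
    score = bullying_score(comment)
    return Triage(comment, score, flagged=score >= threshold)


def moderator_decision(item: Triage, remove: bool) -> str:
    """A human moderator decides the fate of anything the model flagged."""
    if not item.flagged:
        return "visible"
    return "removed" if remove else "visible"


if __name__ == "__main__":
    item = triage("you are such a loser")
    print(item.score, item.flagged)               # flagged for review
    print(moderator_decision(item, remove=True))  # human confirms removal

The point of the sketch is the division of labor: the model's only job is to route likely bullying into a review queue, while removal remains a human decision.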

The analysis of the photos themselves to spot bullying has raised a few eyebrows. An Instagram spokesperson tells The Verge that split-screen images drawing unfavorable comparisons between people are one example of what the system might flag. The company's decision to stay vague about exactly what its AI looks for likely helps deter bullies from brainstorming workarounds.
