As hundreds of millions of people flood social networks, tech companies are struggling to weed out harassment on their platforms — and so they’re turning to machines for help. On Tuesday, Instagram announced it was doing just that.
“While the majority of photos shared on Instagram are positive and bring people joy, occasionally a photo is shared that is unkind or unwelcome,” Instagram’s new lead Adam Mosseri wrote in a blog post.
“We are now using machine learning technology to proactively detect bullying in photos and their captions and send them to our Community Operations team to review.”
Instagram already lets users report accounts that appear intended to bully or harass, behaviour that violates its Community Guidelines, though it remains unclear how effectively or quickly such accounts are penalised or shut down.
Mosseri noted in Tuesday’s blog post that “many people who experience or observe bullying don’t report it” and that rolling out these machine-learning efforts alongside human ones will help the team “identify and remove significantly more bullying”.
This new tech has already been deployed and will continue to roll out in the weeks ahead, according to the company.
Instagram will also add its bullying comment filter — which “hides comments containing attacks on a person’s appearance or character, as well as threats to a person’s well being or health” — to all live videos. It’s already available in Instagram’s Feed, Explore and Profile sections. The service also added a “kindness camera effect to spread positivity”.
This is Mosseri’s first announcement as head of the photo-sharing service since its co-founders’ contentious departure last month.
Instagram, which is owned by Facebook, has hardly earned its parent company’s reputation for serving as a catalyst for psychological devastation, violence, and the destruction of democracy, but it still has its issues, with bullying at the top of that list.
Research from anti-bullying non-profit Ditch the Label last year found that, of 10,000 respondents aged 12 to 20, seven per cent said they had been bullied on Instagram. It ranked as the number one social network for cyberbullying — slightly higher than Facebook itself.
While leaning on algorithms to handle problems at this scale isn’t inherently a bad thing, it’s also far from a sweeping solution. Algorithms aren’t free from bias, and they still struggle with the nuance and context of human language. They’ve screwed up in the past, and they have yet to prove they’re equipped to flawlessly tackle an issue as expansive, complex and sensitive as cyberbullying.