Facebook Is Using New AI Tools To Detect Child Porn And Catch Predators

Facebook is using new machine learning tech to flag child exploitation and nudity on the service, the company’s global safety head Antigone Davis announced yesterday. In the last quarter alone, Davis says the new tools helped the company delete “8.7 million pieces of content” that “violated our child nudity or sexual exploitation of children policies”.

Facebook leans on both AI and humans to weed out its most vile content. It has previously deployed other AI tools to flag inappropriate and violating content, including photo-matching tech.

On top of that, the previously unannounced tools are being used “to proactively detect child nudity and previously unknown child exploitative content when it’s uploaded,” Davis wrote in a blog post.

She added that this, as well as other tech, will be used to “more quickly” find this type of content and report it to the National Center for Missing and Exploited Children (NCMEC).

The new tools will also be used to find “accounts that engage in potentially inappropriate interactions with children” on the platform. We have reached out to Facebook for information on what constitutes a potentially inappropriate interaction, but we did not immediately hear back.

Davis told Reuters that this system will look at factors including how often someone has been blocked and whether they try to contact a lot of children.
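Facebook hasn’t published how this scoring actually works, but a system weighing those two signals might look something like the toy sketch below. Every name, weight, and threshold here is invented for illustration; this is not Facebook’s real logic.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical per-account signals of the kind Davis describes."""
    times_blocked: int      # how often other users have blocked this account
    minors_contacted: int   # distinct minor accounts messaged recently
    total_contacted: int    # distinct accounts messaged recently

def risk_score(s: AccountSignals) -> float:
    """Toy weighted score; the weights are made up for this example."""
    minor_ratio = s.minors_contacted / max(s.total_contacted, 1)
    return 2.0 * s.times_blocked + 10.0 * minor_ratio * s.minors_contacted

def flag_for_review(s: AccountSignals, threshold: float = 50.0) -> bool:
    """Accounts over the (arbitrary) threshold get surfaced to humans."""
    return risk_score(s) >= threshold
```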

Davis told Reuters that the “machine helps us prioritise” and “more efficiently queue” potentially violating content for the company’s reviewers, and that Facebook may use the same tech to help moderate Instagram. A report last month revealed that Instagram’s video service IGTV had recommended videos of child exploitation.
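Davis didn’t detail the queuing mechanism, but “prioritising” flagged content for human reviewers is classically done with a priority queue ordered by a classifier’s confidence score. A minimal sketch, with every identifier and score invented for illustration:

```python
import heapq

class ReviewQueue:
    """Toy moderation queue: higher-scored items surface to reviewers first."""

    def __init__(self):
        self._heap = []      # max-heap simulated by negating scores
        self._counter = 0    # tie-breaker keeps ordering stable for equal scores

    def push(self, content_id: str, score: float) -> None:
        # 'score' stands in for an ML classifier's output probability.
        heapq.heappush(self._heap, (-score, self._counter, content_id))
        self._counter += 1

    def pop_most_urgent(self) -> str:
        _, _, content_id = heapq.heappop(self._heap)
        return content_id

queue = ReviewQueue()
queue.push("post-123", score=0.97)   # classifier is nearly certain
queue.push("post-456", score=0.40)
print(queue.pop_most_urgent())       # -> "post-123"
```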

We’ve reached out to Facebook to ask if its new tools will affect the duties of its moderators. There have been numerous reports detailing the psychological toll this hellish job takes on humans who have to sift through graphic and violent content.

In fact, a former Facebook moderator recently sued the company over “debilitating PTSD”, alleging that the job caused her severe psychological trauma and that the company didn’t provide contractors with the mental health services they needed to cope.

Facebook’s own machine learning detection programs have proven to be both biased and flawed in the past. The most notorious example was Facebook’s ban (and subsequent reinstatement) of the Pulitzer Prize-winning photograph of Vietnamese children fleeing a South Vietnamese napalm attack; the photo features a severely burned, nude young girl.

“We’d rather err on the side of caution,” Davis told Reuters, noting that its tech to flag child exploitative content may mess up, but that people can appeal these screwups. Facebook reportedly said this new program will make exceptions for art and history. That would include the aforementioned Pulitzer Prize-winning photo.

