It doesn't take a genius to know that deepfakes proliferating on social media could spell trouble. On that front, Twitter announced today that it has drafted a policy to deal with "synthetic and manipulated" media, and it wants public input before the policy becomes official.
Twitter defines synthetic or manipulated media as "any photo, audio, or video that has been significantly altered or fabricated in a way that intends to mislead people or changes its original meaning." As for its proposed policy, Twitter says it may label such tweets, add links so that users can see why a tweet is believed to be a deepfake, or warn users before they share such tweets.
If you're wondering under what circumstances Twitter would remove tweets containing deepfakes, the tweet would have to "threaten someone's physical safety or lead to other serious harm." And even then, Twitter says it might remove it, not that it absolutely would.
This, of course, isn't Twitter's final policy on the matter. Right now it's soliciting public feedback before making a decision. Users can either tweet using the #TwitterPolicyFeedback hashtag or take an online survey. The survey is available in English, Hindi, Arabic, Spanish, Portuguese, and Japanese, and takes a few minutes to complete. It essentially asks users to rate how strongly they agree or disagree that Twitter is responsible for removing or labeling deepfakes, as well as for warning users that they may be sharing manipulated content. You can yell (cough, tweet, cough) at Twitter until November 28.
Right now it's unclear how Twitter plans to detect or verify deepfakes on its platform. For that, Twitter is also soliciting partners via a form. As for enforcement, Twitter says it'll start training its teams once it's reviewed the public feedback.
While the overwhelming majority of deepfakes online are nonconsensual porn, there are fears that manipulated media could have a serious impact on the 2020 elections. In May, President Trump retweeted a manipulated video of House Speaker Nancy Pelosi that made her appear to slur her words. The tweet, of course, went viral.
The clip was not a deepfake, but it did become an example for critics to rally around. An actual deepfake video of Facebook CEO Mark Zuckerberg also went viral in June, after the social media platform refused to take down the Pelosi video. In the past two months, California and Texas have both banned politically motivated deepfakes.