In news that makes me question my entire YouTube recommendations history, a new study has found that 71% of YouTube videos flagged for misinformation and sexualised content were recommended by the platform’s algorithm. With an estimated 700 million people watching algorithm-recommended videos every day, according to the study, and in the middle of a global health crisis, that is particularly concerning.
In a study published this month, not-for-profit organisation Mozilla collected data from 37,380 YouTube users and found that they had flagged 3,362 videos recommended to them by the YouTube algorithm for “misinformation, violent or graphic content, hate speech, and spam [and] scams.” The videos came from 91 countries and were released between July 2020 and May 2021.
“Our volunteers reported everything from Covid fear-mongering to political misinformation to wildly inappropriate ‘children’s’ cartoons,” the study said, adding that reports of misinformation made up 20% of total video complaints.
Tragically, these clips reportedly “make 70% more views per day than other videos watched.” In the roughly five months after they were reported, they racked up a total of 160 million views, an average of 760,000 per video.
Videos recommended by the algorithm are also 40% more likely to include misinformation or sexualised content than videos users actively search for. Extremely big yikes.
For every 10,000 videos recommended to them, Australian users reported seven cases of misinformation. By contrast, users in the United States reported close to 15 cases per 10,000 recommended videos, and Brazil, the country with the highest rate of reported videos, topped 20. Interestingly, the seven countries with the highest rates are all places where English is not the primary language.
Back in 2018, the Turnbull Government established the Electoral Integrity Assurance Taskforce to investigate the risks misinformation poses to the integrity of the electoral system. But while much of the misinformation on the platform is politically charged, it’s likely that in the era of COVID-19 a large share of it in 2021 relates to unverified claims about the virus.
According to YouTube’s COVID-19 medical misinformation policy, the platform will remove content that denies the existence of COVID-19, discourages people from seeking medical advice, or makes claims about the virus that have not been fact-checked by local health authorities or the World Health Organisation. However, some content that violates these misinformation policies may be allowed to stay up.
“We may allow content that violates the misinformation policies noted on this page if that content includes context that gives equal or greater weight to countervailing views from local health authorities or to medical or scientific consensus,” YouTube states.
“We may also make exceptions if the purpose of the content is to condemn or dispute misinformation that violates our policies. This context must appear in the images or audio of the video itself. Providing it in the title or description is insufficient.”
In a statement shared on Friday regarding COVID-19 misinformation, NSW Health urged “people to use trusted and credible sources of information to inform them about the most up-to-date Covid-19 information in NSW.”