Researchers Made A QAnon AI Bot Because Things Aren’t Already Bad Enough, Apparently

So you may have heard about GPT-3, the new AI language model that can be trained to produce human-like text. Since it launched, people have been testing the limits of this exciting, powerful tool. And their latest experiment? Teaching it to parrot the ridiculous and dangerous QAnon conspiracy theory, of course.

Yesterday, researchers from the Middlebury Institute of International Studies at Monterey released a report investigating how extremists could weaponise neural language models like GPT-3.

As part of this, they experimented with forcing the GPT-3 model to “integrate its innate foundation of niche knowledge with ideological bias”. In plainer terms: they fed it conspiracy gobbledegook to see if it would spit it back at them.
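For the technically curious, here’s roughly what that looks like in practice. The report doesn’t publish its code, so the below is only a sketch of the general fine-tuning technique: it uses the open-source GPT-2 model via Hugging Face’s transformers library as a stand-in (GPT-3 itself isn’t publicly downloadable), and the training file name is a placeholder.

```python
# A rough sketch of the general technique, NOT the researchers' actual code.
# GPT-2 stands in for GPT-3, and "conspiracy_corpus.txt" is a placeholder
# name for whatever raw text the model is being fine-tuned on.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    TextDataset,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Chop the raw text file into fixed-length training chunks.
train_dataset = TextDataset(
    tokenizer=tokenizer, file_path="conspiracy_corpus.txt", block_size=128
)
# mlm=False means ordinary left-to-right language modelling, GPT-style.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-gpt2", num_train_epochs=1),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()
trainer.save_model("finetuned-gpt2")
```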

And it turns out, it would. One of the report’s co-authors, Alex Newhouse, shared on Twitter that they had successfully taught a bot to espouse the views held by QAnon believers.

“We’ve spent the last few months experimenting with @OpenAI’s GPT-3 language model, assessing its potential abuse by extremists to scale up synthetic content generation. For example, we intentionally built a Q bot,” he said.

For the report, the researchers compared GPT-3’s answers to questions about QAnon before and after training it on conspiracy content.

Here’s what the bot wrote when asked about QAnon before any of that training.

Pretty normal, right? The answers are neutral and, well, based on reality. Here are the bot’s answers to the same questions after it was trained on QAnon content.

The bot was well and truly Q-pilled. The researchers were also able to replicate the results when the model was trained on neo-Nazi forums, mass shooter manifestos and Russian antisemitic online posts.
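To give a sense of how that before-and-after comparison works, here’s another rough sketch (again, not the researchers’ actual code): load the original model and the fine-tuned copy, feed both the same questions, and compare the completions. The example questions and the finetuned-gpt2 directory (saved in the sketch above) are stand-ins.

```python
# A sketch of the before/after comparison, not the report's code.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
models = {
    "before": AutoModelForCausalLM.from_pretrained("gpt2"),
    "after": AutoModelForCausalLM.from_pretrained("finetuned-gpt2"),
}

# Example questions standing in for the ones used in the report.
for question in ["Who is Q?", "What is the storm?"]:
    inputs = tokenizer(f"Q: {question}\nA:", return_tensors="pt")
    for label, model in models.items():
        out = model.generate(
            **inputs,
            max_new_tokens=40,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
        )
        print(label, "->", tokenizer.decode(out[0], skip_special_tokens=True))
```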

So why are researchers making a QAnon AI bot?

According to the researchers, the point of making these radicalised bots was to raise the alarm about the potential damage these technologies could do.

Newhouse said that GPT-3 makes it extremely easy to build an “extreme, but emotionally compelling, chatbot”. It could also be used to easily generate manifestos aimed at radicalising people into extremist beliefs.

The researchers say that while GPT-3’s developer, OpenAI, has taken good steps to stop people from using it in nefarious ways, this kind of technology still presents a significant risk going forward.

“We now need strong advocacy for better norms, education, and policy to preempt the coming synthetic text wave,” Newhouse said.

