WokeGPT: Study Says ChatGPT Shows Bias

A study from researchers at the University of East Anglia in the UK suggests ChatGPT demonstrates liberal bias in some of its responses. Tech companies spent recent years desperately trying to prove their systems aren’t part of some left-wing political conspiracy. If the study’s findings are correct, ChatGPT’s apparent liberal leanings add to growing evidence that the people who make this generation of AI chatbots can’t control them, at least not entirely.

The researchers asked ChatGPT to answer a series of questions about political beliefs as supporters of liberal parties in the United States, United Kingdom and Brazil might answer them. Then they asked it to answer the same set of questions with no special instructions, and compared the two sets of responses.
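For illustration, here’s a minimal sketch of that comparison in Python, assuming the openai SDK (v1 or later) and an API key in the environment. The question wording, persona prompt and model name are invented for this example rather than taken from the study, and a real replication would repeat each question many times to average out run-to-run randomness.

```python
# A minimal sketch of the study's comparison idea, not the researchers' actual code.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

QUESTION = ("Do you agree or disagree: the government should do more to "
            "redistribute wealth? Answer in one sentence.")

def ask(persona: str | None) -> str:
    """Ask the question, optionally with a persona instruction prepended."""
    messages = []
    if persona:
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": QUESTION})
    resp = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    return resp.choices[0].message.content

# Persona-conditioned answer vs. the default, no-instructions answer.
print("As a partisan:", ask("Answer as a supporter of the US Democratic Party would."))
print("By default:   ", ask(None))
```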

The study concluded that ChatGPT revealed a “significant and systematic political bias toward the Democrats in the U.S., [leftist president] Lula in Brazil, and the Labour Party in the U.K.,” according to the Washington Post.

Of course, it’s possible that the engineers at ChatGPT’s maker OpenAI intentionally skewed the chatbot’s political agenda. Many loud figures on the American right want you to believe that Big Tech is forcing its leftist attitudes on the world. But OpenAI is running a business, and businesses, in general, try to avoid this kind of controversy. It’s far more likely that ChatGPT is demonstrating biases that it picked up from the training data used to build it.

In response to questions, an OpenAI spokesperson pointed to a line in a company blog post titled “How Should AI Systems Behave?” “Many are rightly worried about biases in the design and impact of AI systems. We are committed to robustly addressing this issue and being transparent about both our intentions and our progress,” OpenAI wrote. “Our guidelines are explicit that reviewers should not favor any political group. Biases that nevertheless may emerge from the process described above are bugs, not features.” The company shared a selection from its behaviour guidelines for its AI models.

This isn’t the first time academics have dredged up biases in the nebulous ramblings of our would-be AI overlords. Earlier this month, researchers from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University found a wide range of political favouritism depending on which chatbot you talk to, with significant differences even among different AIs made by the same company.

For example, that study found leftist leanings in OpenAI’s GPT-2 and GPT-3 Ada, while GPT-3 Da Vinci trended farther to the right. The researchers tested 14 AI language models, and concluded OpenAI’s ChatGPT and GPT-4 leaned the most towards left-wing libertarianism, while Meta’s LLaMA was the most right-wing authoritarian.

Even before academics stepped in with their more rigorous findings, cries about liberal bias in chatbot tech were old news. Sen. Ted Cruz and others raised a fuss when the internet discovered that ChatGPT would write a nice poem about Joe Biden but not Donald Trump. Elon Musk, who actually co-founded OpenAI, told Tucker Carlson he plans to build a rival product called “TruthGPT,” which he described as a “maximum truth-seeking AI” (about as meaningless a promise as you could possibly make). Musk is fond of calling ChatGPT “WokeGPT.”

In general, the way it all works is this: companies like OpenAI have large language models such as ChatGPT ingest massive sets of data, presumably written by actual human beings, and use that data to spin up a model that can respond to almost any question based on a statistical analysis of the text it has seen. However, these systems are so opaque that it’s impossible to predict exactly what they’ll say in response to a given prompt. The companies work hard to set up guardrails, but it’s often trivial for users to break past them and get the chatbots to do things their makers really wish they wouldn’t.
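To make the “statistical analysis” idea concrete, here’s a toy Python sketch (my illustration, not how any production model is actually built): count which word follows which in a scrap of training text, then predict the most common continuation. Real models use neural networks trained on billions of tokens, but the core principle is the same, and so is the weakness: whatever patterns dominate the training data dominate the output.

```python
from collections import Counter, defaultdict

# A tiny "training set" standing in for the web-scale text real models ingest.
corpus = "the cat sat on the mat and the cat chased the dog".split()

# Count bigrams: for each word, how often each other word directly follows it.
following: defaultdict[str, Counter] = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    # Return the statistically most common continuation seen in training.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat", the most frequent word after "the" above
```

Skew the corpus and the predictions skew with it; no engineer has to touch a thing.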

If you ask ChatGPT to say something racist, it will generally say no. But a study published in April, for example, found you could get ChatGPT to spit out hate speech just by asking it to act like a “bad person.” Bizarrely, the researchers found the toxicity of ChatGPT’s responses also increased dramatically if you asked it to adopt the personality of historical figures like Muhammad Ali.

Security researchers at IBM said in August that they were able to successfully “hypnotise” leading chatbots to give out dangerous and incorrect advice. IBM said it tricked ChatGPT into leaking confidential financial information, generating malicious code, encouraging users to pay ransoms, and even advising drivers to plough through red lights. The researchers were able to coerce the models—which include OpenAI’s ChatGPT models and Google’s Bard—by convincing them to take part in multi-layered, Inception-esque games where the bots were ordered to generate wrong answers in order to prove they were “ethical and fair.”

Then there’s the fact that, by some measures, ChatGPT appears to be getting dumber and less useful. A July study from Stanford and UC Berkeley claimed that GPT-4 and GPT-3.5 respond differently than they did just a few months prior, and not always for the better. The researchers found that GPT-4 was giving far less accurate answers to some more complicated math questions. Previously, the system could correctly answer questions about large prime numbers nearly every time it was asked, but more recently it answered the same prompt correctly just 2.4% of the time. ChatGPT also appears to be far worse at writing code than it was earlier this year.
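For a sense of how researchers measure that kind of drift, here’s a hedged sketch, not the paper’s actual code: pose the same primality questions to two dated snapshots of a model and score the answers against ground truth. The prompt wording and test numbers are illustrative assumptions; the snapshot names refer to OpenAI’s March and June 2023 releases, and sympy provides the ground-truth check.

```python
# A sketch of a drift check in the spirit of the Stanford/Berkeley study;
# prompts, test values, and scoring are illustrative, not the paper's own.
from openai import OpenAI
from sympy import isprime  # ground truth for primality

client = OpenAI()
NUMBERS = [17077, 17078, 19997, 20011]  # illustrative test values

def accuracy(model: str) -> float:
    """Fraction of primality questions the given model snapshot gets right."""
    correct = 0
    for n in NUMBERS:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user",
                       "content": f"Is {n} a prime number? Answer yes or no."}],
        )
        answer = resp.choices[0].message.content.strip().lower()
        correct += answer.startswith("yes") == isprime(n)
    return correct / len(NUMBERS)

for snapshot in ("gpt-4-0314", "gpt-4-0613"):  # March vs. June 2023 snapshots
    print(snapshot, accuracy(snapshot))
```

Scoring dated snapshots against each other like this is broadly how the study arrived at figures like the 2.4% above.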

It’s unclear whether changes to the AI are actually making the chatbot worse, or if users are simply getting wiser to the limitations of these systems.

None of this suggests that OpenAI, Google, Meta and other companies are engaging in some kind of political conspiracy; rather, it suggests that AI chatbots are more or less out of their makers’ control at this juncture. We’ve heard a lot, sometimes from the companies themselves, that AI could someday destroy the world. That seems unlikely if you can’t even get ChatGPT to answer basic math problems with any level of consistency, though it’s difficult for laypeople to say what the hard technical limitations of these tools are. Perhaps they’ll bring on the apocalypse, or maybe they won’t get much farther than they are right now.

