Taylor Swift Threatened Legal Action Against Microsoft Over Racist And Genocidal Chatbot Tay

Taylor Swift tried to block Microsoft from applying the moniker Tay to a chatbot that turned into a depraved racist troll, according to a new book from Microsoft President Brad Smith.

In March 2016, Microsoft introduced a new chatbot in the U.S. that was built to speak with young adults and teens on social media. According to Smith’s book Tools and Weapons, co-written by Microsoft communications director Carol Ann Browne, the company originally introduced the bot as XiaoIce in the Chinese market, where it was used by millions and incorporated into banking, news, and entertainment platforms.

“The chatbot seems to have filled a social need in China, with users typically spending fifteen to twenty minutes talking with XiaoIce about their day, problems, hopes, and dreams,” Smith and Browne wrote. “Perhaps she fills a need in a society where children don’t have siblings?”

However, when Microsoft decided to try it out in America, the AI-based Twitter bot, called Tay, was not as successful. The bot was built to learn how to speak through interacting with others on Twitter, and it posted replies to tweets based on what people were saying to it.

Anyone who has spent any time on Twitter knows that was a doomed experiment. Sure enough, less than 24 hours after it was released, trolls had corrupted Tay. As The Verge wrote at the time, “Twitter taught Microsoft’s AI chatbot to be a racist arsehole in less than a day.”

A tool that was popular and seemingly beneficial in China failed in the U.S. because Microsoft didn’t anticipate how toxic and racist Americans on social media could be.

Microsoft took the account down immediately.

The next day, the company published a blog post apologising for Tay’s unseemly behaviour. “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” the post said.

The account didn’t just upset people who were offended by its racist parroting; it also seems to have upset Taylor Swift. As Smith recalls in his book:

I was on vacation when I made the mistake of looking at my phone during dinner. An email had just arrived from a Beverly Hills lawyer who introduced himself by telling me, “We represent Taylor Swift, on whose behalf this is directed to you.”… He went on to state that “the name ‘Tay,’ as I’m sure you must know, is closely associated with our client.”… The lawyer went on to argue that the use of the name Tay created a false and misleading association between the popular singer and our chatbot, and that it violated federal and state laws.

Smith adds that Microsoft’s trademark lawyers disagreed, but Microsoft was not interested in fighting Swift, so the company immediately began discussing a new name. He took the incident as a noteworthy example of “differing cultural practices” between the U.S. and China.

Later in 2016, the bot relaunched as Zo, a version of Tay that was trained to connect with teens but programmed to avoid speaking about politics, race, and religion. If you’ve never heard of Zo, it’s because Zo is boring.

Sounds like they changed the name but actually made the bot more like Taylor Swift.

Microsoft declined to comment any further for this story. Swift did not respond to a request for comment.
