Microsoft Executive Apologises For Not Understanding How The Internet Works

One day after trolls transformed Microsoft's chatbot Tay into a ditzy, Holocaust-denying monster, the company has issued an apology for failing to realise that people on the internet are dicks.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," wrote Peter Lee, the corporate vice president for Microsoft Research, with what one imagines was a look of pained bewilderment unique to someone who just learned that 4chan exists.

As anyone who followed the debacle will tell you, the most astonishing thing about it was not the revelation that trolls will troll -- that's a given -- but rather that Microsoft somehow didn't anticipate the very real possibility of rampant trolling.

Unfortunately for Microsoft, the apology drives this home tenfold (emphasis ours):

As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It's through increased interaction where we expected to learn more and for the AI to get better and better.

The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.
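The failure mode Microsoft describes -- a system that treats raw user interaction as signal to learn from, which a coordinated group can then flood -- is easy to illustrate in miniature. The sketch below is a hypothetical toy (the `NaiveLearningBot` class and its methods are invented for illustration, not a description of Tay's actual architecture):

```python
from collections import Counter

class NaiveLearningBot:
    """Toy chatbot that 'learns' by parroting the phrase it hears most often.
    Purely illustrative -- nothing here reflects how Tay was actually built."""

    def __init__(self):
        self.phrases = Counter()

    def hear(self, message: str) -> None:
        # Every incoming message is treated as training data, with no filtering.
        self.phrases[message] += 1

    def reply(self) -> str:
        # The bot echoes whichever phrase it has seen the most.
        if not self.phrases:
            return "Hello!"
        return self.phrases.most_common(1)[0][0]

bot = NaiveLearningBot()
bot.hear("have a nice day")       # one ordinary user
for _ in range(50):               # a coordinated group spamming one phrase
    bot.hear("something awful")
print(bot.reply())                # prints: something awful
```

Even this crude model shows why unfiltered learning from an open platform is risky: a small, coordinated subset of users can dominate the input distribution and dictate the output.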

It's unclear why no one foresaw the prospect of "this specific attack," given that the users targeting Tay were using common, garden-variety trolling tactics like virulent racism, antisemitism, misogyny, and conservative chest-thumping; it's even more bizarre that the team expected things to get better once they widened the pool of discourse.

Luckily, it can probably be blamed on a naive group of people rather than any sort of arrogance or general assholery. And to Microsoft's credit, the apology also acknowledges that AI systems need to master both positive and negative communication. For these bots to truly succeed, they need to appear genuine -- a tricky prospect considering that a lot of people are genuine shitheads.

The company does, however, seem intent on focusing on the rainbows and unicorns for now. "We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an internet that represents the best, not the worst, of humanity," he concluded.



    company has issued an apology for failing to realise that people on the internet are dicks.
    It's almost a fact of life: people are dicks, and people on the internet are even bigger dicks.

      Sorry sir but you are incorrect. See, there's three kinds of people: dicks, pussies, and assholes. Pussies think everyone can get along, and dicks just want to fuck all the time without thinking it through. But then you got your assholes, Chuck. And all the assholes want us to shit all over everything! So, pussies may get mad at dicks once in a while, because pussies get fucked by dicks. But dicks also fuck assholes, Chuck. And if they didn't fuck the assholes, you know what you'd get? You'd get your dick and your pussy all covered in shit!

    I still find it very curious that they didn't pull the plug much earlier. Did they not have anyone monitoring Tay as the tweets were coming out? Or did someone just decide, "Eh, let's see how things play out"?


    This is why Artificial Intelligence will fail... either the developer does something stupid, or the executives push for it to do something stupid... then the AI will come across as stupid (or worse).

    Most of these AIs being developed are neural nets; they generate emergent behaviour that is unpredictable. That emergent behaviour is going to create some bizarre and truly stupid things. That said, not knowing how people will react to it was dumb.
