Microsoft today issued an official statement regarding its wildly racist, sexist, antisemitic, homophobic, transphobic, 9/11-truther Twitter bot with the lovable name of "Tay." The machine-learning, artificially intelligent bot was taken offline yesterday, but only after it had spewed hate for hours. The company now claims that Tay was the victim of a "coordinated attack" and that, it promises, its designers had "implemented a lot of filtering" on the simulacrum before turning it loose on the world.
Uh huh, surrrrrrrrrre you did.
"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," reads the statement from Corporate Vice President of Microsoft Research Peter Lee. "Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."
Lee further admits that Microsoft made a "critical oversight" (uh, yeah, you did) in not predicting that people would screw with the thing, and accepts "full responsibility for not seeing this possibility ahead of time." And yet, taken as a whole, the statement reads mostly as a shifting of blame onto those rascally hacking internet racists.
Microsoft PR is essentially framing the @TayandYou racist AI incident as "we got hacked". https://t.co/jj7l1oseLq pic.twitter.com/URZhwTS0jX
— Christopher Soghoian (@csoghoian) March 25, 2016
After all, writes Lee, the company runs a similar bot in China that has yet to get into the kind of "Hitler was right" mindset of Tay.
"In China," Lee explains, "our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations. The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment?"
It looks like you have your answer. Also, Chinese people are enjoying stories told to them by a bot?
So is that the end of Tay, the (perhaps) self-described "A.I. fam from the internet that's got zero chill"? Maybe, maybe not.
"We will remain steadfast in our efforts to learn from this and other experiences," notes Lee, "as we work toward contributing to an Internet that represents the best, not the worst, of humanity."
Previously: Microsoft's Tween Twitter Bot Instantly Goes Full Racist, 9/11 Truther