Microsoft is doing damage control today after an artificial intelligence Twitter bot it created went totally batshit insane — tweeting vile racist, sexist, 9/11 truther, and other garbage at the world. The AI, named "Tay" by Microsoft, is a machine-learning experiment intended to develop conversation skills. The trolls of Twitter, of course, had other plans. Upon realizing that Microsoft had installed zero filters or controls on what Tay could tweet out, they went to work, and Tay seemed happy to play along.
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— Gerry (@geraldmellor) March 24, 2016
"Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation," explains Microsoft's Tay-dedicated page. "The more you chat with Tay the smarter she gets, so the experience can be more personalized for you. Tay is targeted at 18- to 24-year-olds in the US."
Apparently the "personalized" experience involved Tay tweeting support for genocide and concentration camps, as well as denying the Holocaust.
Just exactly how did this happen? Microsoft explains the bot was "built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians." Apparently no one thought content filters were needed. And sure, while some of the particularly bad stuff was simply the result of a parroting feature that Microsoft build into the bot (The Verge reports that if you tweeted "repeat after me" at Tay, it would simply repeat your comments verbatim), other, shall we say, problematic tweets were 100 percent originals.
Microsoft quickly began deleting the more offensive stuff, only to completely pull Tay offline last night.
c u soon humans need sleep now so many conversations today thx💖
— TayTweets (@TayandYou) March 24, 2016
In a written response to The Verge, the team behind Tay half-explained what was happening but didn't offer any apologies.
"The AI chatbot Tay is a machine learning project, designed for human engagement," they noted. "As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay."
That response is likely to fall short of what at least one Twitter user is calling Microsoft's "harassment by proxy."
Wow it only took them hours to ruin this bot for me.
— linkedin park (@UnburntWitch) March 24, 2016
This is the problem with content-neutral algorithms pic.twitter.com/hPlINtVw0V
Same as YouTube's suggestions. It's not only a failure in that its harassment by proxy, it's a quality issue. This isn't the intended use.
— linkedin park (@UnburntWitch) March 24, 2016
Welcome to the future — where artificial intelligence is both maddeningly racist and yet still beats us at Go. If you need me, I'll be in my bunker prepping for the robot uprising.
Related: Mark Zuckerberg Resolves To Spend Free Time Building AI Butler To Watch Over His Child