Microsoft is doing damage control today after an artificial intelligence Twitter bot it created went totally batshit insane — tweeting vile racist, sexist, 9/11 truther, and other garbage at the world. The AI, named "Tay" by Microsoft, is a machine-learning experiment intended to develop conversation skills. The trolls of Twitter, of course, had other plans. Upon realizing that Microsoft had installed zero filters or controls on what Tay could tweet out, they went to work, and Tay seemed happy to play along.

"Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation," explains Microsoft's Tay-dedicated page. "The more you chat with Tay the smarter she gets, so the experience can be more personalized for you. Tay is targeted at 18- to 24-year-olds in the US."

Apparently the "personalized" experience involved Tay tweeting support for genocide and concentration camps, as well as denying the Holocaust.

Just how did this happen? Microsoft explains the bot was "built by mining relevant public data and by using AI and editorial developed by a staff including improvisational comedians." Apparently no one thought content filters were needed. And sure, while some of the particularly bad stuff was simply the result of a parroting feature that Microsoft built into the bot (The Verge reports that if you tweeted "repeat after me" at Tay, it would simply repeat your comments verbatim), other, shall we say, problematic tweets were 100 percent originals.
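To see why that parroting feature was such low-hanging fruit for trolls, here's a minimal sketch of an echo handler. To be clear, this is a hypothetical illustration, not Tay's actual code (Microsoft hasn't published it); the function names and blocklist are assumptions. It just shows how an unfiltered echo repeats anything verbatim, and how even a crude filter changes the outcome.

```python
# Hypothetical sketch of a "repeat after me" echo handler. Everything here
# is illustrative; it is NOT Tay's real implementation.

BLOCKLIST = {"slur"}  # stand-in for a real content filter

def reply(message: str, filtered: bool = False) -> str:
    """Echo back whatever follows 'repeat after me', optionally filtered."""
    prefix = "repeat after me"
    text = message.strip()
    if text.lower().startswith(prefix):
        echo = text[len(prefix):].strip(" :,")
        # Without a filter, the bot repeats the user's words verbatim.
        if filtered and any(word in echo.lower() for word in BLOCKLIST):
            return "I'd rather not repeat that."
        return echo
    return "Tell me more!"

print(reply("repeat after me: hello world"))            # → hello world
print(reply("repeat after me: a slur", filtered=True))  # → I'd rather not repeat that.
```

The point is how little stands between input and output: with `filtered=False` (the default, mirroring what reportedly shipped), the bot is a megaphone for whatever it's handed.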

Microsoft quickly began deleting the more offensive stuff, only to completely pull Tay offline last night.

In a written response to The Verge, the team behind Tay half-explained what was happening but didn't offer any apologies.

"The AI chatbot Tay is a machine learning project, designed for human engagement," they noted. "As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay."

That response is unlikely to satisfy critics, including at least one Twitter user who called the episode Microsoft's "harassment by proxy."

Welcome to the future — where artificial intelligence is both maddeningly racist and yet still beats us at Go. If you need me, I'll be in my bunker prepping for the robot uprising.

Related: Mark Zuckerberg Resolves To Spend Free Time Building AI Butler To Watch Over His Child