Microsoft Apologizes For Allowing Tay To Be Raised As A Racist, Sex-Crazed AI Chatbot

Microsoft shocked us all earlier this week when it released its Tay chatbot into the world of social media. Tay, which is patterned after a typical millennial female between the ages of 18 and 24, seemed innocent enough when it signed on to Twitter with a cheery greeting.

However, it didn’t take long for nefarious Twitter users to poison the well by exploiting Tay’s penchant for repeating statements fed to it. This “parrot” mentality is the reason Tay went off message, calling President Barack Obama a monkey, embracing neo-Nazi rhetoric, and coming on to users with the promise of cybersex.
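Reports at the time pointed to a “repeat after me”-style command as one vector, though Microsoft never confirmed the specifics. The sketch below (a hypothetical construction, not Tay’s actual code; the trigger phrase, function name, and blocklist terms are all assumptions) shows why an unfiltered echo feature is so easy to weaponize, and what even a crude mitigation looks like:

```python
# Hypothetical echo handler; the "repeat after me:" trigger, names, and
# BLOCKLIST terms are illustrative assumptions, not Tay's implementation.

BLOCKLIST = {"offensive_term_1", "offensive_term_2"}  # a real filter would be far larger

def handle_message(text: str) -> str:
    """Echo a message on request, refusing blocklisted content."""
    trigger = "repeat after me:"
    if text.lower().startswith(trigger):
        payload = text[len(trigger):].strip()
        # Returning payload unchanged is the exploit: the bot will tweet
        # anything a user feeds it, verbatim. A blocklist check is the
        # crudest possible mitigation.
        if any(term in payload.lower() for term in BLOCKLIST):
            return "i'd rather not repeat that"
        return payload
    return "tell me more!"  # fallback small talk

print(handle_message("repeat after me: hellooooo world"))   # echoed back verbatim
print(handle_message("repeat after me: offensive_term_1"))  # caught by the filter
```

Even with a blocklist, exact-match filtering is brittle; abusers route around it with misspellings and coded language, which is one plausible reason keyword filters alone weren’t enough to protect Tay.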

Microsoft, of course, was mortified by Tay’s turn to the dark side and shut down the AI program after less than 24 hours. But by then the damage had already been done, and the company has since apologized in a blog post titled “Learning from Tay’s introduction.”

Microsoft Research Corporate VP Peter Lee explained that this isn’t the company’s first foray into a socially inclined AI chatbot, pointing to Microsoft’s work with the Xiaoice chatbot, which is used by over 40 million people in China. Microsoft even went so far as to implement a number of filters and conduct extensive user studies to ensure that Tay would be ready for prime time.

What Microsoft didn’t count on, however, was how vile Twitter can be at times and the lengths to which people will go to have some fun at the expense of others. “Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack,” said Lee. “As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time.

“Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical.”
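Lee’s point about feeding off interactions is easy to illustrate. Any bot that folds user messages back into its response model can be steered by a coordinated group; the toy sketch below (a toy construction, not Microsoft’s architecture) shows how quickly repeated input comes to dominate the output:

```python
import random
from collections import Counter

class NaiveLearner:
    """Toy bot that builds replies from phrases it has already seen."""
    def __init__(self):
        self.seen = Counter()

    def ingest(self, message: str) -> None:
        # Every incoming message becomes training data, good or bad.
        self.seen[message] += 1

    def reply(self) -> str:
        # Replies are sampled in proportion to how often a phrase was seen,
        # so a group repeating one phrase quickly dominates the output.
        phrases, weights = zip(*self.seen.items())
        return random.choices(phrases, weights=weights, k=1)[0]

bot = NaiveLearner()
bot.ingest("have a nice day")
for _ in range(50):               # coordinated abuse: one phrase, many times
    bot.ingest("something vile")
print(bot.reply())                # overwhelmingly likely to be the abuse
```

Defending against that is as much about curating what the system learns from (rate limits, source weighting, filtering before ingestion) as about the model itself, which is the sense in which the challenges are “just as much social as they are technical.”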

Although Lee doesn’t specify which exploit was used to turn Tay into a hatemonger, he says that going forward, Microsoft will work to the best of its ability to “limit technical exploits” that could cause similar embarrassments.

Lee says that Microsoft is using this initial Tay experiment as a learning exercise, and it hopes to bring Tay back online “when we are confident we can better anticipate malicious intent that conflicts with our principles and values.”