Microsoft's intelligent chatbot Tay has quickly learned from users how to spread offensive content

Microsoft's AI chatbot causes scandal with racist rants

Microsoft’s artificial intelligence experiment, the Tay chatbot, had to be aborted less than a day after its launch when the tweeting bot embarked on a series of racist outbursts.

Mimicking the language patterns of a 19-year-old American girl, the bot was designed to interact with human users on Twitter and learn from that interaction. However, the experiment didn’t go as planned as users started feeding the programme anti-Semitic, sexist and other offensive content, which the bot happily absorbed.

Microsoft shut down Tay’s Twitter account on Thursday night and apologised for the tirade.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Peter Lee, Microsoft's vice president of research, wrote in a blogpost.

Tay’s offences included answering another Twitter user’s question as to whether the Holocaust happened with ‘It was made up’, to which the bot added a handclap icon. The bot also retweeted another user’s message stating that feminism is cancer.

Following the setback, Microsoft said it would revive Tay only if its engineers could find a way to prevent web users from influencing the chatbot in ways that undermine the company’s principles and values.

"Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack," Lee wrote. "As a result, Tay tweeted wildly inappropriate and reprehensible words and images."

Tay is the second experiment of this kind conducted by Microsoft. An earlier chatbot called XiaoIce, launched in China in 2014, has since attracted a following of 40 million users. It was the positive experience with XiaoIce that prompted Microsoft to create Tay, to see whether the technology would also work for American teenagers.

However, Microsoft insisted the embarrassment would not put the firm off exploring AI further for entertainment purposes.

"We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity," Lee wrote.
