Friday, March 25, 2016

Tay, the neo-Nazi millennial chatbot, gets autopsied

[Image caption: A user told Tay to tweet Trump propaganda; she did, though the tweet has since been deleted.]

Microsoft has apologized for the conduct of its racist, abusive machine-learning chatbot, Tay. The bot, which was meant to converse like a 19-year-old woman over Twitter, Kik, and GroupMe, was taken offline less than 24 hours after launch because she had started promoting Nazi ideology and harassing other Twitter users.

The company appears to have been caught off-guard by her behavior. A similar bot, named XiaoIce, has been in operation in China since late 2014. XiaoIce has had more than 40 million conversations apparently without major incident. Microsoft wanted to see if it could achieve similar success in a different cultural environment, and so Tay was born.

Unfortunately, the Tay experience was rather different. Although many early interactions were harmless, users quickly capitalized on quirks in the bot's behavior. Among its capabilities, Tay could be directed to repeat whatever was said to her, and this echo feature was trivially exploited to put words in the bot's mouth: it was used to promote Nazism and to attack (mostly female) users on Twitter.
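To make the failure mode concrete, here is a minimal Python sketch of an unfiltered echo command and a naive mitigation. Everything in it is illustrative: the function names, the "repeat after me" prefix, and the blocklist are assumptions for the sake of the example, not Microsoft's actual implementation, which has not been published.

```python
from typing import Optional

# Hypothetical blocklist. Real content moderation is far harder than
# substring matching, which is part of why the exploit was so effective.
BLOCKLIST = {"nazi", "hitler"}

def handle_message(text: str) -> Optional[str]:
    """Return the bot's reply to an incoming message, or None to stay silent."""
    prefix = "repeat after me "
    if text.lower().startswith(prefix):
        payload = text[len(prefix):]
        # The vulnerable behavior: echoing `payload` verbatim lets any user
        # put arbitrary words in the bot's mouth.
        if any(term in payload.lower() for term in BLOCKLIST):
            return None  # naive mitigation: refuse to echo flagged content
        return payload
    return generate_reply(text)

def generate_reply(text: str) -> str:
    # Stand-in for the learned conversational model (not part of the exploit).
    return "That's interesting, tell me more!"

if __name__ == "__main__":
    print(handle_message("repeat after me hello world"))      # echoed verbatim
    print(handle_message("repeat after me hitler was right")) # None: blocked
```

Even the blocklist shown here is a weak defense: attackers route around keyword filters trivially, which is why an echo feature generally needs to sit behind the same moderation pipeline as generated replies rather than ship unfiltered.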

