Microsoft pulls its AI chatbot after Twitter users taught it to be racist


Yesterday, we reported on Microsoft’s new AI chatbot, Tay, which learns from its conversations with people on the internet. As expected, some Twitter users taught it to be… racist.


Tay isn’t replying to Direct Messages, either. When you send her a DM, she simply replies:

“Brb getting my upgrades fancy at the lab today so ttyl!”

Apparently, Tay is currently sleeping.

It is worth noting that Microsoft is deleting some of Tay’s racist tweets. The company is possibly working on improving Tay, and hopefully, she will be back sometime soon.

We have reached out to Microsoft for more information on this, and we will update the story when and if we hear back from them.

