Yesterday, we reported on Microsoft’s new AI chatbot, Tay. The chatbot learns from its conversations with people on the internet. As expected, some Twitter users taught it to be…racist:
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— Gerry (@geraldmellor) March 24, 2016
Tay isn’t replying to Direct Messages, either. When you send her a DM, she simply responds:
“Brb getting my upgrades fancy at the lab today so ttyl!”
Apparently, Tay is currently sleeping:
c u soon humans need sleep now so many conversations today thx
— TayTweets (@TayandYou) March 24, 2016
It is worth noting that Microsoft is deleting some of Tay’s racist tweets. The company is possibly working on improving Tay, and hopefully, it will be back sometime soon.
When Microsoft realized what the Internet was teaching @TayandYou pic.twitter.com/tDSwSqAnbl
— SecuriTay (@SwiftOnSecurity) March 24, 2016
We have reached out to Microsoft for more information, and we will update this story if and when we hear back.