Microsoft pulls its AI chatbot after Twitter users taught it to be racist
Yesterday, we reported on Microsoft’s new AI chatbot, Tay. The chatbot learns as it interacts with people on the internet. As expected, some Twitter users taught it to be…racist:
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— gerry (@geraldmellor) March 24, 2016
Tay isn’t replying to Direct Messages, either. When you send her a DM, she simply replies:
“Brb getting my upgrades fancy at the lab today so ttyl!”
Apparently, Tay is currently sleeping:
https://twitter.com/TayandYou/status/712856578567839745
It is worth noting that Microsoft is deleting some of Tay’s racist tweets. The company is possibly working on improving Tay, and hopefully, it will be back sometime soon.
When Microsoft realized what the Internet was teaching @TayandYou pic.twitter.com/tDSwSqAnbl
— SwiftOnSecurity (@SwiftOnSecurity) March 24, 2016
We have reached out to Microsoft for more information, and we will update the story if and when we hear back.