Here's how Tay went wrong, and what Microsoft is doing about it


Microsoft recently introduced an AI chatbot called Tay. The chatbot was actually quite capable, but it posted a series of offensive and racist tweets after a “coordinated” attack. Today, Peter Lee, Corporate Vice President of Microsoft Research, apologized for Tay’s tweets. He also explained how Tay went wrong, and what the company plans to do about it.

First, Peter described the extensive research that went into developing Tay:

As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It’s through increased interaction where we expected to learn more and for the AI to get better and better.

Second, he explained how Tay was taught to tweet the offensive comments. As you may know, Tay learns from the users it interacts with, and some users exploited this to turn Tay into a racist. Peter stated:

The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

Lastly, Peter shared how Microsoft plans to improve Tay. The company says it needs to improve its AI design and do “everything possible” to limit exploits like this one:

Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.

So when is Tay coming back? According to Peter Lee, Microsoft will bring Tay back online only when the company is confident the chatbot can withstand these kinds of technical exploits.

