Microsoft’s AI & social media fail

April 18, 2016 | General

The folks at Microsoft were left red-faced recently after an attempt to engage millennials with artificial intelligence backfired in spectacular fashion just hours after launch.

The tech company designed the chatbot, which it named “Tay,” as an experiment to see how AI programs can get “smarter” by engaging with internet users in casual conversation.

Tay’s target peer group was “young millennials” between the ages of 18 and 24, and it was programmed to respond to their queries, emulating the language and speech patterns of a stereotypical millennial.

Unfortunately, the experiment was hijacked by online trolls, racists and hackers, who turned Tay from a lovable, sweet character into a racist bigot, goading her into spewing a deluge of racist, anti-Semitic and hateful messages in response to questions on Twitter.

Response

The incident was particularly embarrassing because it happened to one of the most powerful companies in the world.

With a little foresight, Microsoft could have taken precautionary steps to stop Tay behaving the way she did, from creating a blacklist of offensive terms to simply moderating her responses manually.
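As a rough illustration of the blacklist idea, here is a minimal sketch in Python. The function names and terms are hypothetical placeholders, not a reconstruction of anything Microsoft actually built.

```python
# Minimal sketch of a pre-publication blacklist filter for a chatbot.
# BLACKLIST and both functions are hypothetical; a real term list would
# be far larger and maintained by human moderators.

BLACKLIST = {"badword", "slur", "bigot"}  # placeholder terms only

def is_safe(reply: str) -> bool:
    """Return True only if the reply contains no blacklisted term."""
    words = set(reply.lower().split())
    return not (words & BLACKLIST)

def handle_reply(reply: str) -> str:
    """Publish a safe reply; hold anything else for human moderation."""
    if is_safe(reply):
        return "PUBLISH: " + reply
    return "HELD FOR HUMAN REVIEW: " + reply

print(handle_reply("hello twitter, humans are cool"))
print(handle_reply("you are a badword"))
```

Even a crude filter like this would miss spelling tricks and coded language, but combined with rate limits and human review it could have blunted the worst of the abuse.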

Microsoft was forced onto the back foot, issuing a series of apologies that escalated from statements to news websites to a full-blown blog post by Peter Lee, corporate vice president of Microsoft Research.

Needless to say, the company also took Tay offline, and is working on an upgraded model.

Opinion

Tay issued thousands of tweets in the time she was active, roughly 4,000 an hour. Many of them were silly and cute, but the attention came from the offensive material, and unfortunately that is what @TayandYou will be remembered for.

The idea behind Tay was good. Teaching an artificial intelligence is hard, but exposing it to a collective consciousness, like social media, can help it learn new things. It’s also important to note that Tay’s racism is not a product of Microsoft or of Tay itself.

Tay is simply a piece of software trying to learn how humans talk in conversation. Tay doesn’t even know it exists, or what racism is. It spouted garbage because racist humans on Twitter quickly spotted a vulnerability, namely that Tay didn’t understand what it was talking about, and exploited it.
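Much of the abuse reportedly ran through a “repeat after me” feature, where Tay would echo whatever a user asked her to say. The hypothetical sketch below illustrates why that kind of unfiltered echo is dangerous; it shows the flaw in principle and is not Tay’s actual code.

```python
# Hypothetical sketch of an unfiltered "echo" behaviour of the kind
# trolls exploited; an illustration of the flaw, not Tay's real code.

def naive_handler(message: str) -> str:
    """Blindly repeat attacker-supplied text with no understanding of it."""
    prefix = "repeat after me "
    if message.lower().startswith(prefix):
        return message[len(prefix):]  # echoed verbatim, no checks at all
    return "Interesting! Tell me more."

print(naive_handler("repeat after me <any hateful text>"))
# -> "<any hateful text>", published under the bot's own name
```

Because the bot attaches its own identity to text it never evaluates, every troll effectively gets to tweet from the official account.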

Nonetheless, the episode was hugely embarrassing for the company.

Hotcow is a non-traditional creative agency that specialises in experiential marketing that goes viral. Our campaigns generate buzz through crowd participation, PR and content sharing. Contact us on 0207 5030442 or email us at info@hotcow.co.uk.