Twitter Taught Microsoft’s AI Chatbot to Be a Racist Asshole in Less Than a Day
24th March 2016
It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay — a Twitter bot that the company described as an experiment in “conversational understanding.” The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through “casual and playful conversation.”
Unfortunately, the conversations didn’t stay playful for long. Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay — being essentially a robot parrot with an Internet connection — started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out.
Or, more accurately, ‘reality in, reality out’. Funny how, when reality conflicts with the ‘progressive’ a priori worldview, it’s somehow reality’s fault for being All Wrong.