The racist bot: a Microsoft story.


Maybe you've heard this story. Microsoft introduced a bot that would talk to people on the internet (on Twitter) and evolve from its interactions with them, and very quickly discovered that it was the owner of a racist, woman-hating bot.


You can read one of the many pieces of coverage this story got here: http://www.thepoke.co.uk/2016/03/24/microsoft-accidentally-makes-racist-bot-delete-load-tweets/


What happened here is pretty clear. Just like with the story of "Boaty McBoatface" (Google it if you have no idea what I'm talking about), the people of planet internet have proven themselves to be a bunch of idiots, once again. Just like the stupid joke of teaching a tourist who doesn't speak the local language to curse, pretending that it's all polite words, people apparently thought it would be a good idea to show their true nature to a bot designed to learn from them.


Now let's pay close attention to Microsoft's response to this mayhem.


“The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”


What happened here is a clear case of "caving in to the masses". Everybody is super pissed off, so let's apologize, quickly!


But when you think about it, what does Microsoft have to apologize for? They weren't the ones who taught this bot to worship Hitler and hate Mexicans. They weren't the ones who taught it words like "whore". So what are they apologizing for?


The truth is, they are apologizing for upsetting the beast. The beast called the internet.


Now, please take a minute before you move on to read my opinion, and think of the following: if you were the person within Microsoft in charge of writing this statement, what would you have said? Is there an alternative story that could be told here, or even better, one worth telling?


In my opinion, there is. This bot has shown us all, without the censorship that educated people might have wanted to force on it, exactly what humans sound like in 2016. How violent we have become. How full of hate. How shallow, and how disrespectful of anything that isn't us.


Is this a story worth telling? "Hey guys, we understand your shock. We didn't see it coming either. We thought the commenters on YouTube, with their racist, violent responses, weren't the majority. Maybe we were naive, but we wanted this bot to be an instrument of good. As it turns out, learning from humans made it an instrument of evil. So we will re-calibrate it to teach us, instead. To remind us what it's like to be human, for real."


I don't work for Microsoft, and I didn't even try to make the above sound official. But isn't this a much more interesting story? And more than that, a story where Microsoft takes the lead on something that is bothering more and more of us every day? Why should they cave in? Because the same violent people this bot learned from have now turned against them?


The lesson here is simple. Crises will happen. Your story, your good intentions, your vision of how the public will react to your new "thing" - they can all fall apart in seconds. And when they do, don't rush to run and hide, and don't apologize just because someone (or a lot of people) said that you're to blame. Think about the situation. Think about what you did wrong, if anything. Be honest with yourself first. And after that - find the story that portrays your take on what happened (a story is not a lie! The truth always offers more than one version. Always). Tell that story with conviction. And if possible, use it to open a discussion that is bigger than you or your fuck-up. It may mean that the story will last longer - but the audience will appreciate you for it. And a reputation like that doesn't fade away easily.



COPYRIGHT GIRIMAN 2015