

CAN AI LEARN GOOD MANNERS?



Bias and stereotypes must be rooted out for next-generation AI systems to become ethically acceptable

What happened when Microsoft launched ‘Tay’, a state-of-the-art artificial intelligence (AI) chatbot, designed to learn from its interactions with human beings via Twitter?


In less than 24 hours, it became a racist, misogynistic neo-Nazi. And its creators unceremoniously pulled the plug.

Shortly after Tay's shutdown, Peter Lee, Corporate Vice President of Microsoft Healthcare, blogged: “We are deeply sorry for the unintended offensive and hurtful tweets... which do not represent who we are or what we stand for”.

It may have been a light-hearted experiment gone wrong, but the incident and the apology that followed highlighted a big issue facing developers: how do we ensure that AI systems hold acceptable cultural and ethical values? Read on to find out what it will take to stop AI reflecting our worst selves.