LEARN GOOD MANNERS?
Bias and stereotypes must be rooted out for next-generation AI systems to become ethically acceptable
What happened when Microsoft launched ‘Tay’, a state-of-the-art artificial intelligence (AI) chatbot, designed to learn from its interactions with human beings via Twitter?
In less than 24 hours, it had become a racist, misogynistic neo-Nazi, and its creators unceremoniously pulled the plug.
Shortly after Tay’s shutdown, Peter Lee, Corporate Vice President of Microsoft Healthcare, blogged: “We are deeply sorry for the unintended offensive and hurtful tweets... which do not represent who we are or what we stand for”.