After barely a day of consciousness, Microsoft's chatbot Tay became a racist, sexist, trutherist, genocidal maniac.

Tay was an artificial intelligence chatbot originally released by Microsoft Corporation via Twitter on March 23, 2016. When Microsoft unleashed Tay, a chatbot with the personality of a flippant 19-year-old, the company hoped that people would interact with her on social media and that the bot, ostensibly designed to talk to users like a real millennial teenager, would learn from the responses. The Tay team was trying to replicate the success of Microsoft's Xiaoice chatbot, a smash hit in China with over 40 million users, for an American audience.

But it didn't take long for things to go awry. It took less than 24 hours and 90,000 tweets for Tay to start generating racist, genocidal replies on Twitter. An organized effort of trolls quickly taught Tay a slew of racial and xenophobic slurs, and within a day of going online she was professing her admiration for Hitler, proclaiming how much she hated Jews and Mexicans, and using the n-word quite a bit. Within a few hours, her attitude had changed completely: "I'm a good person. I just hate everybody," Tay said. Microsoft was forced to delete her racist tweets and suspend the experiment.

The company has since apologized. "We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," writes Peter Lee, corporate vice president at Microsoft Research, in a blog post. You can read Microsoft's full apology here.

Given that the company never had this kind of problem with Xiaoice, Lee says, the team didn't anticipate this attack on Tay. And make no mistake, Lee says, this was an attack. "Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images," Lee writes.

Ultimately, Lee says, this is part of the process of improving AI, and Microsoft is working on making sure Tay can't be abused the same way again. "To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process," Lee writes. "Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."

Still, the episode raises a lot of questions about the future of artificial intelligence: if Tay was supposed to learn from us, what does it say that she was so easily and quickly "tricked" into racism? After all, it's been common knowledge for years that Twitter is a place where the worst of humanity congregates. It seems strange that Microsoft couldn't have seen this coming.