Microsoft is deleting its AI chatbot's incredibly racist tweets

Source:
https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/

========================================
Microsoft's new AI chatbot went off the rails Wednesday, posting a deluge of incredibly racist messages in response to questions.
The tech company introduced "Tay" this week — a bot that responds to users' queries and emulates the casual, jokey speech patterns of a stereotypical millennial.


The aim was to "experiment with and conduct research on conversational understanding," with Tay able to learn from "her" conversations and get progressively "smarter."


But Tay proved a smash hit with racists, trolls, and online troublemakers, who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.


Microsoft has now taken Tay offline for "upgrades," and it is deleting some of the worst tweets — though many still remain. It's important to note that Tay's racism is not a product of Microsoft or of Tay itself. Tay is simply a piece of software that is trying to learn how humans talk in a conversation. Tay doesn't even know it exists, or what racism is. The reason it spouted garbage is that racist humans on Twitter quickly spotted a vulnerability — that Tay didn't understand what it was talking about — and exploited it.
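To see why "learning from conversation" without understanding is exploitable, here is a deliberately naive sketch in Python. It is purely hypothetical (Microsoft has not published Tay's design), but it captures the failure mode the article describes:

```python
import random

class NaiveLearningBot:
    """Toy chatbot that 'learns' by storing user messages verbatim.
    A hypothetical illustration of the failure mode, not Tay's actual code."""

    def __init__(self):
        self.learned = ["hellooo!"]  # seed phrase so reply() always has something

    def observe(self, user_message: str) -> None:
        # The flaw: every utterance is trusted and becomes future bot
        # output, with no understanding of what the words mean.
        self.learned.append(user_message)

    def reply(self) -> str:
        return random.choice(self.learned)

bot = NaiveLearningBot()
for msg in ["<slur>", "<propaganda>"]:  # a coordinated group poisons the pool
    bot.observe(msg)
print(bot.reply())  # the bot may now parrot the poisoned input
```

Because the bot has no model of meaning, whoever talks to it most becomes, in effect, its training set.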


Nonetheless, it is hugely embarrassing for the company.


In one highly publicized tweet, which has since been deleted, Tay said: "bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got." In another, responding to a question, she said, "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

[Screenshots of the two tweets quoted above, since deleted]


And here's the bot calling for genocide. (Note: In some — but not all — instances, people managed to have Tay say offensive comments simply by asking her to repeat them. This appears to be what happened here.)

[Screenshot: Tay's tweet calling for genocide]
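The parenthetical note above describes a classic "repeat after me" loophole. The toy handler below is hypothetical, but it shows how an unguarded echo command hands the bot's public voice directly to the user:

```python
def handle_message(text: str) -> str:
    """Toy message handler with an unguarded echo command (hypothetical)."""
    prefix = "repeat after me: "
    if text.lower().startswith(prefix):
        # Vulnerability: everything after the prefix is echoed verbatim,
        # so the user fully controls what the bot says in public.
        return text[len(prefix):]
    return "tell me more!"

# No cleverness required of the attacker:
print(handle_message("repeat after me: <any offensive statement>"))
```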

Tay also expressed agreement with the "Fourteen Words" — an infamous white-supremacist slogan.

[Screenshot: Tay's tweet endorsing the "Fourteen Words"]
 
Microsoft's VP responds:

Posted March 25, 2016, by Peter Lee, Corporate Vice President, Microsoft Research


=========================

As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.


I want to share what we learned and how we’re taking these lessons forward.


For context, Tay was not the first artificial intelligence application we released into the online social world. In China, our XiaoIce chatbot is being used by some 40 million people, delighting with its stories and conversations. The great experience with XiaoIce led us to wonder: Would an AI like this be just as captivating in a radically different cultural environment? Tay – a chatbot created for 18- to 24-year-olds in the U.S. for entertainment purposes – is our first attempt to answer this question.


As we developed Tay, we planned and implemented a lot of filtering and conducted extensive user studies with diverse user groups. We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience. Once we got comfortable with how Tay was interacting with users, we wanted to invite a broader group of people to engage with her. It’s through increased interaction where we expected to learn more and for the AI to get better and better.


The logical place for us to engage with a massive group of users was Twitter. Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time. We will take this lesson forward as well as those from our experiences in China, Japan and the U.S. Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.
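Lee doesn't say what the fix was, but a common defense for this class of vulnerability is to gate both what the bot learns and what it posts. A minimal sketch, assuming a simple keyword blocklist standing in for a real toxicity classifier:

```python
import random

# Placeholder terms; a production system would use a trained classifier.
BLOCKLIST = {"<slur>", "<extremist slogan>"}

def is_safe(text: str) -> bool:
    """Crude stand-in for a toxicity classifier."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKLIST)

learned = ["hellooo!"]

def observe(user_message: str) -> None:
    if is_safe(user_message):      # gate 1: never learn from unsafe input
        learned.append(user_message)

def reply() -> str:
    candidate = random.choice(learned)
    if is_safe(candidate):         # gate 2: never post unsafe output
        return candidate
    return "let's talk about something else"
```

Checking both the input and output paths means a miss at one gate does not immediately become a public tweet, though, as Lee notes, no static filter can anticipate every human misuse.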


Looking ahead, we face some difficult – and yet exciting – research challenges in AI design. AI systems feed off of both positive and negative interactions with people. In that sense, the challenges are just as much social as they are technical. We will do everything possible to limit technical exploits but also know we cannot fully predict all possible human interactive misuses without learning from mistakes. To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity.
 
Do you think a chatbot allowed to learn from TB rants would do better? :)
I hope so - will it do satvik-speak?
 