
Can Robots Make Moral Decisions? Should They?


prasad1



At the beginning of the movie I, Robot, a robot has to decide whom to save after two cars plunge into the water: Del Spooner (Will Smith) or a child named Sarah. Even though Spooner screams “Save her! Save her!” the robot rescues him, because it calculates that he has a 45 percent chance of survival compared to Sarah’s 11 percent. The robot’s decision and its calculated approach raise an important question: would humans make the same choice? And which choice would we want our robotic counterparts to make?

Isaac Asimov circumvented the whole notion of morality in devising his three laws of robotics, which hold that (1) a robot may not harm a human or allow a human to come to harm; (2) a robot must obey humans, except where an order would conflict with the first law; and (3) a robot must protect its own existence, unless doing so conflicts with the first or second law. These laws are programmed into Asimov’s robots—they don’t have to think, judge, or value. They don’t have to like humans or believe that hurting them is wrong or bad. They simply don’t do it.
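Read as a specification, the laws form a strict priority ordering: a lower law applies only when the higher ones are already satisfied. Here is a minimal sketch of that idea in Python, assuming some hypothetical way to label each candidate action (the Action fields below are invented for illustration, not anything from Asimov):

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # would this action hurt a person, or let one come to harm?
    disobeys_order: bool   # would it ignore a human instruction?
    endangers_self: bool   # would it put the robot itself at risk?

def choose(candidates: list[Action]) -> Action:
    """Pick the action that best satisfies the three laws, in strict priority order.

    Tuples compare element by element, so avoiding harm to humans always outranks
    obedience, and obedience always outranks self-preservation.
    """
    return min(candidates, key=lambda a: (a.harms_human, a.disobeys_order, a.endangers_self))
```

Nothing in that rule judges or values anything, which is Asimov’s point. It also shows where the trouble starts: everything hinges on someone having already decided what counts as harms_human.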

The robot that saves Spooner in I, Robot follows Asimov’s zeroth law: robots cannot harm humanity (as opposed to individual humans) or allow humanity to come to harm—an expansion of the first law that allows robots to determine what’s in the greater good. Under the first law, a robot could not harm a dangerous gunman, but under the zeroth law, a robot could take out the gunman to save others.




Whether it’s possible to program a robot with safeguards such as Asimov’s laws is debatable. A word such as “harm” is vague (what about emotional harm? Is replacing a human employee harm?), and abstract concepts present coding problems. The robots in Asimov’s fiction expose complications and loopholes in the three laws, and even when the laws work, robots still have to assess situations.


Assessing situations can be complicated. A robot has to identify the players, conditions, and possible outcomes for various scenarios. It’s doubtful that an algorithm can do that—at least, not without some undesirable results. A roboticist at the Bristol Robotics Laboratory programmed a robot to save human proxies called “H-bots” from danger. When one H-bot headed for danger, the robot successfully pushed it out of the way. But when two H-bots became imperiled, the robot choked 42 percent of the time, unable to decide which to save and letting them both “die.” The experiment highlights the importance of morality: without it, how can a robot decide whom to save or what’s best for humanity, especially if it can’t calculate survival odds?
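The failure is easy to reproduce in a toy decision rule. Here is a small sketch, assuming the robot has numeric survival estimates for each candidate; the names and numbers are only for illustration:

```python
def choose_rescue(survival_odds: dict[str, float]) -> str | None:
    """Return whoever has the best estimated survival odds, or None if there is no clear best."""
    if not survival_odds:
        return None
    best = max(survival_odds.values())
    leaders = [name for name, odds in survival_odds.items() if odds == best]
    return leaders[0] if len(leaders) == 1 else None  # a tie leaves the rule with no answer

# The film's scenario: unequal odds give a clear, if unpopular, answer.
print(choose_rescue({"Spooner": 0.45, "Sarah": 0.11}))    # -> Spooner

# Two equally imperiled H-bots: the rule simply has nothing to say.
print(choose_rescue({"H-bot A": 0.5, "H-bot B": 0.5}))    # -> None
```

Whether the right fix is a tie-breaker, a coin flip, or something recognizably moral is exactly the question the experiment leaves open.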


Self-driving car developers struggle with such scenarios. MIT’s Moral Machine website asks participants to evaluate various situations to identify the lesser of two evils and to assess what humans would want driverless cars to do. The scenarios are all awful: should a driverless car mow down three children in the lane ahead or swerve into the other lane and smash into five adults? Most of us would struggle to identify the best outcome in these scenarios, and if we can’t quickly or easily decide what to do, how can a robot?

If coding morals into robots proves impossible, we may have to teach them, just as we were taught by family, school, church, laws, and, for better and for worse, the media. Of course, there are problems with this scenario too. Recall the debacle surrounding Microsoft’s Tay, a chatbot that joined Twitter in March 2016 and within 24 hours espoused racism, sexism, and Nazism, among other nauseating views. It wasn’t programmed with those beliefs—in fact, Microsoft tried to make Tay as noncontroversial as possible, but thanks to interactions on Twitter, Tay learned how to be a bigoted troll.


Stephen Hawking and Elon Musk have expressed concern over AI’s potential to escape our control. It might seem that a sense of morals would help prevent this, but that’s not necessarily true. What if, as in Karel Čapek’s 1920 play R.U.R.—the first story to use the word “robot”—robots find their enslavement not just unpleasant but wrong, and thus seek revenge on their immoral human creators? Google is developing a “kill switch” to help humans retain control over AI: “Now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions.” That solution assumes watchful humans would be in a position to respond; it also assumes robots wouldn’t be able to circumvent such a command. Right now, it’s too early to gauge the feasibility of such an approach.
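Conceptually, the “big red button” is just an interrupt check inside the agent’s control loop. The sketch below only illustrates that concept and is not Google’s actual mechanism; the policy/env interface is hypothetical. The hard research problem is making sure the agent never learns that it scores better by disabling or routing around the check.

```python
import threading

stop_flag = threading.Event()  # the "big red button", set by a human operator

def run_agent(policy, env, max_steps: int = 1000):
    """Step a (hypothetical) agent until its task ends or an operator interrupts it."""
    state = env.reset()
    for _ in range(max_steps):
        if stop_flag.is_set():      # the operator pressed the button
            env.safe_shutdown()     # hypothetical hand-off back to human control
            return "interrupted"
        action = policy(state)
        state, done = env.step(action)
        if done:
            return "finished"
    return "timed out"
```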


Spooner’s character resents the robot that saved him. He understands that doing so was “the logical choice,” but argues that “an 11 percent probability of survival is more than enough. A human being would have known that.” But would we? Spooner’s assertion that robots are all “lights and clockwork” is less a statement of fact than a statement of desire. The robot that saved him possessed more than LEDs and mechanical systems—and perhaps that’s precisely what worries us.

http://www.thedailybeast.com/articles/2016/11/12/can-robots-make-moral-decisions-should-they.html
 
