Save the driver or save the crowd? Scientists wonder how driverless cars will ‘choose’

Researchers who study human morality, and its intersection with human psychology, have long noted that we are frustratingly inconsistent beings.
For instance, past research has suggested that people aren’t always consistent “utilitarians,” willing to promote the greatest good for the greatest number. Rather, research exploring multiple variants of the famous trolley dilemma — in which a speeding train is heading toward a large number of people, but stopping it would require sacrificing one person — finds that our utilitarianism tends to be very situational.
Now, new research suggests this matters not only to philosophical debates about ethics, but also to a key modern technological advance: autonomous vehicles, which Google and others are already testing.
These vehicles are widely expected to become vastly more prominent in transportation systems going forward, not only as personal vehicles, but also as taxis or even mass transit systems, in significant part because they will be safer.
But how will they deal with tough “moral” situations, which are likely to arise in rare but nonetheless controversial and high-profile cases?
In two new articles in the journal Science Thursday, researchers explore this question — and they don’t find any easy answers.
“Experts say that 90 percent of accidents are avoidable by technology that basically will eliminate human error,” said Iyad Rahwan, a professor at MIT’s Media Lab who conducted one of the studies with Jean-François Bonnefon of the University of Toulouse Capitole in France and Azim Shariff of the University of Oregon.
“The other 10 percent are caused by less controllable things, like maybe bad weather conditions, or mechanical failures, or just kind of random freak accidents, that not even a very sophisticated computer can avoid,” he continued. “And it’s those minority of accidents that may lead to tradeoffs.”

Rahwan and his coauthors suggest that a proliferation of self-driving cars could do anything from ameliorating traffic problems to saving vast amounts of energy, and the vehicles are also forecast to be much safer overall. Thus, they are expected to save lives and reduce the 1.25 million annual road deaths.
Nonetheless, these vehicles will occasionally have to “make difficult ethical decisions in cases that involve unavoidable harm,” they write. How they resolve those decisions, in turn, will depend on their programming — whose nature, these researchers believe, is likely to become a matter of significant public debate as the vehicles themselves become more common.
For instance, Rahwan explains that after an accident involving a driverless car, it will likely be possible to reconstruct what information the vehicle had and how it “chose” to do whatever led to the crash.
“So people are likely to demand to see those records in the case of an accident,” he said, “and once they do, they will scrutinize those choices.”
To begin grappling with such situations, Rahwan and his colleagues conducted a series of Mechanical Turk surveys to study how people feel about moral dilemmas involving self-driving vehicles. Overall, they found that people were generally utilitarian in outlook, believing that autonomous vehicles should be programmed so that, in a case where the car must sacrifice the driver’s life to save multiple lives (by running into a wall, say, rather than into a large crowd), the larger number of lives is saved.
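To make that contrast concrete, here is a minimal sketch, in Python, of the two policies the respondents were asked to compare. It is purely illustrative and not drawn from the study or from any manufacturer’s software; the maneuver names and casualty numbers are invented for the wall-versus-crowd example above.

[CODE]
# Toy illustration of the two programming policies contrasted in the surveys.
# Nothing here comes from a real vehicle; all names and numbers are invented.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    occupant_deaths: int   # expected deaths inside the vehicle
    bystander_deaths: int  # expected deaths outside the vehicle


def choose_utilitarian(options):
    """Pick the maneuver with the fewest total expected deaths."""
    return min(options, key=lambda m: m.occupant_deaths + m.bystander_deaths)


def choose_self_protective(options):
    """Pick the maneuver that best protects the occupant, ignoring everyone else."""
    return min(options, key=lambda m: m.occupant_deaths)


if __name__ == "__main__":
    # The wall-versus-crowd scenario described in the article.
    options = [
        Maneuver("swerve into wall", occupant_deaths=1, bystander_deaths=0),
        Maneuver("continue into crowd", occupant_deaths=0, bystander_deaths=10),
    ]
    print("utilitarian choice:    ", choose_utilitarian(options).name)      # swerve into wall
    print("self-protective choice:", choose_self_protective(options).name)  # continue into crowd
[/CODE]

In effect, the respondents endorsed the first rule in the abstract but, as the next results show, preferred to ride in a car running the second.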
But we’re not always such good utilitarians. Indeed, the surveys found “the first hint of a social dilemma” when respondents were then asked how they felt about buying such a car, knowing that it had such programming, as opposed to buying a car whose programming instructs it to always save the driver’s life (even if that would lead to more deaths overall in an accident).
“Even though participants still agreed that utilitarian [autonomous vehicles] were the most moral, they preferred the self-protective model for themselves,” the researchers report.
Meanwhile, yet another survey conducted for the study found that people were particularly uncomfortable with the idea of the government mandating or legislating that autonomous vehicles make utilitarian “choices” in key instances — even though the prior surveys had shown that people generally approve of these utilitarian choices in the abstract.
Strikingly, in one survey question, 59 percent of respondents said they were likely to buy an autonomous vehicle if there were no government regulation of its moral “choices,” but just 21 percent were likely to buy the vehicle if there were such regulation.
The authors therefore worry that actually mandating that these vehicles contain utilitarian algorithms could block their widespread adoption and public acceptance. And this widespread adoption, they think, would still save many lives, notwithstanding what happens in a few relatively rare trolley-dilemma scenarios.
Granted, it is far from clear whether the engineers who design self-driving cars will actually give them any explicit instructions for dilemmas like these; rather, the vehicles’ situational “choices” might emerge from the interplay of different aspects of their complex programming, said Rahwan.
“Whether or not a programmer explicitly programs cars to do something, they will do something, and it will be implicit in the algorithm,” he said. “If we don’t have a discussion on this, then that assumption will be completely arbitrary.”
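A sketch of that point, under the assumption (mine, not the researchers’) that the planner simply minimizes a weighted cost: no explicit “save the driver” or “save the crowd” rule ever appears, yet the relative weights quietly decide the outcome.

[CODE]
# Illustrative only: the ethics are hidden in two numeric weights rather than
# in any explicit rule. The weights and risk values are invented assumptions.

OCCUPANT_RISK_WEIGHT = 1.0   # implicit value placed on harm to the occupant
BYSTANDER_RISK_WEIGHT = 1.0  # implicit value placed on harm to bystanders


def trajectory_cost(occupant_risk, bystander_risk):
    """Generic planning cost; the moral trade-off lives in the two weights."""
    return (OCCUPANT_RISK_WEIGHT * occupant_risk
            + BYSTANDER_RISK_WEIGHT * bystander_risk)


def pick_trajectory(candidates):
    """Return the candidate (name, occupant_risk, bystander_risk) with the lowest cost."""
    return min(candidates, key=lambda c: trajectory_cost(c[1], c[2]))


if __name__ == "__main__":
    candidates = [
        ("swerve into wall", 0.9, 0.0),    # high risk to the occupant, none to the crowd
        ("continue ahead",   0.05, 2.0),   # low occupant risk, high risk to the crowd
    ]
    # With equal weights the planner swerves; raise OCCUPANT_RISK_WEIGHT to 5.0
    # and it continues ahead instead; the "choice" was implicit all along.
    print("planner picks:", pick_trajectory(candidates)[0])
[/CODE]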


In an accompanying essay, meanwhile, Harvard moral psychologist Joshua Greene analyzes the study and remarks that “Before we can put our values into machines, we have to figure out how to make our values clear and consistent.”
“What’s interesting about this paper is that it not only measures an aspect of public opinion, but really highlights a deep inconsistency in ordinary people’s thinking about it,” said Greene in an interview. “To me what’s valuable here is drawing out that inconsistency … and saying, ‘Hey folks, we have to figure out, what are our values here, what trade-offs are we willing or unwilling to make.’ ”
The core moral divide here is actually a deep and persistent one, Greene says, and hardly exclusive to issues involving autonomous vehicles. Rather, the tension is over doing the right thing for society, as opposed to doing the right thing for one or a few individuals.
“Whether you’re talking about an arms race, or polluting the oceans, or any number of other things, it’s the same,” he says. “It’s the ‘me’ option versus the ‘us’ option.”
https://www.washingtonpost.com/news...tists-wonder-how-driverless-cars-will-choose/
 