Who will be killed in a traffic accident involving a self-driving car?

According to Business Insider, experts say self-driving cars are already making profound decisions about whose lives are more valuable, and people should pay attention to the choices they make.

“Every time a self-driving car performs an evasive maneuver, it implicitly makes a decision about how dangerous different groups of people are,” Iyad Rahwan, a cognitive scientist at the Massachusetts Institute of Technology, wrote in an email.

In discussions of self-driving car ethics, the best-known question is the trolley problem, one of the most famous thought experiments in ethics: who should be sacrificed in an unavoidable traffic accident? For example, if a pedestrian suddenly falls in front of a fast-moving self-driving car, the car can swerve into a roadblock, possibly killing its passenger, or keep going, possibly running over the fallen pedestrian. What should the self-driving car do?

Rahwan and his colleagues study how people make decisions when no option is a good one. While these fatal scenarios are currently hypothetical, Rahwan and others in the industry say they are inevitable as self-driving cars become commonplace.

Ethical questions also arise in everyday driving: for example, how should a self-driving car behave when passing a cyclist or a pedestrian?

“When you drive on the road, you create a danger to those around you,” said Ryan Jenkins, a philosophy professor at California Polytechnic State University. “When we drive past cyclists and joggers, we try to stay as far away from them as possible because we think it’s safer. Even if we are very confident that we will not collide with them, we think about the possibility that something unexpected will force us to swerve, or that a cyclist will fall off his bike or a jogger will slip into the road.”

Answering these questions is not easy. "To truly ensure pedestrian safety, autonomous vehicles should be driven very slowly when there are pedestrians on the sidewalk, in case a pedestrian steps into the traffic lane," Noah Goodall, a scientist at the Virginia Transportation Research Board, wrote in an email.

Human drivers can answer many ethical questions intuitively, but these are not easy questions for artificial intelligence. Programmers of self-driving cars must either write explicit rules for each situation or rely on general driving rules to resolve it.

“On the one hand, the algorithms controlling a self-driving car may follow a set of explicit rules for making ethical trade-offs,” Rahwan wrote. “On the other hand, the decision a self-driving car makes when harm is unavoidable may emerge from the interaction of many software components, none of which is specifically responsible for handling ethical decisions.”

Business Insider noted that even if programmers keep the rules vague, a self-driving car's behavior, in specific situations and in general, still reveals the choices that have been made.

“To paraphrase renowned theologian Harvey Cox: having no choice is a choice,” Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, wrote in an email.

How are self-driving car companies dealing with these ethical questions? In many cases, they are trying to sidestep them.

Despite the huge amount of attention the trolley problem has received, the self-driving car industry has largely refused to comment or has shrugged it off. A Daimler executive told Car and Driver this fall that Mercedes-Benz self-driving cars will do whatever it takes to keep their passengers safe. Daimler later denied the claim, saying, "It is clear that neither programmers nor self-driving systems have the right to decide the value of human life." Daimler also said the trolley problem is not a real problem because the company is "committed to avoiding dilemmas altogether by implementing risk-averse driving strategies."

Of course, ethicists will point out that some risks, such as brake failure or the sudden, unpredictable behavior of other drivers, cyclists, pedestrians, and animals, are unavoidable, so it's not far-fetched to think that cars will have to make hard choices.

We can take Daimler’s claim that all lives are equal to mean that it doesn’t explicitly consider one group of people to be more valuable.

Google has disclosed more details than any other company about how its self-driving cars respond to traffic accidents.

In 2014, Sebastian Thrun, founder of Google's X division, said that if an accident were unavoidable, the company's self-driving cars would choose to collide with the smaller of two objects.

A 2014 Google patent related to lateral lane positioning follows similar logic, describing how self-driving cars should move away from trucks in one lane and closer to cars in another because it is safer to collide with smaller objects.
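The patent's logic can be sketched as a simple lateral-bias rule. The following is a hypothetical illustration only, not Google's actual code; the function name, risk scores, and offset scale are all invented for the example:

```python
# Hypothetical sketch of biasing a car's position within its lane away from
# the larger (riskier) of two adjacent objects. Not real vehicle software.

def lateral_offset(left_risk: float, right_risk: float,
                   max_offset: float = 0.5) -> float:
    """Return a lateral shift in meters (positive = move right).

    left_risk and right_risk are rough hazard scores for the objects in the
    adjacent lanes, e.g. proportional to their size. The car shifts away
    from the side with the higher score.
    """
    total = left_risk + right_risk
    if total == 0:
        return 0.0  # nothing nearby: stay centered in the lane
    # Scale the shift by how lopsided the two risks are.
    imbalance = (left_risk - right_risk) / total
    return max_offset * imbalance

# A truck (high score) on the left, a small car on the right: the result is
# positive, i.e. the car edges right, toward the smaller object.
print(lateral_offset(left_risk=9.0, right_risk=3.0))  # 0.25
```

The trade-off the article describes is visible in the sign of the result: whichever side scores lower absorbs the residual risk.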

Of course, colliding with a smaller object is an ethical choice: Such a choice protects the interests of passengers and minimizes the harm they suffer. It can also be seen as shifting the danger to pedestrians or passengers of small cars. As Patrick Lin, a professor of philosophy at California Polytechnic State University, pointed out in an email, "The smaller object could be a stroller or a small child."

In March 2016, Chris Urmson, then head of Google’s self-driving car team, described a more “sophisticated” rule to the Los Angeles Times: “Our self-driving cars will do their best to avoid hitting unprotected road users: cyclists and pedestrians. They will also avoid hitting moving objects.”
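Urmson's rule reads like a priority ordering over possible collision targets. A minimal sketch of such an ordering follows; the categories and cost values are hypothetical, invented purely for illustration:

```python
# Hypothetical priority ordering over unavoidable-collision outcomes:
# unprotected road users are avoided first, then moving vehicles, then
# static objects. Not actual self-driving software.

COLLISION_COST = {
    "pedestrian": 3,      # unprotected road users: highest cost to hit
    "cyclist": 3,
    "moving_vehicle": 2,
    "static_object": 1,   # e.g. a roadblock or parked car
}

def choose_target(options: list[str]) -> str:
    """Given the unavoidable options, pick the lowest-cost collision."""
    return min(options, key=COLLISION_COST.__getitem__)

print(choose_target(["pedestrian", "moving_vehicle", "static_object"]))
# static_object
```

Note that any such ordering is itself an ethical commitment: assigning the numbers is exactly the kind of decision the article says companies are reluctant to make explicit.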

This approach is more pragmatic than simply crashing into smaller objects, since it tries to protect the people most likely to be hurt in an accident. But it may be less popular with self-driving car buyers, who will surely want their cars to keep them safe at all costs.

How should the ethical issues of self-driving cars be handled? The industry has reached at least one consensus: this issue needs more discussion.

"Manufacturers and others are working with regulators and other stakeholders, such as drivers, passengers, and vulnerable road users, to address these situations and ensure that such ethical judgments and decisions are made consciously," the National Highway Traffic Safety Administration said in a September report.

Wayne Simpson, an executive at Consumer Watchdog, a nonprofit consumer advocacy group, agreed. In testimony submitted to the National Highway Traffic Safety Administration, he said, "The public has a right to know whether an autonomous vehicle prioritizes the safety of passengers, drivers or pedestrians when it is driving on the street, and what factors it considers. If these issues are not resolved, companies will limit their own responsibilities when designing cars, rather than complying with social customs, ethical practices or relevant laws and regulations."

The self-driving car industry seems to be "receptive to advice." Apple, a well-known self-driving car company, said it would conduct a "thoughtful exploration and incorporate feedback from industry leaders, consumers, federal agencies and other experts."

Ford echoed that sentiment, saying it was already "working with several leading universities and industry partners" to address the ethics of self-driving cars.

At the same time, Ford also warned against excessive philosophical thinking. Wayne Bahr, Ford's global director of automotive safety, said, "We are trying to solve this problem from a professional perspective and avoid getting stuck in unrealistic assumptions. A common problem in discussions about the ethics of self-driving cars is that the basic assumptions about the capabilities of self-driving cars are wrong. For example, questioning the value of one person's life over that of others assumes that self-driving cars can distinguish which person has higher value."

Business Insider said that Bahr's comments refer to other versions of the trolley problem, in which the choice of whom to kill takes into account factors such as age, legal status, and social value. In the short term, self-driving cars are unlikely to be able to classify people by these factors.

In the long run, the most ethical decision is to make the majority of cars on the road self-driving. After all, self-driving cars are far safer than human drivers and are expected to reduce traffic accidents by 90%.

But the widespread use of self-driving cars requires good laws that avoid missteps that could spark debate and litigation. In other words, it requires ethical standards.

“The sooner self-driving cars become commonplace, the fewer people will die in traffic accidents,” Rahwan and his colleagues Azim Shariff and Jean-François Bonnefon wrote in The New York Times. “But taking the psychological and technical challenges of self-driving cars seriously will be necessary to free people from the need to drive.”
