Is our society ready to let AI make decisions?

As technology advances at an accelerating pace, artificial intelligence (AI) is playing an ever larger role in decision-making. People increasingly rely on algorithms to process information, recommend courses of action, and even act on their behalf.

But if AI were really allowed to assist, or even replace, humans in making decisions, especially subjective, moral, and ethical ones, would people accept it?

Recently, a research team from Hiroshima University explored human reactions to the introduction of AI decision-making. Specifically, they studied the interaction between humans and self-driving cars and explored the question: "Is society ready to accept AI ethical decision-making?"

The team published their findings on May 6, 2022, in the Journal of Behavioral and Experimental Economics.

In the first experiment, the researchers presented 529 human subjects with a moral dilemma a driver might face. In the scenario, a collision was unavoidable, and the car's driver had to decide which of two groups of people to crash into: the accident would seriously injure one group while sparing the lives of the other.

The human subjects in the experiment had to rate the decisions of a car driver, who could be either a human or an AI, in order to measure the biases that people might have towards the ethical decision-making of AI.

(Source: Hiroshima University)

In their second experiment, 563 human subjects answered questions posed by the researchers to determine how people would react to ethical decisions about AI once it becomes a part of society.

This experiment involved two scenarios. In one, a hypothetical government had decided to allow self-driving cars to make moral decisions; in the other, subjects could "vote" on whether to allow self-driving cars to make moral decisions. In both cases, subjects could choose to support or oppose the decisions made by the technology.

The second experiment was designed to test the effects of two alternative ways of introducing AI into society.

The researchers observed that when subjects were asked to evaluate the ethical decisions of either human or AI drivers, they showed no clear preference for either. However, when subjects were asked for their explicit opinion on whether AI should be allowed to make ethical decisions on the road, they expressed markedly stronger opinions against AI-driven cars.

(Source: Jonesday)

The researchers believe the difference between the two results is due to a combination of two factors.

The first factor is that many people believe society as a whole does not want AI to make ethical decisions, so when asked for their views on the matter, their answers are colored by what they believe others think.

"In fact, when participants were explicitly asked to distinguish their answers from society's, the differences between the AI and human drivers disappeared," said Johann Caro-Burnett, assistant professor at the Graduate School of Humanities and Social Sciences at Hiroshima University.

The second factor is that the effect of allowing public discussion when this new technology is introduced varies from country to country. "In regions where people trust the government and institutions are strong, information and decision-making power improve how subjects evaluate AI ethical decision-making. In contrast, in regions where people distrust the government and institutions are weak, decision-making power worsens how subjects evaluate AI ethical decision-making," said Caro-Burnett.

"We found that there is a social fear about AI ethical decision-making. However, the root of this fear is not inherent in the individual. In fact, this rejection of AI comes from what the individual considers to be society's view," said Shinji Kaneko, professor at the Graduate School of Humanities and Social Sciences at Hiroshima University.

Figure | On average, people's evaluations of AI drivers' ethical decisions were no different from those of human drivers. However, people do not want AI to make ethical decisions on the road (Source: Hiroshima University)

Thus, when not explicitly asked, people did not show any signs of bias toward AI ethical decision-making. However, when explicitly asked, people showed aversion to AI. Furthermore, with increased discussion and information on the topic, AI acceptance improved in developed countries, but worsened in developing countries.

The researchers believe that this rejection of new technologies, driven mainly by individuals' beliefs about social opinion, is likely to apply to other machines and robots as well. "It is therefore important to determine how individual preferences aggregate into a social preference. Moreover, as our results suggest, such conclusions will also differ between countries," says Kaneko.

References:

https://www.hiroshima-u.ac.jp/en/news/71302

https://linkinghub.elsevier.com/retrieve/pii/S2214804322000556
