Soul-searching question: How can humans make robots "moral"?

With the rapid development of artificial intelligence (AI), automated systems that require no human control have proliferated, greatly enriching people's material and cultural lives. Inevitably, the technology has also entered the military field, giving rise to a variety of autonomous weapons, such as unmanned aircraft, ships, tanks, and even humanoid super-intelligent combat robots. Once activated, these weapons can select targets and act without human intervention, which raises an obvious concern: they might kill indiscriminately, changing the nature of war and causing serious harm to humanity.

So, a question arises: Is there any way to make robots abide by human morals?

Controversy over “killer robots”

The United States was the first country to develop and deploy this type of autonomous weapon (commonly known as "killer robots"); the United Kingdom, Japan, Germany, Russia, South Korea, and Israel have since developed their own systems. Of these weapons, drones are the most numerous, and the United States fields the largest number.

Israel's Harpy drone

It is reported that the US military currently operates some 7,000 drones of various types, ranging from palm-sized micro air vehicles to the giant "Global Hawk," whose wingspan exceeds that of a Boeing 737. Generally equipped with cameras and image processors, they can be dispatched to destroy targets without relying on any communication or control link. The US military is reported to have used drones extensively in Afghanistan, causing many innocent casualties.

The United States' large unmanned reconnaissance aircraft RQ-4 Global Hawk

In March 2020, on the battlefield of the Libyan civil war, a Turkish-made suicide drone operating in fully autonomous mode attacked a soldier. The report on the incident released through the UN Security Council shocked international public opinion and triggered profound moral and legal debates. Supporters argue that battlefield robots, if used properly, can reduce soldiers' casualties and save lives. The prevailing view in the scientific community, however, is that battlefield robots are a threat, not only because they act autonomously but because they hold the power of life and death. For this reason, the United Nations issued a document warning that the advent of "killer robots" shakes the international conventions governing the conduct of war and calling for a "freeze" on the research and development of related technologies, with the ultimate goal of permanently banning such weapons and never handing the power of life and death to a robot.

An interesting experiment

The reason behind the United Nations' appeal is that robots cannot be held responsible for their own actions: they can tell whether the other party is a human or a machine, but it is almost impossible for them to tell whether that human is a soldier or a civilian. A soldier can use common sense to judge whether the woman in front of him is pregnant or concealing explosives; a robot is unlikely to manage this. And if terrorists were to use "killer robots" in place of suicide attackers, it would open a modern "Pandora's box," with disastrous consequences.

So, is there a way to program robots with "moral values" and "legal awareness" so that they become "good people" with benevolent intentions? In other words, is it possible to confine robots' actions within the bounds of law?

This is an interesting question that has inspired a group of top scientists, some of whom have conducted actual experiments. For example, Alan Winfield and his colleagues at the Bristol Robotics Laboratory in the UK programmed a robot to prevent other anthropomorphic robots from falling into a hole, an experiment inspired by the "Three Laws of Robotics" proposed by the famous American science fiction writer Isaac Asimov.

At the beginning of the experiment, the robot completed the task smoothly: when a humanoid robot moved toward the hole, the rescue robot would rush up and push it away to keep it from falling in. The researchers then added a second humanoid robot, with both moving toward the hole at the same time, forcing the rescue robot to choose. Sometimes it successfully rescued one of the "people" while the other fell into the hole; a few times it even tried to save both. In 14 of 33 trials, however, it wasted time dithering over which one to save, and both "people" fell into the hole. The public demonstration of this experiment drew worldwide attention. Winfield later remarked that although the rescue robot can rescue others according to its programmed rules, it does not understand the reasons behind the behavior at all; it is, in effect, a "moral zombie."
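
To make the dilemma concrete, here is a minimal, purely illustrative simulation of the choice the rescue robot faces. The one-dimensional distances, the fixed rescue time, and the closest-first rule are assumptions of this sketch, not details of Winfield's actual controller; the only behavior it reproduces is that a symmetric tie can leave the rescuer dithering until both "people" are lost.

```python
import random

RESCUE_TIME = 2  # steps the rescuer stays occupied per rescue (an assumed cost)

def simulate(trial_seed, steps=20):
    """Toy simulation of the rescue dilemma (not Winfield's real controller).

    Two 'humans' (A and B) walk toward a hole; the rescuer intercepts
    whichever is currently closer.  When both are equally close it keeps
    switching targets and rescues no one that step -- the 'moral zombie' failure.
    """
    random.seed(trial_seed)
    humans = {"A": random.choice([2, 3, 4]), "B": random.choice([2, 3, 4])}
    saved, lost, busy = set(), set(), 0

    for _ in range(steps):
        active = {k: d for k, d in humans.items() if k not in saved | lost}
        if not active:
            break
        if busy:
            busy -= 1                      # still carrying out the last rescue
        else:
            best = min(active.values())
            closest = [k for k, d in active.items() if d == best]
            if len(closest) == 1:          # unambiguous target: commit to it
                saved.add(closest[0])
                busy = RESCUE_TIME
            # a tie means the rescuer dithers and rescues no one this step
        for k in active:                   # everyone not yet saved keeps walking
            if k not in saved:
                humans[k] -= 1
                if humans[k] <= 0:
                    lost.add(k)
    return saved, lost

if __name__ == "__main__":
    outcomes = [simulate(seed) for seed in range(33)]
    both_lost = sum(len(lost) == 2 for _, lost in outcomes)
    print(f"trials where both 'people' fell in: {both_lost} / 33")
```

Running this toy shows that only the tied starting positions produce the "both fall in" outcome, which is precisely the failure mode Winfield described.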

As robots become more integrated into people's lives, such questions need clear answers. For example, a self-driving car may one day have to weigh "protecting the safety of its occupants" against "avoiding harm to other drivers or pedestrians," and solving that problem may prove very difficult. The good news is that some robots may already offer a preliminary answer: Ronald Arkin, a robotics expert at the Georgia Institute of Technology in the United States, has developed a set of algorithms for military robots known as the "ethical governor." In simulated combat tests it helps a robot make sensible battlefield decisions, such as not firing near protected sites like schools and hospitals, and minimizing casualties. These, however, are relatively simple situations; in complex ones the results can be very different. It seems that answering the question of whether robots can make moral and ethical choices will not be accomplished quickly: it involves a series of extremely advanced and complex artificial intelligence techniques, and there is still a long way to go before substantial progress is made.
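
The kind of constraint described above can be sketched as a deny-by-default rule filter. The rules, thresholds, field names, and the `Target` type below are invented for illustration; this is not Arkin's actual ethical-governor code.

```python
from dataclasses import dataclass

@dataclass
class Target:
    kind: str                        # e.g. "combatant", "vehicle" (assumed labels)
    distance_to_protected_m: float   # distance to the nearest school or hospital
    expected_casualties: int         # estimated collateral casualties

# Illustrative constraints in the spirit of a rule-based "ethical governor".
MIN_DISTANCE_TO_PROTECTED_M = 500.0
MAX_ACCEPTABLE_CASUALTIES = 0

def engagement_permitted(target: Target) -> tuple[bool, str]:
    """Return (allowed, reason); denies by default unless every rule passes."""
    if target.kind != "combatant":
        return False, "target not positively identified as a combatant"
    if target.distance_to_protected_m < MIN_DISTANCE_TO_PROTECTED_M:
        return False, "too close to a protected site (school or hospital)"
    if target.expected_casualties > MAX_ACCEPTABLE_CASUALTIES:
        return False, "expected collateral casualties exceed the limit"
    return True, "all constraints satisfied"

if __name__ == "__main__":
    t = Target(kind="combatant", distance_to_protected_m=120.0,
               expected_casualties=0)
    print(engagement_permitted(t))  # denied: too close to a protected site
```

Note that such a filter presupposes exactly the hard part: reliably recognizing what counts as a "combatant" or a "protected site" in the first place.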

The difficulty of making robots "moral"

This is a major problem in the development of artificial intelligence. To answer and solve it properly, we must first clarify a key point: is it artificial intelligence itself, or the humans behind it, that must uphold ethical standards?

More than a century ago, some warned that the invention of electricity, and the telegraph in particular, would harm human intelligence: because people would constantly transmit information by telecommunication and have little time to think, their thinking ability would weaken and their brains might eventually atrophy. More than 100 years later, similar warnings were directed at Google. Seeing social networks full of nonsense, some people mistakenly concluded that the predicted "weakening of thinking ability" had become a reality. Yet even if that were true, it would be unfair to blame the development of electricity, for it is obviously not electrical technology that bears primary responsibility for the flood of nonsense, but the people who act through it.

Today, the development of artificial intelligence faces a similar situation: on the one hand, people greatly appreciate the unparalleled driving force it has brought to every industry; on the other, they worry about its possible drawbacks. The emergence of "killer robots" is a typical example.

The great power of artificial intelligence comes mainly from its excellent pattern-recognition capabilities. For example, during the COVID-19 pandemic it can identify asymptomatic infections and help curb the spread of the disease; it can identify and classify endangered animals by the markings on their fur, which helps in monitoring them and reducing the risk of extinction; it can identify the traces left in ancient texts and determine the number of authors; it can even detect abnormal behavior in examination rooms through intelligent algorithms and identify cheaters...

Even so, artificial intelligence is not omnipotent. When its development touches on questions of human consciousness, emotion, will, and morality, it falls short and is sometimes completely powerless. The laws of war, for instance, have a history of thousands of years; humans, swayed by emotion and other factors, often break the rules, whereas robots will not. Such questions belong to the realm of philosophy and ethics. To solve the moral and ethical issues of artificial intelligence, therefore, we cannot look at them from a technical perspective alone but must also weigh philosophical and ethical factors, which inevitably makes the problem harder to solve.

How to make robots abide by human morality

The difficulty can be summed up in two aspects. First, it is hard to handle abstract concepts such as "causing harm." Killing is obviously harm, but vaccination can also cause pain and, in rare cases, even death; at what point does something count as "harm"? Second, the difficulty lies in assessing possible harm while being required to avoid harming humans. If an intelligent system encounters two targets with the same potential for harm and, at a loss, cannot decide which one to lock onto, it cannot be considered a mature system.
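
As a purely illustrative sketch of the second difficulty: the `harm_score` weighting and the candidate data below are hypothetical, not a real targeting model. The only point is that an exact tie leaves a rule of this kind with no principled basis to choose.

```python
from typing import Optional

def harm_score(expected_civilian_harm: float, expected_combatant_harm: float) -> float:
    # Hypothetical weighting: civilian harm counts far more heavily.
    return 10.0 * expected_civilian_harm + expected_combatant_harm

def choose_target(candidates: dict[str, tuple[float, float]]) -> Optional[str]:
    """Pick the candidate whose engagement minimizes the harm score.

    Returns None when the minimum is not unique -- the system has no
    principled way to break the tie, mirroring the 'immature system'
    problem described in the text.
    """
    scores = {name: harm_score(*harms) for name, harms in candidates.items()}
    best = min(scores.values())
    winners = [name for name, s in scores.items() if s == best]
    return winners[0] if len(winners) == 1 else None

if __name__ == "__main__":
    print(choose_target({"T1": (0.1, 1.0), "T2": (0.3, 0.5)}))  # 'T1'
    print(choose_target({"T1": (0.2, 1.0), "T2": (0.2, 1.0)}))  # None (tie)
```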

Although this problem is very difficult, and some even consider it "unsolvable," scientists have not slowed their pace in exploring the unknown and overcoming the obstacles. They have given much thought to how robots might be made to abide by human morality and have put forward many ideas, suggestions, and plans, some of which are already being tested. In summary:

First, drawing on the "Three Laws of Robotics" proposed by science fiction master Isaac Asimov, impose strict regulations stipulating that intelligent robots may not harm humans, nor, through inaction, allow humans to come to harm; if these rules are violated, the responsible R&D personnel or institutions are punished.

Second, appeal to the majority. Generally speaking, the interests of the majority often determine what is right and what is wrong. Suppose, for example, that an accident involving pedestrians is imminent and irreversible, and we must decide what the autonomous vehicle should do: save its passengers or the pedestrians? The "Moral Machine" experiment in the United States analyzed such decisions and concluded that if an action brings the greatest happiness to the greatest number of people, it is a good thing worth doing; here, happiness is the principal expected outcome (a minimal sketch of this rule appears after this list).

Third, confront the real dilemma of the future battlefield. To be combat-ready, future soldiers must eliminate a perceived threat before it eliminates them. This can be pursued by programming robots to follow certain "moral rules," especially for soldiers in combat environments. Commanders may find, however, that if the "moral" restrictions built into a robot endanger their own combatants, the programming interferes with the mission. An artificial intelligence system might, for instance, lock a soldier's gun to prevent the killing of an apparent civilian who is actually an enemy combatant or terrorist. From this perspective, in urban terrorist incidents it seems impossible to determine who is a civilian and who is not. Programming of this kind is therefore an extremely complex undertaking that requires top experts from every field to pool their efforts to make a breakthrough.
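
As promised under the second idea above, here is a bare-bones sketch of the "greatest happiness for the greatest number" rule. The `Option` type and the net-benefit count are assumptions made for illustration; real "Moral Machine"-style studies gather human judgments rather than computing answers this way.

```python
from dataclasses import dataclass

@dataclass
class Option:
    label: str
    people_protected: int   # rough proxy for the "happiness" produced
    people_harmed: int

def greatest_happiness(options: list[Option]) -> Option:
    """Pick the option that benefits the most people net of those harmed.

    A bare-bones utilitarian count: it deliberately ignores every nuance
    (age, responsibility, uncertainty) that makes the real debate hard.
    """
    return max(options, key=lambda o: o.people_protected - o.people_harmed)

if __name__ == "__main__":
    swerve = Option("swerve: protect 3 pedestrians, risk 1 passenger", 3, 1)
    stay = Option("stay in lane: protect 1 passenger, risk 3 pedestrians", 1, 3)
    print(greatest_happiness([swerve, stay]).label)
```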

In summary, making robots subject to moral constraints is a large goal. The fundamental way to achieve it is not to make artificial intelligence technology itself conform to ethical standards, but for humans to abide by ethical standards when developing and using artificial intelligence. Ethical robots may still be a myth, but ethical people are not. It is therefore entirely possible, and absolutely necessary, to urge scientists and military commanders around the world to act ethically, that is, to use automated weapons only when genuinely necessary, such as when their own safety is under threat. In this regard, humans can do a great deal. As Alan Mathison Turing, the father of artificial intelligence, said: "We can only see a short distance ahead, but we can see plenty there that needs to be done." The prospects, then, are bright.

The United States' Phalanx automatic weapon system

By Wang Ruiliang
