AI autonomous weapons are here: our generation's Oppenheimer moment has arrived

In the conflict between Russia and Ukraine, a video that recently surfaced has drawn wide attention: Ukrainian drones penetrated deep into Russian territory, more than 1,000 kilometers from the border, and destroyed Russian oil and gas infrastructure. Analysts believe it is likely that artificial intelligence was guiding the drones to their targets. A Ukrainian military commander also said that Ukraine "has carried out completely robotic operations without human intervention."

In other words, this type of AI weapon system can select and attack targets on its own, without human intervention. In fact, applying AI to war is far simpler than most of us imagine: existing AI and robotics technology already provides suitable physical platforms, and building a system that can find human targets and kill them is technically easier than developing a self-driving car; it is within reach of a graduate-student project.

Some observers therefore worry that in the future, any faction could send swarms of cheap AI drones to eliminate specific people using facial recognition. Worse, handing the kill decision over to machines would completely change the nature of war. That is why lethal autonomous weapon systems are considered the third revolution in warfare, after gunpowder and nuclear weapons.

“I believe this is the Oppenheimer moment of our generation,” Austrian Federal Minister for European and International Affairs Alexander Schallenberg stressed at the first international conference on autonomous weapon systems in Vienna on April 29.

Written by | Ren

War is like a shadow that human civilization can never get rid of. Today, this bloody game, which has always been played by humans, is being taken over by artificial intelligence (AI).

Whether in the Russia-Ukraine conflict or in the wars of the Middle East, most prominently the Israel-Palestine conflict, the shadow of AI is already discernible: it (under human supervision) controls drones, missiles, guns, mines and other weapons, identifies targets and launches attacks.

This is the first time we have truly experienced both faces of AI: it can benefit humanity, but it can also take human lives in an instant. Unfortunately, we have no time to mourn the dead. In the military field, this scale of AI application is only the beginning.

What we are about to see is an AI arms race gradually unfolding, in which weapons become increasingly intelligent and autonomous and humans step further and further back from their decision-making, until lethal autonomous weapons systems (LAWS) emerge and are deployed on the battlefield.

We are not ready to face all this. When one day AI weapons can attack autonomously, when the life and death of countless people can be determined by just one line of code, can you accept that your fate will be decided by a cold algorithm?

It should be clear to everyone that letting an algorithm decide whether a person lives or dies is a terrible idea. Fortunately, we still have time to prevent this from happening; unfortunately, time is running out.

Remote operation vs autonomous decision making

According to the definition of the US Department of Defense, the levels of weapon autonomy can be divided into three types: "human in the loop", "human on the loop" and "human out of the loop", depending on the role played by humans:

Human in the loop: the weapon's actions are entirely decided and controlled by a human;

Human on the loop: once activated, the weapon can make decisions and act on its own according to its instructions, while a human supervises and can intervene at any time to take back decision-making and control;

Human out of the loop: once activated and assigned a mission objective, the weapon makes decisions and carries out its actions completely on its own (a minimal illustrative sketch follows this list).
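To make the distinction concrete, here is a minimal, purely illustrative Python sketch; the names and logic are our own didactic simplification, not any real military system's design. It models the three modes as an enum and uses a toy decision gate to show where human authorization or veto power sits in each mode.

```python
from enum import Enum, auto

class AutonomyMode(Enum):
    # Illustrative labels for the three levels described above (a simplification only).
    HUMAN_IN_THE_LOOP = auto()      # a human decides and controls every action
    HUMAN_ON_THE_LOOP = auto()      # the system acts on its own; a human supervises and can veto
    HUMAN_OUT_OF_THE_LOOP = auto()  # the system acts with no human input at all

def action_allowed(mode: AutonomyMode, human_authorized: bool, human_veto: bool) -> bool:
    """Toy decision gate showing where human control enters in each mode."""
    if mode is AutonomyMode.HUMAN_IN_THE_LOOP:
        return human_authorized   # nothing happens without explicit human approval
    if mode is AutonomyMode.HUMAN_ON_THE_LOOP:
        return not human_veto     # proceeds autonomously unless a supervisor steps in
    return True                   # out of the loop: no human input is consulted at all

# Example: with no authorization and a supervisor vetoing, only the fully autonomous mode proceeds.
for mode in AutonomyMode:
    print(mode.name, action_allowed(mode, human_authorized=False, human_veto=True))
```

The point of the sketch is simply that the difference between the three levels is where, if anywhere, a human decision enters the control flow.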

The good news is that, as far as we know, no true “lethal autonomous weapons systems” have been deployed. According to the United Nations, they are “weapons that locate, select and attack human targets without human supervision.”

Nor are we talking about weapons that are remotely operated by humans, such as the well-known American "Predator" drone or the improvised bomb-dropping drones built by Ukrainian frontline soldiers, which release grenades only under human control.

As understanding of these weapons deepens and electronic-jamming technology advances, their effectiveness will keep declining. Some countries and companies have therefore turned their attention to developing autonomous weapons such as drones.

The U.S. Defense Advanced Research Projects Agency (DARPA) has announced two programs that reveal the planned uses of its autonomous weapon systems: Fast Lightweight Autonomy (FLA) and Collaborative Operations in Denied Environment (CODE). The former aims to program small rotary-wing drones to fly at high speed through urban areas and inside buildings without human assistance. The latter aims to develop teams of autonomous aircraft that carry out every step of a strike mission (finding, identifying, tracking, targeting, engaging and assessing) when enemy jamming makes communication with human commanders impossible.

Other countries have also developed related weapons. Since 2017, a Turkish company called STM has been selling the Kargu drone, which is the size of a dinner plate and can carry 1 kilogram of explosives. According to the company's 2019 website description, the drone is capable of "autonomously and precisely" striking vehicles and personnel, "selecting targets on images" and "tracking moving targets."

Israel's Harpy drone can fly over an area for hours, looking for targets that match visual or radar signatures, then destroying them with explosives. The U.S. Navy and the Defense Advanced Research Projects Agency have also developed the Sea Hunter unmanned warship, which can perform tasks such as anti-submarine tracking.

Around the world, experimental submarines, tanks and ships have been built that can use AI to drive and fire autonomously. From the information disclosed so far, these weapons can be remotely controlled or can perform tasks autonomously once activated, and it is hard to tell to what extent humans are involved in their decision-making.

AI reports for battle

Autonomous weapons of some kind have existed for at least a few decades, including heat-seeking missiles and even pressure-activated mines dating back to the American Civil War (if you define autonomy loosely). Now, however, the development and use of AI algorithms is expanding their capabilities.

In fact, the application of AI in war is much simpler than we think.

“The technical ability for a system to find humans and kill them is much easier than developing a self-driving car,” said Stuart Russell, a computer scientist at the University of California, Berkeley, and a prominent activist against AI weapons. “This is a graduate student project.”

Existing AI and robotics technologies already provide the necessary building blocks: physical platforms, perception, motor control, navigation, mapping, tactical decision-making and long-term planning. All a weapons manufacturer has to do is combine them as needed. For example, a system that uses a reinforcement-learning algorithm such as DeepMind's DQN to learn tactics, combined with existing autonomous-driving technology, could in principle carry out urban search-and-destroy missions.
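To give a sense of how off-the-shelf these algorithmic building blocks are, the sketch below trains a standard DQN agent on the benign CartPole benchmark using the open-source stable-baselines3 and gymnasium libraries (assumed installed). It is a minimal, purely illustrative example of how little code a textbook reinforcement-learning component takes; nothing in it is specific to weapons or to any military system.

```python
# Minimal sketch: a standard DQN agent learned with an off-the-shelf library
# on a harmless toy benchmark. Assumes `pip install stable-baselines3 gymnasium`.
import gymnasium as gym
from stable_baselines3 import DQN

env = gym.make("CartPole-v1")              # classic control toy problem
model = DQN("MlpPolicy", env, verbose=0)   # standard DQN implementation, off the shelf
model.learn(total_timesteps=50_000)        # learns a competent balancing policy

# Roll out the learned policy for one episode.
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(int(action))
    done = terminated or truncated
```

The notable thing is not the toy task but the brevity: the learning algorithm itself is a commodity, which is exactly Russell's point that the difficulty lies elsewhere than in the AI.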

It’s hard to know how AI weapons will perform on the battlefield, in large part because the military doesn’t readily release such data.

In 2023, Tom Copinger-Symes, deputy commander of the UK Strategic Command, was asked directly about AI weapon systems during a UK parliamentary inquiry. He declined to disclose much, saying only that the British military was conducting benchmark studies comparing autonomous weapon systems with non-autonomous ones.

Although real battlefield data are scarce, researchers point out that AI's processing and decision-making capabilities could in theory provide huge advantages. In rapid image recognition, for example, algorithms have been outperforming humans for nearly a decade, which paves the way for accurately identifying targets in images.
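As an illustration of how accessible mature image recognition already is, the short sketch below runs an off-the-shelf pretrained classifier from the open-source torchvision library (assumed installed) on an ordinary photo; the file name is a placeholder. It is purely illustrative of how few lines standard image recognition now takes and has nothing to do with targeting.

```python
# Minimal sketch: off-the-shelf image recognition with a pretrained model.
# Assumes `pip install torch torchvision pillow`; "photo.jpg" is a placeholder path.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT          # ImageNet-pretrained weights
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()           # the matching resize/normalize pipeline

img = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = model(img).softmax(dim=1)[0]

top = int(probs.argmax())
print(weights.meta["categories"][top], float(probs[top]))  # predicted label and confidence
```

Note that the output is a probability, not a certainty; this is the same fallibility the article returns to later when discussing misidentification.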

As early as 2020, an AI model defeated an experienced F-16 fighter pilot in a series of simulated dogfights, thanks largely to "aggressive and precise maneuvers that human pilots cannot match." In a fraction of a second, AI can make highly complex decisions about which maneuvers to perform, how far to stay from an opponent and what attack angle to take.

In April this year, the US military announced the completion of a groundbreaking experiment in which an AI-controlled F-16 fighter jet flew real aerial combat against a human-piloted fighter. The AI-controlled aircraft carried out offensive and defensive maneuvers, at times closing to within 2,000 feet of the human-piloted jet.

The modified F-16 fighter jet, known as the X-62A, is piloted by artificial intelligence. Image source: file photo

This marks a breakthrough for the U.S. military in autonomous air combat by AI-controlled aircraft. If the technology ever reaches the battlefield, it will inevitably fall into the category of lethal autonomous weapons, because fast-changing dogfights are extremely difficult to control remotely.

In theory, AI can also be used in other aspects of warfare, including developing lists of potential targets. Earlier media reports said Israel used AI to create a database containing the names of tens of thousands of suspected militants, but the Israeli military denied it.

Controlling AI weapons is difficult

The power of AI forces people to think about how to better control its use, both from a practical application perspective and from an ethical perspective.

Human efforts to control and regulate the use of weapons date back hundreds of years. For example, disciplined medieval knights agreed not to aim their lances at each other's horses. In 1675, France and the Holy Roman Empire agreed to ban the use of poison bullets.

Today, the main international vehicle for restricting weapons is the United Nations Convention on Certain Conventional Weapons (CCW), which entered into force in 1983 and has been used to ban, among other things, blinding laser weapons.

Researchers have been working for years to control the emerging threat posed by lethal autonomous weapons systems, and U.N. Secretary-General Antonio Guterres said last July he wanted to ban the use of weapons without human supervision by 2026.

The Stop Killer Robots campaign. Image source: Internet

In December 2023, the United Nations passed a key resolution placing the issue of AI weapons on the agenda of the UN General Assembly in September 2024.

Experts say the move offers the first realistic way for countries to take action against AI weapons. But that is easier said than done.

Some believe AI-assisted weapons could be more accurate than human-guided weapons, potentially reducing so-called collateral damage, such as civilian casualties and destruction of residential areas, as well as the number of soldiers killed and maimed, while helping weaker countries and groups to protect themselves.

But the flaw in this thinking is that hostile states would also have such weapons, and militarily more powerful states would likely have more powerful autonomous weapons that would cause more damage, not less.

Relying on autonomous weapons to reduce collateral damage also rests on two premises, both of which are hard to guarantee.

First, that AI will not make mistakes, or at least will make them less often than humans. But no one can currently guarantee that AI will not make the same mistakes humans do, such as identifying civilians as targets.

Second, that autonomous weapons will be used in essentially the same scenarios as human-controlled weapons (such as rifles, tanks and Predator drones). This is also questionable: once new weapons appear, the ways of fighting and the targets of attack change with them, and their use is difficult to predict or control.

Most AI experts therefore believe that racing ahead with autonomous weapons could lead to catastrophic mistakes. Many are deeply concerned about the ethics of letting AI choose targets and make decisions on its own.

In 2007, the British military was forced to hastily redesign its own Brimstone missile for use in Afghanistan over concerns it might mistake a bus carrying children for a truckload of insurgents.

That's because while AI is good at recognizing images, it is not infallible. We have all heard of ridiculous image-recognition mistakes, like mistaking a cat for a dog; in war, mistaking a child for a terrorist can have disastrous consequences.

In addition, AI systems may be hacked, may act contrary to their instructions, or may misjudge a situation, escalating a conflict or lowering the threshold for war.

A principle commonly advanced by researchers and the military is that autonomous weapons must keep a "human in the loop." But where and how humans should or must be involved remains controversial: should a human visually verify the target before authorizing a strike? Must a human be able to cancel a strike if battlefield conditions change, for example if civilians enter the combat zone?

At the same time, AI autonomous weapons make accountability complicated and murky. When something goes wrong, the weapon itself obviously cannot bear responsibility. So who is responsible? The person who authorized the attack? The weapon's manufacturer? Or the developer of the AI system?

Given the complexity and destructive power of AI autonomous weapons, many people (probably the vast majority of people on Earth) believe the best solution is to ban lethal autonomous weapons outright. But all of these issues need to be recognized, discussed, debated and negotiated at the international level (through the United Nations), and compromises will have to be found. The contentiousness of the topic and the sheer workload mean it will take years before the first substantive step can be taken.

But civil society can act now. Professional associations in the fields of AI and robotics should develop and enforce codes of conduct that prohibit work on lethal autonomous weapons. There are many precedents in this regard, such as the American Chemical Society’s strict chemical weapons policy and the American Physical Society’s call for the United States to ratify the Comprehensive Nuclear-Test-Ban Treaty and oppose the use of nuclear weapons against non-nuclear states in order to promote peace and security.

In short, AI-driven autonomous weapons are bound to emerge, and whether lethal or not, they will reshape the face of future warfare.

Humanity must not let this "weapon" slip out of control. We must face up to the risks and challenges it brings, seize every opportunity, and make sure its development stays under our control in time. Only in this way can we prevent it from turning from a "Terminator" into a terminator of humanity.

References:

[1] https://www.nature.com/articles/d41586-024-01029-0

[2] https://www.nature.com/articles/d41586-023-00511-5

[3] https://www.nature.com/articles/521415a

[4] https://san.com/cc/experts-warn-world-is-running-out-of-time-to-regulate-ai-in-warfare/

This article is supported by the Science Popularization China Starry Sky Project

Produced by: China Association for Science and Technology Department of Science Popularization

Producer: China Science and Technology Press Co., Ltd., Beijing Zhongke Xinghe Culture Media Co., Ltd.

