Artificial intelligence is getting smarter: should we give AI human rights?


A few years ago, the idea of legal personhood and legal rights for artificial intelligence might have appeared only in science fiction. That is no longer the case.

Do we now need to start thinking about the rights of AI as well?

You must fight for your AI rights

It’s already clear that artificial intelligence is getting closer and closer to replicating aspects of the human brain. By some measures, today’s artificial neural networks already contain more neurons than creatures like bees and cockroaches, and that number keeps growing.

The most ambitious projects aim to build more biologically faithful algorithms that replicate the way the human brain actually works, rather than simply mimicking how we remember things. There are also projects that aim to transfer human consciousness into machine form, and efforts like the OpenWorm project, which aims to digitally reconstruct the tiny hermaphroditic nematode C. elegans, whose connectome (the wiring diagram of its nervous system) is the only one ever completely mapped.

In a 2016 survey of 175 industry experts, 90 percent believed that artificial intelligence could reach parity with human intelligence by 2075.

Before we get to that point, as AI gradually surpasses animal intelligence, we will have to start thinking about how the rights of AI relate to the "rights" we grant animals. Is it cruel, say, to force a smart elevator to do nothing but move up and down all day? A few years ago, British technology writer Bill Thompson wrote that the instinct to develop AI products that cannot hurt people "reflects the assumption that AI is there to serve humans, not to think for itself."

However, the most pressing question we face right now is whether AIs should have legal rights. Put simply, should we consider granting them some form of legal personhood? That isn't as absurd as it sounds, nor would it mean that AI had "ascended" to some exalted status in our society. Rather, it reflects the role AI already plays, and will continue to play, in our lives.

Smart tools in a dumb-tool legal system

Currently, our legal system largely assumes that we live in a world of dumb tools. We may debate the importance of gun control, but we still convict the person who shot someone, not the gun itself. If the gun was defective, we hold the manufacturer responsible.

So far, this mindset has largely carried over to the world of AI and robotics. In 1984, the owners of an American company called Athlone Industries were taken to court over a batting practice machine deemed excessively dangerous. The case is worth mentioning mainly because the judge ruled that the suit be brought against Athlone rather than the machine itself, on the grounds that "robots cannot be sued."

In 2009, a British driver followed his GPS system's instructions along a narrow cliff-edge path, got stuck, and had to be towed back to the main road by police. Although he blamed the technology, a court found him guilty of negligence.

However, today's (and certainly tomorrow's) AI differs from older technologies in too many ways for this approach to hold. Smart devices such as self-driving cars are not merely used and controlled by humans; once instructed, they carry out tasks independently. Using machine learning algorithms, they collect and analyze information on their own and then make decisions. Blame for those decisions cannot rest entirely with the technology's creators.

As David Vladeck, a law professor at Georgetown University in Washington, points out, in the few in-depth case studies in this area, so many individuals and companies participated in designing, modifying, and integrating an AI system's components that it is hard to determine who is ultimately responsible. And AI's "black box" problem makes these systems notoriously opaque.

Vladeck has written that "some components may have been designed before the AI project was even conceived, and the designers of the components may never have imagined that their designs would be incorporated into any AI system, let alone one that causes harm." In such cases, it hardly seems fair to assign liability to a component designer who played no part in completing and operating the AI system, and a court may struggle to determine whether such a designer could have foreseen the harm that occurred.

The role of the corporation

Giving AI the status of a legal entity wouldn't be entirely unprecedented. Corporations have long held that role, which is why a corporation can own assets or be sued without the action being brought in the name of its CEO or directors.

Shawn Bayern, a law professor at Florida State University, has noted that although the theory remains untested, AI may already be able to acquire this status: an AI can be placed in control of a limited liability company, giving it a measure of legal independence. And if proposals like Bill Gates' "robot tax" are to be taken seriously at the legal level, the tax implications would need to be considered too.

Controversy remains, however. If an AI can perform actions for which responsibility is unclear, its creators may be able to shirk liability. It could also demotivate developers: when an AI fails to perform as expected, they would have a ready excuse not to develop it further.

There is also currently no meaningful way to punish an AI: sanctions like imprisonment or the death penalty mean nothing to it.

John Danaher, a law professor at the National University of Ireland, Galway, said of AI legal personhood: "I don't think it's the right thing to do, at least not right now. My guess is that for the foreseeable future it would be used primarily to shirk human responsibility and cover up antisocial activity."

That said, legal personhood has never rested on any benchmark of subjective consciousness.

“Today, corporations have legal rights and are considered legal persons, but most animals do not,” said Yuval Noah Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow. “This is despite the fact that corporations obviously have no consciousness, no personality, and no ability to experience happiness and pain. Animals, on the other hand, are conscious entities. Whether or not AI develops consciousness, there could be economic, political, and legal reasons to give it personality and rights, just as corporations can have legal personality and rights. In fact, AI may one day dominate certain companies, organizations, and even countries. This scenario is rarely mentioned in science fiction, but I think it is more likely to happen than the stories in Westworld and Appleseed.”

No longer science fiction

For now, these topics are still the stuff of science fiction, but as Yuval Noah Harari points out, that may not remain true for long. Given how AI is already used in the real world and how it relates to humans, questions like "who is responsible when an AI causes someone's death?" or "can a person marry their AI assistant?" will need answers in the future.

John Danaher put it this way: "The decision to grant legal personhood to any entity really breaks down into two sub-questions. Should this entity be treated as a moral subject, and so held accountable for its actions? And should it be treated as a moral object, and so protected from certain kinds of interference and violations of its integrity? My view is that AI should not be treated as a moral subject, at least not yet. But I think that in some cases it should be treated as a moral object. People can become deeply attached to artificial intelligence companions, so in many cases it would be wrong to modify or destroy those entities. This means we may owe duties to AIs not to destroy them or violate their integrity."

In other words, we shouldn’t allow companies to escape responsibility when it comes to AI tools. As AI systems make their way into the real world in everything from self-driving cars and financial traders to drones and robots, it’s critical that someone is held accountable for what they do.

Likewise, it would be a mistake to view the relationship between AI and humans as being of the same nature as previous, non-intelligent technologies.

There is a learning curve here. And even if we're not yet at the point where we technically need to worry about AI rights, that doesn't make it the wrong question to ask. So the next time Siri mishears you and asks whether it should search the web, maybe stop yelling at it, okay?

From Digital Trends

By Luke Dormehl

Via NetEase Intelligence
