A few years ago, the topic of legal personhood and legal rights for artificial intelligence might have appeared only in science fiction. That is no longer the case.

Fight for your AI rights

The largest research projects aim to create brain-inspired algorithms that replicate the way the human brain actually works, rather than simply mimicking the way we remember things. There are also projects that aim to transfer human consciousness into machine form, and projects like OpenWorm, which aims to construct the connectome (i.e., the wiring diagram of the nervous system) of a tiny hermaphroditic nematode worm, the only organism whose connectome has been completely mapped. In a 2016 survey, 90 percent of 175 industry experts said they believed artificial intelligence could reach human parity by 2075.

Before we get to that point, as AI gradually surpasses animal intelligence, we will have to start thinking about how the rights of AI relate to the "rights" we grant to animals. Imagine, for instance, that forcing a smart elevator to move up and down all day might one day be seen as cruel. A few years ago, British technology writer Bill Thompson wrote that the instinct to build AI products that cannot hurt us "reflects that people always think of AI as being in the service of humans, rather than thinking independently."

However, the most pressing question we face right now is whether AIs should have legal rights. Put simply, should we consider granting them some form of legal personhood? The idea is not as absurd as it sounds, nor would it mean that AI had "ascended" to some special status in our society. But it does reflect the role AI already plays, and will continue to play, in our lives.

Smart tools in an age of dumb laws

Historically, the law has treated machines as mere tools: when something goes wrong, responsibility falls on the people who built or operated them. So far, this mindset has been carried over directly to the world of AI and robotics. In 1984, the owners of an American company called Athlone Industries were taken to court because their robotic batting-practice pitching machines were considered excessively dangerous. The case is worth mentioning because the judge declared that the suit was brought against Athlone rather than the batting robot, since "robots cannot be sued." In 2009, a British driver followed his GPS system's instructions along a narrow path to the edge of a cliff, got stuck, and had to be towed back to the main road by police. Although he blamed the technology, a court found him guilty of negligent driving.

However, there are too many differences between today's (and certainly tomorrow's) AI and yesterday's technology for this approach to keep working. Smart devices such as self-driving cars or robots are not merely used and controlled by humans; once instructed, they carry out their tasks independently. They can use machine learning algorithms to collect and analyze information on their own and then make decisions, and responsibility for those decisions cannot be pinned entirely on the technology's creators. As David Vladeck, a law professor at Georgetown University in Washington, points out, in the few in-depth case studies in this area, so many individuals and companies were involved in the design, modification, and integration of AI-enabled components that it is difficult to determine who is ultimately responsible. And the "black box" nature of AI makes these systems notoriously difficult to scrutinize.
David Vladeck has written that "some components may have been designed before the AI project was even conceived, and the designers of the components may never have imagined that their designs would be incorporated into any AI system, let alone one that causes harm." In such cases, it hardly seems fair to attribute liability to a component designer who was not involved in the completion and operation of the AI system. It may also be difficult for a court to determine whether the designer of such a component could have foreseen the harm that occurred.

The role of the corporation

Florida State University law professor Shawn Bayern has noted that, although the idea has not been tested in court, artificial intelligence may already be able to attain this kind of status, because an AI can be placed in control of a limited liability company and thereby gain a form of legal independence. And if proposals like Bill Gates' "robot tax" are to be taken seriously at the legal level, tax considerations would also come into play.

The idea remains controversial, however. If an AI can perform actions for which responsibility is unclear, its creators may be able to shirk responsibility. It could also weaken the incentive for developers to build AI tools carefully, because they would have a ready excuse whenever the AI failed to perform as expected. And there is currently no meaningful way to punish an AI, since sanctions like imprisonment or the death penalty would mean nothing to it.

John Danaher, a law professor at the National University of Ireland, said of AI legal personhood: "I don't think it's the right thing to do, at least not right now. My guess is that for the foreseeable future, it would be used primarily to shirk human responsibility and cover up antisocial activity."

That said, legal personhood does not depend on any benchmark of subjective consciousness. "Today, corporations have legal rights and are considered legal persons, but most animals do not," said Yuval Noah Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow. "This is despite the fact that corporations obviously have no consciousness, no personality, and no ability to experience happiness and pain. Animals, on the other hand, are conscious entities. Whether or not AI develops consciousness, there could be economic, political, and legal reasons to give it personhood and rights, just as corporations have legal personhood and rights. In fact, AI may one day control certain companies, organizations, and even countries. This scenario is rarely depicted in science fiction, but I think it is more likely to happen than the stories in Westworld and Appleseed."

No longer science fiction

John Danaher said: "The decision to grant legal personhood to any entity basically breaks down into two sub-questions. Should this entity be treated as a moral subject and held accountable for its actions? And should this entity be treated as a moral object, and thus protected from certain kinds of interference and violations of its integrity? My view is that AI should not be treated as a moral subject, at least not yet. But I think that in some cases it should be treated as a moral object. People can come to depend a great deal on artificial intelligence companions, so in many cases it would be wrong to modify or destroy these entities. That means we may owe duties to AIs not to destroy them or violate their integrity." In other words, none of this means we should allow companies to escape responsibility when it comes to AI tools.
As AI systems make their way into the real world, in everything from self-driving cars and financial traders to drones and robots, it is critical that someone is held accountable for what they do. Likewise, it would be a mistake to view the relationship between AI and humans as being of the same nature as our relationship with earlier, non-intelligent technologies; the difference lies in the capacity to learn. And even if, technically, we are not yet at the stage where we need to worry about AI rights, that does not mean it is the wrong question to ask.

So when Siri doesn't hear you clearly and asks whether it should search the web, stop yelling at it, okay?

By Luke Dormehl | Source: NetEase Intelligence