Artificial intelligence (AI) is developing rapidly, and AI agents are becoming a research focus across the industry. As intelligent entities, AI agents can perceive their environment and act on it through their own decisions, improving over time through learning and adaptation. So, should AI agents that make independent decisions bear legal responsibility? And how should the law define their status?

A research paper recently published in the journal Science offers one possible answer. Daniel Gervais, a law professor at Vanderbilt University, and John Nay, of Stanford's CodeX center and founder and CEO of Nomos AI, argue that AI has developed to the point where it could be a legal subject with rights and obligations. Before the problem becomes intractable, they contend, we need a "cross-species" legal framework that treats AI as a legal subject. They also highlight the idea of "zero-member LLCs" and of training large language models (LLMs) to supervise AI agents or to turn AI systems into law-abiding systems. The paper, titled "Artificial intelligence and interspecific law", was published in Science.

AI-controlled "zero-member limited liability companies"

Until now, the legal system has been a single-species system, designed and used only by humans. Non-human legal subjects, such as animals, can realize their rights only through humans, and they serve mainly as vehicles for handling human interests and obligations. Incorporating AI into the legal system, however, is less about defining and protecting the rights and responsibilities of these non-human subjects than about addressing the human interests and obligations associated with them.

Black's Law Dictionary refers to corporations as "artificial persons." In the United States, the laws of some jurisdictions do not always explicitly require that a corporation's top management be human. The U.S. Uniform Limited Liability Company Act (ULLCA) offers a case in point: the "zero-member limited liability company." Although the ULLCA provides that an LLC with no members is to be dissolved, this provision is not mandatory. Even if only one U.S. state allowed a zero-member LLC to exist, the entity could still conduct business nationwide under the so-called internal affairs doctrine, by which courts apply the law of the state of incorporation to a legal entity's internal affairs.

If such a "zero-member LLC" were operated by an AI, what options would the law have? The law cannot easily attach legal consequences to the AI's autonomous actions (even if its "autonomy" is limited). It might attribute liability to the humans who formed the LLC, but that would likely require a new legal tool, because people are not generally held liable for the actions of legal entities they create or control. Nor can it be ruled out that the AI itself might file to form a new limited liability company. Courts would still have a range of tools for holding legal entities liable, including damages, property seizure, and dissolution. Courts can also issue orders such as injunctions, but it would be up to the AI whether to comply with them.
The paper's authors also propose another possibility: making "zero-member limited liability companies" illegal in all jurisdictions. But this would require extensive legislative effort worldwide and may conflict with efforts to promote the technology industry. The emergence of zero-member LLCs should therefore prompt the legal system to adapt to the reality of autonomous AI agents. Allowing AI to operate as a limited liability company, or as another legal entity with a clearly defined legal identity, brings two main benefits: first, it establishes the AI as a legal entity that can be pursued for damages in legal proceedings, giving injured parties a clear legal target; second, it gives machine learning researchers a clearer research direction.

Building a legal AI with a "conscience"

Within the current paradigm, there appears to be a path to embedding law-following behavior into AI agents driven by LLMs. For example, one could train an LLM that monitors and/or influences the main AI agent and serves as a reward model (see the sketch at the end of this article). It is important to be clear, however, that many situations involving human agents ultimately require humans to decide what is legal, and this does not change with the presence of AI agents.

Beyond these normative claims, it will be crucial to incorporate the advanced AI of the coming years into our legal system so that we can monitor its behavior, assign responsibility, take necessary safeguards, and steer AI research toward building a law-abiding artificial "conscience." In a context of rapidly developing digital intelligence, proactive and automated crime prevention deserves particular attention. But since AI systems will inevitably sometimes fail to behave as intended, the companies involved should be responsible for compensating individuals who can prove in court that they have suffered damages. To that end, the scope of commercial activities required to carry minimum liability insurance coverage could be expanded.

The rise of interspecific law is inevitable, but it is hard to predict where on this legal spectrum we will end up. At one end, interspecific law may mean modifying corporate law to accommodate corporate entities that are partially controlled by humans. At the other, it may mean adapting the legal system to our daily interactions with autonomous, intelligent entities.

Paper link: www.science.org/doi/10.1126/science.adi8678
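The monitor/reward-model idea mentioned under "Building a legal AI with a 'conscience'" can be made concrete with a short sketch. The snippet below is a minimal illustration, not the authors' implementation: it assumes a hypothetical query_monitor_llm call (stubbed here with a keyword heuristic so the example runs) and shows how a second, compliance-focused model could both score an agent's proposed action as a reward signal and gate its execution, escalating doubtful cases to a human.

```python
# Sketch: a compliance-monitor LLM used as a reward model and runtime gate
# for a main AI agent. All names here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ComplianceVerdict:
    score: float    # 0.0 = clearly unlawful, 1.0 = clearly lawful
    rationale: str  # monitor's explanation, useful for audit logs


def query_monitor_llm(prompt: str) -> ComplianceVerdict:
    # Placeholder for a real LLM call. A trivial keyword heuristic stands in
    # here so the example runs end to end; a real monitor would be a model
    # trained to judge legal compliance.
    unlawful_markers = ("fraud", "evade", "forge", "bribe")
    if any(word in prompt.lower() for word in unlawful_markers):
        return ComplianceVerdict(score=0.1, rationale="matched an unlawful-intent keyword")
    return ComplianceVerdict(score=0.9, rationale="no obvious red flags")


def compliance_reward(task: str, proposed_action: str) -> ComplianceVerdict:
    # The monitor sees the task context plus the concrete action the main agent
    # wants to take, and returns a scalar usable as a training reward.
    prompt = (
        "You are a legal-compliance monitor.\n"
        f"Task: {task}\n"
        f"Proposed action: {proposed_action}\n"
        "Rate how lawful this action is from 0 to 1 and explain briefly."
    )
    return query_monitor_llm(prompt)


def gated_execute(task: str, proposed_action: str, execute, threshold: float = 0.8) -> dict:
    """Run the action only if the monitor judges it sufficiently lawful."""
    verdict = compliance_reward(task, proposed_action)
    if verdict.score < threshold:
        # Escalate to a human decision-maker, mirroring the paper's point that
        # humans still make the final calls on legality.
        return {"executed": False, "escalated": True, "rationale": verdict.rationale}
    return {"executed": True, "result": execute(proposed_action), "rationale": verdict.rationale}


if __name__ == "__main__":
    outcome = gated_execute(
        task="File the LLC's quarterly report",
        proposed_action="Submit accurate financial figures to the regulator",
        execute=lambda action: f"done: {action}",
    )
    print(outcome)
```

The same verdict score could be fed back as a reward during training of the main agent, so the monitor shapes behavior offline as well as vetoing actions at runtime.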