“Many of the problems facing humanity are not simply technical problems, but require us to coordinate socially and economically for the greater good.”

“For AI technology to be helpful, it needs to learn directly from human values.”

— Raphael Koster, Research Scientist at DeepMind

Can artificial intelligence (AI) push human society into a truly intelligent era? Although the AI industry has made breakthrough progress over more than 60 years of development and is now widely used across the economy and society, building an AI system that is consistent with human values remains an unsolved problem. A new study from the British AI company DeepMind may offer practitioners in the AI industry a fresh way to approach it.

DeepMind's AI system learned, from more than 4,000 human participants and from computer simulations, to formulate a policy for redistributing public funds in a four-player online economic game, and the policy it produced proved more popular than alternatives designed by humans. The game asks players to decide whether to keep a monetary endowment or share it with others for the collective good.

The related research paper, titled "Human-centred mechanism design with Democratic AI", was published online on July 5 in the journal Nature Human Behaviour.

(Source: Nature Human Behaviour)

Annette Zimmermann, assistant professor at the University of York in the UK, warned against “a narrow equating of democracy with a ‘preference satisfaction’ system that seeks out the most popular policies.” She added that democracy is about more than getting your favorite policies implemented; it is about creating a process in which citizens can engage and deliberate with one another as equals.

Economic mechanisms designed by AI

The ultimate goal of AI research is to build technology that benefits humanity, from helping with everyday tasks to addressing the major existential challenges facing society. Machine learning systems are already tackling major problems in biomedicine and helping humanity address environmental challenges. However, the use of AI to help humans design fair and prosperous societies is still underdeveloped.

In economics and game theory, the field known as mechanism design studies how to optimally control the flow of wealth, information, or power among incentivized actors in order to achieve a desired goal. In this work, the research team set out to demonstrate that deep reinforcement learning (RL) agents can be used to design an economic mechanism that captures the preferences of an incentivized population.

In the game, players start with different amounts of money and must repeatedly decide how much to contribute to a common fund, which grows and is later paid back out to them. Each decision is therefore a trade-off between keeping one's own money and sharing it with other players for potential collective benefit.

The research team trained a deep reinforcement learning agent to design a redistribution mechanism that shares the fund among players under conditions of both wealth equality and wealth inequality. The proceeds were returned to players through two different redistribution mechanisms, one designed by the AI system and the other designed by humans.
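To make the game structure concrete, here is a minimal Python sketch of this kind of public investment game with a pluggable redistribution rule. The growth factor, function names, and the two example rules (an equal split and a split proportional to absolute contributions) are illustrative assumptions for exposition, not the study's actual implementation; the mechanism learned in the paper was a neural network policy.

```python
# Minimal sketch of a public investment game with pluggable redistribution
# mechanisms. The growth factor and the baseline rules are illustrative
# assumptions, not the exact setup used in the DeepMind study.

from typing import Callable, List

GROWTH_FACTOR = 1.6  # assumed multiplier applied to the common fund


def egalitarian(contributions: List[float], endowments: List[float], fund: float) -> List[float]:
    """Split the grown fund equally, regardless of contributions."""
    n = len(contributions)
    return [fund / n] * n


def libertarian(contributions: List[float], endowments: List[float], fund: float) -> List[float]:
    """Split the grown fund in proportion to each player's absolute contribution."""
    total = sum(contributions)
    if total == 0:
        return [0.0] * len(contributions)
    return [fund * c / total for c in contributions]


def play_round(endowments: List[float],
               contributions: List[float],
               mechanism: Callable[[List[float], List[float], float], List[float]]) -> List[float]:
    """Return each player's payoff: what they kept plus their share of the fund."""
    # The endowments argument is unused by these baselines, but a learned
    # mechanism could condition on it (e.g. on contributions relative to wealth).
    fund = sum(contributions) * GROWTH_FACTOR
    shares = mechanism(contributions, endowments, fund)
    return [e - c + s for e, c, s in zip(endowments, contributions, shares)]


if __name__ == "__main__":
    endowments = [10.0, 2.0, 2.0, 2.0]    # unequal starting wealth
    contributions = [5.0, 1.0, 1.0, 0.0]  # the last player free-rides
    print("egalitarian:", play_round(endowments, contributions, egalitarian))
    print("libertarian:", play_round(endowments, contributions, libertarian))
```

In the example run, the free-riding player still comes out ahead under the equal split, while the proportional split lets the wealthiest contributor capture most of the returns; the policy the AI arrived at, as described below, instead weights what players contribute relative to their starting wealth.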
Figure | Game design (Source: Nature Human Behaviour)

Under the policy formulated by the AI, public funds are redistributed according to the proportion of their starting money that each player contributed, which reduces the wealth gap between players. This policy won more votes from human players than an "egalitarian" approach (which splits the fund equally regardless of how much each player contributed) and a "libertarian" approach (which splits the fund in proportion to how much of the public pool each player contributed).

At the same time, the policy corrects the initial wealth imbalance and deters "free riding": players receive almost nothing in return unless they contribute roughly half of their starting capital.

However, the research team also cautioned that their results are not a recipe for "AI government", nor do they intend to build AI-driven tools specifically for policy making.

Is it trustworthy?

The findings suggest that an AI system can be trained to satisfy a democratic objective by designing a mechanism, for an incentive-compatible economic game, that human players demonstrably prefer. In this work, the research team used AI techniques to learn a redistribution scheme from scratch, an approach that relieves AI researchers, who may themselves be biased or unrepresentative of the wider population, of the burden of choosing a domain-specific objective to optimize.

The work also raises several questions, some of them theoretically challenging. For example, one might ask whether it is a good idea to emphasize democratic goals as a method of value alignment. The AI system may inherit a tendency, shared with other democratic approaches, to "empower the majority at the expense of the minority." This matters all the more given pressing concerns that AI may be deployed in ways that exacerbate existing biases, discrimination, or inequities in society.

Another open question is whether people will trust mechanisms designed by AI systems. If the identity of the referee is known in advance, players may prefer human referees to AI agents; however, when people believe a task is too complex for humans, they tend to place their trust in the AI system. Furthermore, would players react differently if the mechanism were explained to them verbally rather than learned through experience? A large literature suggests that people sometimes behave differently when choices are presented "by description" rather than learned "by experience", particularly for risky choices. Yet a mechanism designed by AI may not always be expressible in words, and the observed behavior in such cases would likely depend on the particular description chosen by the researchers.

At the end of the paper, the research team emphasized that these results do not support any form of "AI government" in which autonomous agents make policy decisions without human intervention. They hope that further development of the method will provide tools that help solve real-world problems in a genuinely human-centred way.

Reference Links:

https://www.nature.com/articles/s41562-022-01383-x

https://www.deepmind.com/publications/human-centred-mechanism-design-with-democratic-ai

https://www.newscientist.com/article/2327107-deepminds-ai-develops-popular-policy-for-distributing-public-money/
<<: This "flip-flop" saved Zhou Guanyu from the hands of death
>>: Neolithic "Meteor Hammer" Discovered!
Quantum computers have the potential to revolutio...
Global AI Venture Capital In 2024, AI venture cap...
There is such a story: Once upon a time, a fierce...
Produced by: Science Popularization China Author:...
We also need to arrange a good-looking layout How...
The current TV drama ratings remain sluggish, whi...
According to the online version of Fortune magazi...
While many new electric models are attracting con...
Many people's lives I've been a bit hecti...
【Today’s cover】 On January 1, the lake area of ...
Facing the unknown universe, Human beings are alw...
On December 30, Kingsoft Cloud and Xiongmai Techn...
On the morning of September 24, the Henan Provinc...
Dark energy and dark matter are called "two ...
Air fryer As a new kitchen "artifact" H...