A long report on artificial intelligence: 15,000 words to tell you where the problems with artificial intelligence lie

Preface

Artificial Intelligence (AI) is a collection of technologies that includes machine learning, reasoning, perception, and natural language processing. The concept of AI emerged some 65 years ago, but recent progress and applications have made the technology a hot topic again. As AI spreads into every aspect of social and economic life, new opportunities and challenges arise, and its enormous potential impact forces us to think carefully about how the technology is developed and applied.

The "AI Now" seminar held in July this year was the last part of a series of studies jointly promoted by the White House Office of Science and Technology Policy and the National Economic Council of the United States. A series of previous studies have analyzed and studied AI from different perspectives, from policies and regulations to the security control of AI, to AI public welfare and how to tap the potential of AI. This time, "AI Now" mainly discussed the impact of AI in the social and economic fields in the next decade. Experts and scholars from various fields around the world gathered together to express their views. Issues discussed include: What problems have been caused by the rapid development of AI at this stage? How to better understand and use AI to create a more fair and just future?

Among countless social and economic issues, this "AI Now" focused on four: "healthcare", "labor and employment", "AI fairness", and "AI ethics".

"Healthcare" and "labor and employment" were chosen as main topics because AI has already penetrated deeply into both fields, where the problems it can cause are most prominent. "AI fairness" and "AI ethics" are issues everyone will face in the future: Will AI contribute to world peace or aggravate social injustice? And how can we ensure that the benefits of AI are shared by all of humanity?

The purpose of the seminar was to help AI better benefit human society. By bringing together so many experts and scholars to deliberate, this "AI Now" seminar is significant both inside and outside the artificial intelligence research community.

Questions and suggestions

The seminar anticipated situations that AI may create in the future and offered corresponding suggestions. Note that the suggestions below integrate the collective wisdom of all participants and do not represent the position of any individual or organization.

As AI is increasingly applied to all aspects of social and economic life, the following questions and corresponding suggestions can serve as a reference guide for investors and practitioners in related fields.

1. Problem: The development and application of AI depend on specific infrastructure and on human and material resources. Shortages of these basic resources will limit AI's development, so control over this infrastructure and these resources becomes crucial in AI's early stages.

Recommendation: Broaden the resource base for developing AI through multiple channels, focusing on datasets, computing, and the education and training of relevant talent.

2. Problem: Although AI is still at an early stage, it already acts as a human assistant in many fields and is affecting labor relations. Jason Furman, chairman of President Obama's Council of Economic Advisers, has said that low-skilled manual jobs are the most likely to be replaced by AI and automated machinery. If robots begin to compete with humans for jobs, the allocation of human resources will change as well.

Recommendation: Update thinking and skills to cope with the changes in employment structure that AI's participation will bring. In the future, AI machines will take on the vast majority of low-skilled jobs, and people will need to adjust their skill sets, and their income and spending plans, accordingly.

3. Problem: AI and automated processes usually run behind the scenes, out of people's sight. Without human involvement, machines may make unfair or inappropriate decisions. As AI applications grow, auditing and correcting AI decisions will become both more important and more difficult.

Recommendation: Support research on AI calibration and error correction, and put AI error-hazard assessment procedures on the agenda. These efforts should develop in step with AI itself, playing a role analogous to the judicial system's check on the administrative branch, so that AI mistakes are discovered in time and serious consequences avoided.

4. Problem: Research on the fairness and accountability of AI models used by public and private institutions appears to conflict with some current US laws, such as the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act (DMCA).

Recommendation: Clarify that neither the Computer Fraud and Abuse Act nor the Digital Millennium Copyright Act is intended to restrict such research.

5. Problem: Although AI is being adopted rapidly in fields such as medicine and labor, there is currently no accepted way to evaluate its impact.

Recommendation: Support research on AI impact-assessment systems. Such research should be conducted in collaboration with government agencies so that the results can inform public administration.

6. Problem: The voices of those whose rights and interests are harmed by the deployment of AI are often ignored.

Recommendation: When building AI systems, listen to the people they affect. AI should be co-designed with all parties to avoid unfair or extreme outcomes.

7. Problem: AI research concentrates on electronic technology and often pays too little attention to human issues. If the computer science field grows ever more homogeneous, the narrow vision and experience of AI developers will limit the AI products they create.

Suggestion: AI researchers and developers should be as diverse as possible; diversity and plurality among developers yield richer AI products. The field should support more interdisciplinary research so that AI systems can integrate computing, the social sciences, and the humanities.

8. Problem: Existing ethical standards can no longer cope with the complexity of the problems AI faces in practice (in healthcare, law enforcement, criminal sentencing, and labor services, for example). Meanwhile, although university computer science programs have gradually begun to value ethics education, it has not been thoroughly implemented in practice.

Recommendation: Work with professional bodies such as the Association for the Advancement of Artificial Intelligence (AAAI), the Association for Computing Machinery (ACM), and the Institute of Electrical and Electronics Engineers (IEEE) to develop ethical standards that can meet new situations, and bring those standards into the classroom. Every student of computer science should receive training in civil rights, civil liberties, and ethics alongside their professional courses. Practitioners in fields AI has entered (such as medical settings) should likewise be aware of these new ethical standards.

Four key questions about artificial intelligence today

We will now take a closer look at four key issues currently being addressed in AI, providing readers with an opportunity to gain insights and advice from industry experts on the challenges, opportunities, and possible interventions facing each of these key issues.

1. Social Injustice

How do artificial intelligence systems cause social injustices such as bias and discrimination?

Artificial intelligence systems are increasingly playing a role in high-stakes decision-making, from credit and insurance to third-party decision-making and parole. AI technology is replacing humans in deciding who gets important opportunities and who is left behind, raising a series of questions about rights, freedoms, and social justice.

Some people believe that the application of artificial intelligence systems can help overcome a series of problems caused by human subjective biases, while others worry that artificial intelligence systems will amplify these biases and further widen the inequality of opportunities.

In this discussion, data plays a crucial role and deserves close attention. How an artificial intelligence system operates often depends on, and directly reflects, the data it is given, including where that data came from and the biases introduced while collecting it. From this perspective, the impact of artificial intelligence is closely tied to the underlying big data technology.

Broadly speaking, data bias comes in two forms.

The first is data that objectively fails to reflect reality accurately (mainly due to inaccurate measurement; incomplete or one-sided collection; unstandardized self-reporting; and other flaws in the collection process).

The second is structural bias introduced subjectively during collection (for example, predicting career success from career data collected with a deliberate preference for boys over girls).

The former can be addressed by "cleaning the data" or improving the collection process; the latter requires complex human intervention. It is worth noting that although many organizations have worked hard on this problem, there is still no consensus on how to "detect" data bias.
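To make the first kind of check concrete: there is no consensus metric, but comparing positive-outcome rates across groups is a common starting point. Below is a minimal sketch in Python using the "four-fifths rule" heuristic sometimes cited in US employment contexts; the dataset, column names, and 60%/30% label rates are all invented for illustration.

```python
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate per group (e.g. share of 'success' labels)."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact(df, group_col, outcome_col, privileged):
    """Each group's selection rate relative to the privileged group.
    Ratios below ~0.8 are often flagged (the 'four-fifths rule')."""
    rates = selection_rates(df, group_col, outcome_col)
    return rates / rates[privileged]

# Invented career-outcome data mirroring the gender example above:
# 60% positive labels were recorded for men, 30% for women.
df = pd.DataFrame({
    "gender":  ["M"] * 80 + ["F"] * 80,
    "success": [1] * 48 + [0] * 32 + [1] * 24 + [0] * 56,
})
print(disparate_impact(df, "gender", "success", privileged="M"))
# gender: M -> 1.0, F -> 0.5; far below 0.8, so the labels themselves are skewed.
```

A ratio like this only flags a disparity; deciding whether the disparity reflects measurement error, structural bias, or a real difference still requires the human judgment discussed above.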

When the collected data carries such biases, an AI system trained on it will carry them too, and the models and results it produces will inevitably replicate and amplify them. The decisions such a system makes then have disparate effects, producing social injustice that is far subtler than human bias and injustice.
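The amplification step can be shown with a toy model. In the sketch below (synthetic data, invented feature names), a classifier never sees the group label yet reproduces the gap baked into historical labels through a correlated proxy feature; this illustrates the mechanism only, and makes no claim about any particular deployed system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
group = rng.integers(0, 2, n)        # 0 = privileged, 1 = disadvantaged
skill = rng.normal(0, 1, n)          # true merit: same distribution in both groups
# Historical labels carry a built-in penalty against group 1.
label = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# The model never sees `group`; it sees skill plus a proxy feature
# correlated with group membership (think of a zip-code-like signal).
proxy = group + rng.normal(0, 0.3, n)
X = np.column_stack([skill, proxy])
pred = LogisticRegression().fit(X, label).predict(X)

for g in (0, 1):
    print(f"group {g}: label rate {label[group == g].mean():.2f}, "
          f"predicted rate {pred[group == g].mean():.2f}")
# The predicted positive rates track the skewed historical labels:
# the bias is reproduced even though `group` was never an input.
```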

In risk-based industries, especially insurance and other social-security sectors, the widespread use of artificial intelligence systems has sharply increased firms' ability to sort people by subtle differences, letting companies identify specific groups and individuals and avoid risk through "adverse selection".

For example, in medical insurance, AI systems analyze policyholders' characteristics and behavior and charge higher premiums to those identified as having particular conditions or a high predicted future rate of illness. This especially disadvantages people in poor health and of limited means. That is why critics often charge that even when the AI system's predictions are accurate and the insurer's behavior is rational, the results are often negative.

Competition in the insurance industry may reinforce this trend, and the use of AI systems may ultimately deepen the inequality. The normative principles in anti-discrimination laws and regulations can help address these issues, though that approach may be neither the most effective nor the fairest. The design and deployment of AI systems also matter, but the existing legal framework may hinder the relevant research: the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act (DMCA) have restricted it, so current regulations need reform to ensure the necessary research can proceed.

Will artificial intelligence benefit only a minority?

Artificial intelligence systems create new ways to generate economic value and new effects on how that value is distributed. To some extent, the way AI systems distribute value will favor certain groups, continuing or widening existing gaps in wages, income, and wealth.

Organizations able to develop artificial intelligence technology stand to deepen this inequality. AI is predicted to be an industry worth billions of dollars a year, and developing it requires heavy upfront investment in massive computing resources and big data, both very expensive. This confines AI development and application to a narrow circle. In such conditions, only companies with strong data and computing capabilities can use AI systems to gain deeper insight into market dynamics, winning themselves a "rich get richer" Matthew effect and still more success.

On the other hand, AI and automation systems can reduce the cost of goods and services. If these cost reductions can benefit consumers, then AI can narrow the gap between the rich and the poor. In this case, AI systems can improve the living standards of the entire society and even trigger a gradual redistribution effect.

In addition, artificial intelligence will bring entirely new ways of living. In an AI environment, people whose jobs have become obsolete will have occasion to demand new means of obtaining resources, and those whose jobs are affected will be able to create new work through AI. In other words, AI could ease the labor crisis and free people to pursue new ways of living and working, raising society's overall welfare.

Nevertheless, some critics point out that AI systems will make the skills of some workers redundant, and those workers replaced by automation will have to seek new employment opportunities. Even if these workers can find new jobs, such jobs are often low value-added and have lower job security. From this perspective, AI and automation systems actually eliminate jobs.

Furthermore, if learning new job skills is very expensive, workers may decide that the vocational training is not worth the job it leads to. In that case, AI systems would not only increase social inequality but also lead to permanent unemployment and poverty. This is why understanding AI systems' potential impact on the labor force is an important part of understanding their impact on economic equality.

Like many technologies before it, AI tends to reflect the values of its creators, so equality in AI can also be promoted through diversity in AI development, deployment, and maintenance.

Currently, women and ethnic minorities make up a small share of the AI workforce, and of the computer science industry generally. This lack of inclusiveness produces technology with built-in biases and perpetuates practitioners' limited consideration of other groups.

It is also increasingly clear that diversity in the AI field helps AI systems serve the interests of different groups. To address bias, discrimination, and inequality, AI teams need a broader range of perspectives.

2. Labor relations

While current discussions about jobs and AI systems often focus on fears of a jobless future, new research suggests more complex and immediate issues are at stake, affecting not only the labor market but also employer-employee relations, power dynamics, professional responsibility, and the role of work in human life.

Many traditional economic researchers are closely tracking the US labor market and corporate institutions to gauge the impact of AI systems. Such research provides important quantitative data that advances understanding of macroeconomic trends and of labor supply and demand, such as how many jobs will exist in the future.

At the same time, social science research assesses how changes in the nature of work and work dynamics are changing people’s daily experiences. Both of these research perspectives are essential for measuring the social and economic impacts of AI systems on the workforce in the short term.

Will AI affect job demand?

The role of automation in the economy is far from a new question; in fact, debate over the impact of AI-like systems goes back a long way.

Although it would seem that demand for labor must decline as automation spreads, because only a limited amount of work would be left to do, some economists disagree, calling this idea the "lump of labor" fallacy. They point out that as productivity in one industry rises (through automation or other factors), new industries emerge and create new demand for labor. For example, agriculture accounted for 41% of the U.S. labor force in 1900 but only 2% in 2000. The labor economists David Autor and David Dorn note that even through this dramatic shift, unemployment did not rise over the long run, and the employment-to-population ratio actually increased. Two other economists, James Huntington and Carl Frey, have made dire predictions instead: AI systems will significantly reduce jobs.

There is also debate over whether changes and fluctuations in the labor market stem from technological progress or simply from economic policy. This view focuses on the role existing legal systems and regulatory mechanisms should play in the development of AI and automated systems. Robert Gordon, for example, holds that the current wave of innovation is less transformative than it seems. Many who take the opposite view, including Joseph Stiglitz and Larry Mishel, say the labor market is undergoing important technology-driven changes, and that protecting the workforce requires close attention to regulatory and other policy changes around AI and automated systems.

Economists such as Autor and Dorn have found a pronounced "employment polarization": medium-skilled jobs are shrinking while high-skilled and low-skilled jobs grow. Although new jobs may appear in the future, they are often low-paid and undesirable.

For example, much of the work that supports AI systems is actually done by humans who maintain the infrastructure and look after the systems' "health." This labor is largely invisible, at least in media coverage and in the public's impression of AI, and is therefore undervalued. It includes the janitors who clean and maintain offices, the repair workers who fix failed servers, and what one reporter called "data hygienists" (who "clean" data and ready it for analysis).

Questions about the impact of AI systems on the workforce should include not only whether new jobs will be created in the future, but also whether those jobs will be decent jobs that can sustain a living.

Furthermore, discussions about AI systems and the future of the labor market often focus on what are traditionally considered low-income, working-class jobs, such as manufacturing, trucking, retail, or service work, but research suggests that a wide range of industries will be affected, including professional jobs that require specialized training or advanced degrees, such as radiology or law. In this regard, new questions about professional responsibilities and obligations will need to be addressed in the future.

How will AI affect the employer-employee relationship?

In recent years, researchers have begun to study how AI and automated systems that rely on big data — from Uber to automated dispatch software used by large retailers to workplace surveillance — are changing the relationship between employers and employees.

The study found that while such systems can be used to empower workers, the technology can also cause big problems, such as depriving workers of their rights, exacerbating employment discrimination, and fostering unfair labor practices.

For example, AI-driven workforce management and scheduling systems are increasingly being used to manage the workforce, fueling the growth of the on-demand economy and the rise of the precariat. While some researchers say that proper scheduling can provide valuable flexibility, more studies so far have found emotional stress and insecurity among employees who are subject to such systems.

The negative experiences of workers managed by these systems include chronic underemployment, financial instability, lack of benefits traditionally available to full-time employees, and the inability to plan for family or self-care (or finding other work because the often-on-call nature of these jobs is unbearable). In addition, workers who are more likely to be affected by these issues are women and minorities.

In addition, new remote management models based on AI systems will make it more difficult to hold employers accountable for decisions made by the “system” that seriously affect employees. As a result, employees will be more vulnerable to exploitation.

For example, big data and AI-driven platforms like Uber remotely control routing, pricing, compensation, and even standards for interpersonal communication — decisions that have traditionally been managed by humans.

In addition to obscuring the nature and logic of specific decisions, this type of remote management is not usually considered "employee management."

Because these new management models don't fit neatly into existing regulatory categories, companies like Uber position themselves as technology companies rather than employers. Under this philosophy, they see themselves as platforms that facilitate connections and therefore owe employees none of a traditional employer's responsibilities. Under this model, workers end up bearing the risks of employment while losing its traditional benefits (employer tax contributions, health care, and other labor protections) and the usual avenues of redress.

3. Healthcare

Most of the AI systems we have seen applied in healthcare rely on large databases. Using a variety of complex statistical models and machine learning techniques, these systems automatically extract important information from the massive data they collect.

Existing (and growing) sources of healthcare data, including electronic health records (EHRs), clinical and insurance databases, and health data uploaded from consumer devices and apps, are already being used at scale in AI systems, which have real potential to improve healthcare.

Whether in clinical diagnosis, patient care, or medication; whether in drug production, organizational management, or the exchange of insurance information, these AI systems have greatly aided the work of medical practitioners.

How is AI integrated into medical research and healthcare?

Integrating AI systems into medical research has tremendously exciting prospects: helping us better understand disease pathology, develop new treatments, reach more accurate diagnoses, and even produce medicines customized to the individual.

However, the current limitations and biases in applying AI to medicine may stand in the way of these prospects, which demands that researchers explore this frontier with extra care.

At present, the limitations of applying AI in medicine include incomplete or inaccurate research data, that is, data that fails to cover particular minority groups. In addition, complex medical subsidy and incentive schemes, especially the US health insurance system, will to some extent hinder the adoption of AI in medicine. For example, some current subsidy schemes support the development of particular classes of drugs, or tend to favor particular treatment plans.

Medical research data is often presented as objective and universal, yet in practice research conclusions can be one-sided, provisional, and relevant only to certain groups or diseases. Models that AI systems build on such "one-sided" data may introduce, entrench, or propagate erroneous assumptions.

Fortunately, such errors can be avoided. If the data an AI system collects is free of the flaws above (assuming this can be guaranteed), or if the data framework the system uses can correct these problems itself, for example through randomized controlled trials (RCTs) or other public medical databases, major errors can be effectively avoided.
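One modest, concrete safeguard along these lines is to audit subgroup coverage before training. The sketch below is only an illustration: the cohort, column names, and 5% threshold are invented, and a real pipeline would follow the flag with external validation rather than a print statement.

```python
import pandas as pd

def coverage_report(df: pd.DataFrame, cols: list[str], min_share: float = 0.05) -> None:
    """Flag demographic subgroups too rare in the training data for a
    model to learn reliable patterns about them."""
    n = len(df)
    for col in cols:
        for value, share in df[col].value_counts(normalize=True).items():
            flag = "  <-- underrepresented" if share < min_share else ""
            print(f"{col}={value}: {share:.1%} of {n} records{flag}")

# Invented screening cohort, skewed toward young male patients.
cohort = pd.DataFrame({
    "sex":      ["F"] * 300 + ["M"] * 700,
    "age_band": ["18-40"] * 850 + ["41-65"] * 120 + ["65+"] * 30,
})
coverage_report(cohort, ["sex", "age_band"])
# 65+ patients are 3% of the data: any conclusion about them is 'one-sided'
# in exactly the sense described above and needs external validation
# (e.g. against an RCT or a public registry) before deployment.
```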

Assuming these errors can be made small enough to ignore, one of the most promising applications of AI in medical research and clinical practice is assisting doctors with diagnosis: finding regular patterns in massive data and thereby helping doctors detect, earlier, the "cunning" lesions hidden deep in the body.

In fact, AI systems can already diagnose some diseases, including leukemia. In examinations and clinical care, AI systems can also reduce, and in some cases prevent, misdiagnosis. Misdiagnosis can be fatal, so the value of AI-assisted diagnosis is high.

AI systems are thus playing an ever larger role in diagnosing and identifying disease. This is also why researchers must take care to keep AI from making faulty inferences about what counts as "normal" or "average" health.

Likewise, we need only look back at US history before 1973 to imagine the tragedy an AI misdiagnosis could cause: until then, the American Psychiatric Association's authoritative Diagnostic and Statistical Manual of Mental Disorders listed homosexuality as a mental illness, making tragedy inevitable.

Similarly, when AI systems are applied directly to patient care, they become involved in every aspect of diagnosis and clinical management, often interposing themselves between caregivers and patients; it is therefore important to clearly delimit the AI's "expertise".

Before taking up the scalpel, a human surgeon first attends medical school, and only after rigorous examination is their skill recognized. But how do we produce an outstanding AI doctor, "diploma" and all, to assist or even replace a famous human physician?

Such an AI medical system would need absolutely reliable expert-level authority, free of misdiagnosis and diagnostic bias. That level of trust means these systems may receive less scrutiny, both in capability assessment before release and in probing their limits, which will create new ethical issues the current medical-ethics framework does not cover.

We also need to ask where such AI medical systems are deployed within the insurance system, and who benefits. Although making care accessible and affordable to everyone is a genuine need, ample evidence shows that access to insurance and to health data is not fairly distributed; in most cases the poor, non-whites, and women are at a disadvantage.

Rather than eradicating these systemic inequities, incorporating AI systems into the healthcare system could actually amplify them. While AI systems can tailor appropriate care to benefit a wide range of people, they can also be deliberately trained to filter out outlying groups that are often overlooked and underserved.

If these groups are not properly considered, the predictive models AI systems build will suffer in turn. AI predictive models are continually reinforced by health data uploaded by the privileged groups with access to such systems, so they effectively reflect only the "health" of the well-off, ultimately producing an overall model of health and disease that excludes "marginalized groups" entirely.

Given the current chaos in U.S. health care finances, such concerns are indeed worthy of more attention. Just as such chaos has affected the integration of medical technology in the past, it will inevitably affect the layout and effectiveness of AI medical systems in the future.

With such considerations in mind, even as people push AI medical systems forward, they are also working to lower those systems' cost. This will prompt stakeholders (politicians, insurance companies, health institutions, pharmaceutical companies, employers, and others) to bet on large-scale health data collection and AI development, the better to protect their economic interests in model development and medical care.

However, the critical training, resources, and ongoing maintenance required to integrate these information technologies and AI systems into hospitals and other healthcare settings are not always supported or available, which has led to an uneven distribution of technical resources and capabilities.

How will the data collection and patient observations required to train AI affect personal privacy?

AI systems' extreme dependence on data volume, and their need to observe cases, naturally raise urgent issues of patient privacy, confidentiality, and security.

At present, meeting the high performance expectations for AI medical systems depends on continuously acquiring massive amounts of patient data through a variety of devices, platforms, and the Internet. This process will inevitably draw some people and organizations into extraordinary, profit-driven surveillance.

At the same time, technologies such as homomorphic encryption, differential privacy, and stochastic privacy offer new hope for containing this chaos: they can let AI systems "use" data without "looking at" it. These techniques are still at an early stage of research and development, without even a general-purpose application yet, but they show encouraging promise.
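As a flavor of what "using data without looking at it" can mean, here is a minimal sketch of differential privacy's Laplace mechanism for a single counting query (homomorphic encryption would need a dedicated library and is omitted). The patient ages and the 65+ threshold are invented, and a real system would also track a privacy budget across queries.

```python
import numpy as np

def dp_count(values: np.ndarray, predicate, epsilon: float) -> float:
    """Release a count via the Laplace mechanism. A counting query changes
    by at most 1 when one record is added or removed (sensitivity 1), so
    Laplace noise with scale 1/epsilon gives epsilon-differential privacy
    for this single query."""
    true_count = int(predicate(values).sum())
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Invented patient ages; the analyst gets a useful aggregate without
# ever seeing an individual record.
ages = np.random.default_rng(1).integers(18, 90, size=10_000)
for eps in (0.1, 1.0):
    print(f"epsilon={eps}: count of 65+ patients ~ "
          f"{dp_count(ages, lambda a: a >= 65, epsilon=eps):.0f}")
# Smaller epsilon -> more noise -> stronger privacy but weaker accuracy.
```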

Moreover, with the US government's recent push for evidence-based medicine and the Affordable Care Act's shift from fee-for-service toward value-based payment, the economic stakes in regulating and consuming sensitive health data keep growing.

For insurance companies, the introduction of AI systems also increases the pressure they face to justify their cross-subsidy schemes.

For example, despite the US government's enactment of the Genetic Information Nondiscrimination Act in 2008, insurance companies are increasingly interested in obtaining genetic risk information for tiered insurance management. Indeed, differential pricing has become common practice among data analytics providers, further entrenching and exacerbating existing inequalities.

In addition, the "smart devices" and other connected sensors that give AI systems the data they need have made tracking and surveillance ubiquitous, straining the scope of current privacy protections such as the Health Insurance Portability and Accountability Act (HIPAA), which was not written with such pervasive data collection in mind.

As AI systems are woven into health and consumer electronics products, the risk grows that patients will be re-profiled from segmented data, or have their identities, diseases, and other health information predicted from proxy data.

In addition, the software that powers these data-collection devices is usually proprietary rather than open source (and not subject to outside audit). Although the US government recently approved a DMCA exemption permitting outside review of medical device code, much of this software may still fall outside the exemption's scope.

More broadly, industry experts have warned of the significant security risks of deploying networked technology in IoT devices, and many of those warnings are aimed specifically at medical device security.

How will AI impact patients and health insurance providers?

AI technologies already realized or on the horizon have far-reaching implications for how healthcare systems are built, and great significance for patients who need care and for those in fragile health.

Many hopes are pinned on AI systems as mediators of care work, even to the point of believing they may one day replace caregivers altogether. Such a transformation promises optimism, economy, and efficiency, and could well change the relationship between patients and doctors or other caregivers, and the way they work.

Many examples already demonstrate AI systems' potential to replace or assist human care work, including robot surgeons, virtual housekeepers, and companion robots. These examples have gradually sparked debate: Can the social meaning of proxy care and companionship be supplied by a non-human machine? When machines stand in for humans, can they not only enhance human professionalism but act truly independently? When we say a machine can "care" for a patient, what capacities for "care" does it actually have, and how do we define the word? And are these notions considered from the standpoint of patients' rights?

For now, companion robots have achieved no notable success in replacing human nursing work, but the prospect of AI-driven apps and connected devices giving patients control of their own health management grows by the day, a sign that direct interaction between AI medical systems and patients is in its early stages.

This direct interaction between people and AI is a double-edged sword. On one hand, it can help patients recover faster and understand their own condition better. On the other, it asks them to bear more risk, including the risk of being misled and of degraded quality and accuracy in the information they receive, a concern the Federal Trade Commission (FTC) has conveyed repeatedly in recent years.

In addition, these AI-equipped apps can shift responsibilities that medical practitioners ought to bear onto patients themselves. That may not be good news for patients, because not everyone has the time, financial resources, and access to AI technology needed to take charge of their own health care.

So which patients get first claim on the benefits of these still-maturing AI medical technologies? And for poorly "equipped" patients who nonetheless want to manage and maintain their personal data, is the healthcare they receive to be considered substandard?

Furthermore, what new roles must the designers and developers of AI technology play in this social evolution, and what new responsibilities must they bear?

And how will medical ethics, always at the center of the storm, be woven into these new and distinctive engineering technologies?

4. Moral Responsibility

The deployment of AI systems will not only raise new responsibilities, but will also pose challenges to existing areas such as professional ethics, research ethics, and even public safety assessments.

Recent discussions of morality and AI systems tend to dwell on AI that may emerge only in the distant future, such as the arrival of the "singularity" or the development of superintelligence.

That is, these discussions often neglect the moral impact of AI systems in the short and medium term, for example the new challenges raised by the many task-based AI systems already in use, which may exacerbate inequality or fundamentally reshape power mechanisms.

Contemporary AI systems perform a wide variety of activities with both implicit and explicit consequences, and may therefore pose new challenges to traditional ethical frameworks. When deployed in human society, they can produce unpredictable interactions and consequences.

In resource allocation, and in the potential to concentrate or reorganize power and information, there are critical issues that must be addressed to ensure AI technology does no harm, especially to marginalized groups.

How do we grant power to AI or delegate decision-making to it?

Integrating AI systems into social and economic life requires us to translate social problems into technical problems AI can solve, and this translation cannot guarantee that AI systems will make fewer mistakes than the existing systems they replace. As Ryan Calo has pointed out, people usually assume that AI systems (such as autonomous cars) will err less than humans; in fact, a low-complexity AI system will inevitably make new mistakes that humans would not.

In many areas, ethical frameworks often require the generation of records, such as medical records, lawyers’ case files, or documents submitted by researchers to institutional review committees. In addition, remedial mechanisms are established for patients, clients, or those who feel they have been treated unfairly.

Contemporary AI systems often fail to provide such recording or remediation mechanisms, either because the technology does not support them or because their designers did not consider them.
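To make the gap concrete, the sketch below shows one possible shape for a decision record with a built-in avenue of redress. Nothing here is prescribed by the report; the class, fields, and credit-scoring scenario are all hypothetical, and a production system would persist records to tamper-evident storage rather than printing them.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable record per automated decision, analogous to a
    medical chart or a lawyer's case file."""
    subject_id: str
    model_version: str
    inputs: dict        # the features the system actually used
    outcome: str        # e.g. "loan_denied"
    rationale: str      # human-readable basis for the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    contested: bool = False

    def contest(self, reason: str) -> None:
        """Remedial hook: lets the affected person trigger human review."""
        self.contested = True
        print(f"[review queue] {self.subject_id}: {reason}")

rec = DecisionRecord(
    subject_id="applicant-042",
    model_version="credit-model-2.3",
    inputs={"income": 31000},           # illustrative only
    outcome="loan_denied",
    rationale="score 0.41 below threshold 0.50",
)
rec.contest("income figure is out of date")
```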

This means that the affected specific groups or individuals often cannot test or question decisions about AI or other prediction systems. This can worsen various forms of power asymmetry. Power asymmetry is an important ethical issue.

When affected individuals cannot test, challenge, or appeal such automated decisions, they are left in a position of relative powerlessness.

The risk is that AI systems will not only weaken the vulnerable's power to question decisions, but also give designers more power to define moral behavior. This power can take very subtle forms: automation systems are routinely used to influence or "nudge" individuals in particular directions, and the party that largely decides and dominates is the one who designs, deploys, and profits from the system.

Building an AI system from scratch to correct such imbalances is itself constrained by the power gap: building and maintaining an AI system demands massive computing resources and massive data, and companies that have both hold a strategic advantage over companies that lack them.

How do we deal with ethical issues related to AI in various existing industries?

As AI systems become more and more integrated into different industry environments (such as medicine, law, and finance), we will also face new moral dilemmas across different industries.

For example, applying AI systems in healthcare settings will challenge core values upheld in healthcare professionals' codes of ethics (involving confidentiality, continuity of care, avoidance of conflicts of interest, and the right to know).

As healthcare's many stakeholders launch a wide variety of AI products and services, challenges to these core values may appear in completely new and unexpected ways.

When a doctor uses an AI diagnostic device trained on drug-trial data from a pharmaceutical company with a vested interest in the prescriptions it informs, how is the doctor to honor the oath to avoid conflicts of interest?

Although this situation is hypothetical, it points to the hard questions that must be faced when revising and updating codes of professional ethics.

Similarly, the professional associations responsible for AI research, development, and maintenance should consider corresponding measures. The Association for the Advancement of Artificial Intelligence (AAAI), for example, should develop a relevant code of ethics, while the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE) need to carefully revise theirs. The existing ACM and IEEE codes are more than twenty years old; not only do they fail to address core issues of human agency, privacy, and security, they also cannot forestall the potential harms of AI and other automated decision-making systems. This grows more important as AI technology penetrates further into critical social domains.

Although more institutions of higher education have begun to stress professional ethics in teaching technical and scientific majors, the effort is still young and has room to expand. Knowledge of civil rights, civil liberties, and ethical practice is not yet a requirement students must meet to graduate. Moreover, it is worth noting that someone who violates the medical profession's ethical norms can face penalties up to losing the right to practice medicine, a sanction with no counterpart in computer science or many other related fields.

It is not even clear whether most computer scientists are familiar with the core content of the ACM or IEEE codes, nor whether employers, under other incentives or pressures, would choose to ignore such non-binding rules. Practically speaking, then, beyond merely rewriting and updating the ethical framework, we must attend to a broader set of incentive mechanisms and ensure that compliance with ethical standards is not an afterthought but a core concern of the relevant professional fields and an integral part of learning and practice in AI.

Explaining the recommendations

Below we elaborate on the reasoning behind the recommendations briefly stated above.

1. Diversify and broaden the resources necessary for AI development and deployment, such as datasets, computing resources, education, and training, and expand the opportunities to participate in such development, focusing in particular on populations that currently lack such access.

As many participants in the AI Now Experts workshop noted, the means of developing and training AI systems are expensive and concentrated in a few large companies; put simply, there is no do-it-yourself AI without substantial resources. Training an AI model requires a great deal of data (the more the better) and enormous, costly computing power. This restricts even basic research to companies that can afford it, limiting any prospect of democratizing AI development to serve diverse populations. Investing in basic infrastructure and in appropriate training data can help level the playing field. Likewise, opening development and design processes in existing industries and institutions to diverse internal disciplines and external comment can help produce AI systems that better serve and reflect the needs of diverse environments.

2. Upgrade the definitions and frameworks that express fair labor practice to fit the structural changes that arise when AI management is deployed in the workplace. At the same time, study alternative models of income and resource distribution, education, and retraining, to accommodate the coming automation of repetitive work and changing labor and employment trends.

At the AI Now Experts seminar, President Obama's chief economist Jason Furman pointed out that 83% of US jobs paying less than $20 per hour face severe pressure from automation; for middle-income jobs paying $20-40 per hour, the figure is still as high as 31%. This is a huge shift in the labor market, and it could produce a permanently unemployed class. To ensure that the efficiency of AI systems does not cause public unrest in the labor market or dissolve important social institutions such as education (if schooling is no longer seen as a path to work), alternative means of resource distribution and other policy responses to automation should be studied thoroughly, and well-organized implementation trials should be designed to head off potentially catastrophic consequences.

Beyond "replacing workers," AI systems affect the labor market in several other ways: they change power relations, employment expectations, and the role of the job itself. These changes profoundly affect workers, so when introducing AI systems it is important to understand these effects when judging what counts as fair or unfair practice. For example, if a company whose AI system effectively acts as a management layer can be classified as a technology service company rather than an employer, its workers may fall outside the protection of existing labor law.

3. Support research to develop methods for measuring and evaluating AI systems' accuracy and fairness at the design and deployment stages, and likewise methods for measuring and redressing the errors and harms that arise once systems are in use, including accountability mechanisms for notifying, correcting, and mitigating errors and harms caused by AI systems' automated decisions. These methods should prioritize notifying the people affected by automated decisions and developing ways for them to object to erroneous or harmful judgments.

AI and predictive systems increasingly determine whether people gain or lose opportunities. In many cases, people do not realize that a machine rather than a human is making life-changing decisions about them; even when they do, there is no standard process for objecting to a mis-categorization or challenging a harmful decision. We need to invest in research and technical prototypes that ensure fundamental rights and accountability are respected in environments where AI systems increasingly make important decisions.

4. Clarify that neither the Computer Fraud and Abuse Act nor the Digital Millennium Copyright Act is intended to restrict research on AI accountability. To conduct the research needed to test, measure, and evaluate how AI systems affect the decisions of public and private institutions, particularly regarding key social concerns such as fairness and discrimination, researchers must be clearly permitted to test systems across many domains and via many different methods. However, certain US laws, such as the Computer Fraud and Abuse Act (CFAA) and the Digital Millennium Copyright Act (DMCA), can make interacting with publicly accessible computer systems on the Internet illegal, and may limit or prohibit such research. These laws should be clarified or amended to explicitly allow the interactions this important research requires.

5. Support basic research on robust methods for assessing and evaluating AI systems' socio-economic impact in real-world settings, and work with government agencies to integrate these new techniques into their investigative, regulatory, and enforcement capacities.

We currently lack rigorous practices for assessing and understanding the socio-economic impact of AI systems. This means that when AI systems are integrated into existing socio-economic domains and deployed in new products and settings, their impact cannot be measured or accurately reckoned, a situation akin to running experiments without bothering to record the results. To secure the benefits of AI systems, coordinated research and rigorous methods for understanding their impact are necessary; in use, these can help form standard practices across sectors and within government. Such research and its results would function like an early-warning system.

6. Co-design accountable AI with the representatives and members of the communities affected by automated decision-making and AI systems, developing and deploying such systems jointly with these people.

In many cases, the people affected by AI systems are the most authoritative experts on those systems' context and outcomes. Especially given the current lack of diversity in the AI field, engaging those affected by AI deployments to provide feedback and design direction allows that feedback to directly shape the development of AI systems and the broader policy framework.

7. Strengthen efforts to diversify AI developers and researchers, and broaden and integrate all perspectives, settings, and disciplines into AI system development. The AI field should also combine computing with the social sciences and humanities, supporting and promoting interdisciplinary research into AI systems' impact from multiple perspectives.

Computer science as a discipline lacks diversity, and AI even more so. For example, while some AI academic laboratories are run by women, only 13.7% of attendees at the recent Neural Information Processing Systems conference, one of the field's most important annual meetings, were women. A homogeneous circle is unlikely to consider the needs and concerns of those outside it, yet understanding those needs matters precisely when AI is deployed at the center of socio-economic institutions, and AI development should reflect these important views. Diversifying the AI workforce is key; beyond gender and protected groups, this includes diversity of disciplines besides computer science, and development practices that draw on expertise from the socio-economic fields where AI is applied.

A thorough assessment of AI's impact, within computer science and in socio-economic domains beyond it, will require much of this expertise. Each environment AI enters, such as medicine, the labor market, or online advertising, is a rich field of study in its own right, so truly rigorous AI impact assessment will demand interdisciplinary collaboration to establish new research directions and areas.

8. Work with professional organizations such as the Association for the Advancement of Artificial Intelligence (AAAI), the Association for Computing Machinery (ACM), and the Institute of Electrical and Electronics Engineers (IEEE) to update (or create) professional ethics codes that better reflect the complexity of deploying AI and automation systems in the socio-economic field. Reflect these changes in education through courses on civil rights, civil liberties, and ethics for anyone studying computer science. Similarly, update the professional ethics codes that bind the professionals who introduce AI systems, such as those applicable to doctors and hospital workers.

In occupations such as medicine and law, professional conduct is bound by codes of ethics defining acceptable and unacceptable behavior. Professional organizations like ACM and IEEE do maintain ethics codes, but these are outdated and insufficient for the specific, often subtle challenges of using AI systems in complex socio-economic settings. While doctors do adhere to professional ethics governing their conduct toward patients, the rise of AI systems, for example ones that help doctors diagnose and treat patients, has brought ethical challenges that existing codes do not always resolve. Professional guidelines and computer science training must be updated to reflect the responsibility of AI system builders toward those harmed, to varying degrees, by the use of these systems. Where AI is used to augment human decision-making, professional ethics should include safeguards for identifying liability when an AI system is affected by a conflict of interest.
