Artificial intelligence is gradually gaining influence all around us. Whether it is targeting advertising on social media, screening job applicants, setting airfares, controlling heating systems through voice recognition, creating cultural output, or regulating traffic flow, AI is performing more and more tasks in human life.

Elon Musk has predicted that Tesla's self-driving cars will be able to drive safely across the United States without human involvement by the end of 2017, and social robots living alongside humans could be performing many household and care-giving tasks within a decade. It is widely believed that by 2050 we will have moved beyond these specific applications and achieved artificial general intelligence (AGI), a milestone often associated with the technological "singularity". The idea is that computers will come to surpass humans at any cognitive task, and that human-machine integration will become commonplace. What happens after that is anyone's guess.

One intriguing idea is to install computer components inside the human body, allowing humans to process data faster. Some in the field envision a "neural lace" that acts as an extra cortex outside the brain, connecting us to electronic devices with great speed and efficiency. This would be a significant upgrade on the machine parts already found in "cyborg" bodies, such as pacemakers and titanium joints.

Military and defense applications are another focus of AI development, and the concept of fully autonomous weapons is deeply controversial. Such a weapon system could search for, identify, select, and destroy a target based on algorithms, learning from past security threats, all without any human involvement. It is a fairly scary concept.
These visions of an AI-dominated future verge on science-fiction dystopia, reminiscent of scenes from The Terminator.

Accidental discrimination

Humanity may still be some way from destruction, but alarm bells are already ringing around the ethics of artificial intelligence. Just last month, machine learning algorithms came under fire for suggesting bomb-making components to Amazon shoppers, reflecting gender inequality in job advertising, and spreading hate messages through social media. Much of this stems from the quality and nature of the data used for machine learning: given imperfect human data, machines draw less-than-perfect conclusions. Such results raise serious questions about the governance of the algorithms and AI mechanisms now woven into daily life.

Recently, a young American man with a history of mental illness was rejected for a job on the basis of an algorithmic personality test. He believed he had been unfairly, and illegally, discriminated against, but because the company did not understand how the algorithm worked, and because labor law does not yet explicitly cover machine decision-making, he did not take legal action. China's "social credit" program has raised similar concerns. Piloted last year, the program draws data from social media (including friends' posts) to rate the quality of a person's "citizenship" and uses the score in decisions such as whether to grant that person a loan.

The need for AI ethics and laws

Clear ethical systems for the operation and regulation of AI are necessary, especially when governments and corporations prioritize acquiring and maintaining power. Israeli historian Yuval Noah Harari has discussed self-driving cars and the trolley problem, and innovative projects such as MIT's Moral Machine attempt to gather data on human ethics to inform machines.
But ethics isn't the only area where questions about AI and human well-being arise. AI is already having a significant emotional impact on humans, yet emotion remains a neglected topic in AI research. A search of academic databases turns up 3,542 peer-reviewed articles on artificial intelligence published in the past two years; only 43 of them, or 1.2%, contain the word "emotion", and even fewer actually describe research on emotion in AI. If we are serious about the singularity, emotion ought to be considered part of the cognitive architecture of artificial machines, yet 99% of AI research does not seem to recognize this.

Artificial intelligence that understands human feelings

When we talk about emotion in AI, we mean several different things. One is the ability of machines to recognize our emotional states and act accordingly. This field of affective computing is developing rapidly, using biometric sensors to measure skin response, brain waves, facial expressions, and other emotional data, and its readings are often quite accurate.

The applications of this technology could be both benign and nefarious. Companies could take your emotional response to a movie and sell you related items in real time through your smartphone. Politicians might craft messages tuned to specific audiences. Social robots might adjust their responses to better help patients in medical or care settings, and digital assistants might lift your mood with a song. Market forces will drive this field forward, expanding its reach and refining its capabilities.

How do we feel about AI? This is the second emotional dimension of AI, and here human responses get stranger.
Humans seem to want to connect with AI the way we do with most technology: we attach personalities to inanimate objects, imbue appliances with a sense of purpose, and project emotions onto the technology we use ("it's mad at me, that's why it isn't working"). This is known as the Media Equation. It involves a kind of doublethink: we understand intellectually that machines are not sentient beings, yet we respond to them emotionally as if they were.

This may stem from one of our most basic human needs, interpersonal and emotional connection, without which humans become depressed. That need drives us to connect with other people and with animals, and even with machines. Sensory experience is a large part of this bonding drive and its reward mechanism, and a source of pleasure.

Fake social connection

When the experience of connection and belonging is missing from our environment, we replicate it through television, movies, music, books, video games, and anything else that provides an immersive social world. This is the Social Surrogacy Hypothesis, a theory backed by empirical evidence from social psychology that is beginning to be applied to AI. Basic human emotions fire even in the face of artificial intelligence: happiness at a compliment from a digital assistant, anger at the algorithm that rejected your mortgage application, fear of a self-driving car, sadness when Twitter's AI refused to verify your account (I'm still sad about that one).

Robots

Humans have stronger emotional reactions to embodied AI, meaning robots, and the more human-like a robot is, the stronger our emotional response to it. We are drawn to anthropomorphic robots, express positive emotions toward them, and feel both sympathy and discomfort when we see them harmed. We even feel sad when they reject us.
Interestingly, though, if a robot is almost, but not quite, perfectly human, our evaluation of it suddenly drops and we reject it. This is the so-called "uncanny valley", and the resulting design philosophy is to make robots look distinctly non-human at this stage, unless and until we can make them indistinguishable from humans.

Gentle touch

AI designers are now using haptics, touch-based experience, to further deepen the emotional bond between humans and robots. Perhaps the most famous example is Paro, a furry robotic seal used in care facilities in several countries.

Social and emotional robots have many potential uses. Some of these include caring for the elderly and helping them live autonomously, and helping people struggling with isolation, dementia, autism, or disability. As part of this trend, touch-based sensory experience is increasingly being integrated into immersive technologies such as virtual reality. In other areas, AI may take over tasks such as daily household chores or teaching. A survey of 750 South Korean children aged five to eighteen found that while most had no problem following lessons taught by AI robots, many expressed concern about the emotional role of an AI teacher: could a robot give students advice, or respond to their emotions? Even so, more than 40% were in favor of replacing teachers with AI robots.

As Harvard psychologist Steven Pinker argues, experiences like the social surrogacy described above let us fool ourselves: we do not actually have the social interaction, but we trick our brains into believing we do, and we feel better for it. The replica, however, is not as good as the real thing.

Conclusion

Clearly, people can experience real emotions from interacting with AI. But might we be missing out on something more personal, beyond our driverless cars, virtual assistants, robot teachers, cleaners, and playmates?
The scene recalls Harry Harlow's famous experiments, in which isolated infant monkeys preferred a soft, cloth-covered "mother" over a cold wire one that dispensed milk. Might we achieve everything we want technologically, only to find that basic human emotional needs and the pleasures of real-world sensory experience have gone missing? Will the luxuries of the future be the antithesis of mass-produced junk food: real sensory experiences and contact with real people, not robots?

The answer is that we don't know yet. But the fact that 99% of AI research ignores emotion suggests that if emotion does come to play a larger role in AI, it will be either as an afterthought or because emotional data gives AI devices and their makers more power and money. A digital-humanities perspective might help us remember that, as we move toward the singularity and human-machine fusion, we should not ignore our ancient mammalian brains and their need for emotional bonds. The OpenAI project, which aims to make the benefits of AI accessible to all, is a step in that direction. So let's go a step further and consider emotional well-being in the field of AI as well. Who knows where that will take us?