In the biological sense, the word "nightshade" usually refers to a highly poisonous plant. Recently, however, it has taken on a new meaning: a computer science team at the University of Chicago has developed a data poisoning tool called "Nightshade" to help artists fight back against the unauthorized use of their works by artificial intelligence. The tool was introduced in the team's recent paper, "Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models."

What is Nightshade?

The Nightshade data poisoning tool lets artists alter the pixels of their work in ways so subtle that they are invisible to the naked eye, yet mislead an AI program into interpreting the image differently from a human viewer. "(Such changes) could turn a dog into a cat, a car into a cow, and the list goes on," MIT Technology Review explained in an article about the tool. In academic terms this technique is called "data poisoning": it undermines the performance of generative AI tools, causing them to produce undesirable or inaccurate outputs, so that the generated images drift away from what users actually asked for.

The article added that AI tools typically draw their training data from billions of online images. Within a dataset of that scale, the more widely the Nightshade tool is used, the greater the likelihood that an AI model will be poisoned.

The overall workflow of the Nightshade data poisoning tool. Image: University of Chicago

Why develop Nightshade?

Speaking about the origin of the project, its leader Ben Zhao, a professor of computer science at the University of Chicago, said that AI companies often use artists' works to train their image generation tools without permission, disregarding the artists' copyrights and intellectual property rights. He hopes that Nightshade will make AI companies pay a price for that disrespect and tilt the balance of power back toward artists.

Such disregard for artists' copyrights and intellectual property may well be common. According to the technology news site VentureBeat, artists, performers, and record companies have filed lawsuits against several AI companies on the grounds that these companies used artists' works to train their AI tools without permission. OpenAI, the developer of ChatGPT, is among those being sued.

How will Nightshade affect the artificial intelligence industry?

The MIT Technology Review commentary also argued that, by disrupting or influencing the behavior of image-generating AI programs, Nightshade could become an important lever for pushing AI companies to pay for copyrighted material and to show greater respect for artists' creative rights, with significant consequences for society and the industry.

Artists do have some say in the current development of artificial intelligence. For example, some AI companies have set up systems or processes that let artists decide whether their works may be included in a model's training data. Even when an artist opts out, however, AI companies still retain considerable control over how artistic content is used. For now, artists' voices are far from loud enough.
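To make the idea of imperceptibly altered pixels concrete, here is a minimal toy sketch in Python with NumPy. It is emphatically not the Nightshade algorithm, which computes targeted, optimization-based perturbations designed to push a model toward a specific wrong concept; the function name, the ±2 intensity-level bound, and the use of random noise are illustrative assumptions only.

```python
import numpy as np

def add_imperceptible_perturbation(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Toy illustration: nudge every pixel by at most `epsilon` intensity levels.

    A shift of about 2 out of 255 levels is invisible to the naked eye, yet the
    numerical data a model trains on is no longer identical to the original.
    Nightshade itself uses carefully optimized (not random) perturbations.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)  # bounded random shift per pixel
    poisoned = np.clip(image.astype(np.float64) + noise, 0, 255)
    return poisoned.astype(image.dtype)

# Example with a random 64x64 RGB "image": the poisoned copy differs by at most 2 levels per pixel.
original = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
poisoned = add_imperceptible_perturbation(original)
print(np.abs(poisoned.astype(int) - original.astype(int)).max())  # prints a value <= 2
```

The real tool replaces the random noise above with perturbations chosen so that, when many such images end up in a training set, the model learns a wrong association (for example, images of dogs that teach it "cat"), which is why the scale of its adoption matters for the poisoning effect described above.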
Generative AI: a global emerging technology

Focusing on technologies expected to have a positive impact on society within the next three to five years, the World Economic Forum released its "Top 10 Emerging Technologies of 2023" report. The report lists the ten most promising emerging technologies of 2023, with generative artificial intelligence ranked second. "While generative AI is currently used primarily in areas such as generating text, computer programming, images, and sound, the technology has the potential to be used in fields as diverse as drug design, architecture, and engineering," the report states. However, as generative AI becomes increasingly integrated into many aspects of daily life, the report also stresses that applications should meet agreed professional and ethical standards in order to build public trust.

Author of this article: Victoria Masterson, Senior Writer, Forum Agenda

This article originally appeared on the World Economic Forum's Agenda blog.