More than 1,600 scientists participated, and only 4% believed that AI tools are currently "necessary"

Over the past decade, the number of research papers on artificial intelligence (AI) has increased significantly across all fields.

Scientists have begun using AI tools to help summarize and write research papers and code. Some researchers are trying to use generative AI technology to explore new areas, such as the discovery of protein structures, improved weather forecasts, and innovations in medical diagnosis, among other promising areas.

AI has already permeated scientific research. So, what do scientists think of it?

Recently, the top journal Nature conducted a survey of more than 1,600 researchers around the world. The results show that AI tools are becoming more and more common in the scientific field, and many scientists expect them to soon become the core of research practice. In addition, more than half of the respondents believe that AI tools will become very important or indispensable in the next decade.

The relevant survey results have been published in Nature under the title "AI and science: what 1,600 researchers think".

In the survey, two-thirds of respondents believed that AI provides a faster way to process data, 58% believed that AI speeds up calculations that were previously infeasible, and 55% mentioned that AI saves time and money.

“AI has allowed me to make progress on biological problems that were previously intractable,” says Irene Kaplow, a computational biologist at Duke University.

However, 69% of researchers said AI tools could lead to greater reliance on pattern recognition rather than deep understanding, 58% believed AI could reinforce bias or discrimination in data, 55% believed these tools could increase the likelihood of fraud, and 53% pointed out that careless use could make research irreproducible.

“The main problem is that AI is challenging our existing standards of evidence and truth,” says Jeffrey Chuang, who works on cancer image analysis at the Jackson Laboratory in Connecticut.

Scientists' concerns and excitement

To assess the views of active researchers, Nature emailed more than 40,000 scientists who had published papers in the final four months of 2022, and also invited readers of the Nature Briefing newsletter to take the survey.

Of these respondents, 48% directly develop or research AI, 30% use AI in research, and the remaining 22% do not use AI in science.

Among those who use AI in their research, more than a quarter believe AI tools will become "essential" in the next decade, while only 4% consider them essential now; another 47% believe AI will become very useful. Researchers who don't use AI are less enthusiastic, but even so, 9% believe these technologies will become essential in the next decade, and another 34% say they will be very useful.

When asked to choose from a list of possible negative impacts of generative AI, 68% of researchers were concerned about inaccurate information dissemination, another 68% believed it would make plagiarism easier and harder to detect, and 66% were concerned about the introduction of errors or inaccuracies into research papers.

In addition, respondents mentioned concerns about falsified research, false information, and bias if AI tools used for medical diagnosis are trained on data with historical biases. Scientists have already seen evidence of this: for example, a team in the United States reported that when they asked GPT-4 to provide diagnoses and treatment recommendations for clinical case studies, the answers changed depending on the patient’s race or gender.

“Large language models (LLMs) are being abused to produce inaccurate and fake but professional-sounding results,” said Isabella Degen, a software engineer and former entrepreneur who is pursuing a PhD in medical AI at the University of Bristol in the UK. “In my opinion, we don’t have a clear enough understanding of the line between correct use and abuse.”

The most obvious benefit, researchers say, is that LLMs could help non-native English speakers improve the grammar and style of their research papers and summarize or translate other work. "Although there is a small minority of malicious players, the academic community can show how to use these tools for good," says Kedar Hippalgaonkar, a materials scientist at the National University of Singapore.

Even among researchers interested in AI, those who regularly use LLMs in their work remain a minority. 28% of those who study AI report using generative AI products daily or weekly, compared with 13% of those who only use AI in their research and just 1% of the rest, although many have at least tried these tools. Moreover, the most popular use across all groups is creative entertainment unrelated to research; smaller groups use the tools to write code, develop research ideas, and help write research papers.

In addition, some scientists are not satisfied with the output of LLMs. A researcher who uses LLMs to help edit papers wrote: "ChatGPT seems to copy all the bad writing habits of humans." Physicist Johannes Niskanen of the University of Turku in Finland said: "If we start using AI to read and write articles, science will soon change from 'for humans by humans' to 'for machines by machines'."

AI development faces difficulties

About half of the scientists surveyed said they had encountered obstacles in developing or using AI. The top concerns for researchers working directly on AI were insufficient computing resources, insufficient funding for their work, and insufficient access to the high-quality data needed to run AI. Those who work in other fields but use AI in their research were more concerned about a shortage of skilled scientists and training resources, and they also cited security and privacy concerns. Researchers who don't use AI said they don't need it, don't find it practical, or lack the experience and time to explore it.

Another theme that emerged in the survey was that commercial companies dominate ownership of AI computing resources and AI tools. While 23% of scientists who study AI say they collaborate with or work for companies that develop these tools (Google and Microsoft were most often mentioned), only 7% of those who use AI do so. Overall, just over half of respondents said it is very or somewhat important that researchers using AI collaborate with scientists at these companies.

Researchers have repeatedly warned that naive use of AI tools in science could lead to errors, false positives and irreproducible findings, potentially wasting time and effort. Some scientists have expressed concerns about the presence of poor-quality research in papers using AI.

“Machine learning can be useful sometimes, but AI raises more questions than it helps,” said Lior Shamir, a computer scientist at Kansas State University in Manhattan. “Scientists using AI without understanding what they are doing can lead to spurious discoveries.”

When asked whether journal editors and peer reviewers are adequately reviewing papers that use AI, respondents were split. Among scientists who work with AI but are not directly developing it, about half said they didn’t know, a quarter said the review was adequate, and a quarter said it was inadequate. Those who are directly developing AI tended to have a more positive view of the editorial and review process.

In addition, Nature asked respondents how concerned they were about seven potential impacts of AI on society, and two-thirds said they were very or somewhat concerned. Autonomous AI weapons and AI-assisted surveillance topped the list, while the idea that AI could pose an existential threat to humanity worried respondents least.

Yet many researchers say AI and LLMs are here to stay. "AI is transformative, and we must now focus on ensuring that it brings more benefits than problems," writes Yury Popov, a liver disease specialist at Beth Israel Deaconess Medical Center in Boston, Massachusetts.

Reference Links:

https://www.nature.com/articles/d41586-023-02980-0

Author: Yan Yimi Editor: Academic Jun
