Using AI to predict AI, what will its future be?

Artificial intelligence is increasingly being applied to problems that humans have not yet solved, with notable success.

However, over the past few years, the amount of scientific research in the field of AI has grown exponentially, making it difficult for scientists and practitioners to keep track of these advances.

Data shows that the number of research papers in the field of machine learning doubles every 23 months. One of the reasons is that artificial intelligence is being used in different disciplines such as mathematics, statistics, physics, medicine and biochemistry.

Tools that propose new personalized research directions and ideas by gaining insights from scientific literature can significantly accelerate scientific progress. In the process of intersecting artificial intelligence with other fields, how can people judge which directions are meaningful and worth pursuing?

To this end, an international team led by Mario Krenn, an artificial intelligence researcher at the Max Planck Institute for the Science of Light (MPL), published a study titled "Predicting the Future of AI with AI: High-Quality Link Prediction in an Exponentially Growing Knowledge Network" on the preprint server arXiv.

(Source: arXiv)

The goal of this research effort is to design a program that can "read, understand, and then act" on AI-related literature, thereby opening the door to predicting and suggesting cross-disciplinary research ideas. The research team believes that in the long run, this will increase the productivity of AI researchers, open up new research avenues, and guide progress in the field.

Past practice has shown that new research ideas are often generated by establishing new connections between seemingly unrelated topics/fields.

This prompted the research team to formulate the evolution of AI literature as a temporal network modeling task and create a semantic network that can describe the content and evolution of AI literature since 1994.

The research team also explored a network containing 64,000 concepts (also called nodes) and 18 million connections between nodes, and used the semantic network as input to 10 different statistical and machine learning methods.

One of the most fundamental tasks, building the semantic network, makes it possible to extract knowledge from the literature and subsequently process it with computer algorithms.

Figure | In this work, the research team used 143,000 artificial intelligence and machine learning papers published on arXiv from 1992 to 2020, and used RAKE and other NLP tools to build a list of concepts. These concepts constitute the nodes of the semantic network; when two concepts appear together in the title or abstract of a paper, an edge is drawn between them. In this way, they built an evolving semantic network, with more concepts being studied together over time. The ultimate task is to predict which currently unconnected nodes (that is, concepts that have not yet been studied together in the scientific literature) will become connected within a few years. (Source: arXiv)
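The construction described in the figure caption can be sketched in a few lines. This is a minimal toy version, not the paper's pipeline: the concept names and paper records below are invented for illustration, and the real work extracts concepts from 143,000 arXiv papers with RAKE and other NLP tools.

```python
from itertools import combinations

# Toy stand-in for the paper's data: each "paper" is the set of concepts
# found in its title or abstract, plus a publication year (all invented).
papers = [
    {"year": 2015, "concepts": {"neural network", "dropout", "image classification"}},
    {"year": 2017, "concepts": {"neural network", "attention", "machine translation"}},
    {"year": 2019, "concepts": {"attention", "image classification"}},
]

def build_semantic_network(papers):
    """Return the edges of the concept co-occurrence network.

    An edge connects two concepts that appear together in at least one
    paper; we record the year of first co-occurrence so the network can
    be replayed as it evolves over time.
    """
    edges = {}
    for paper in sorted(papers, key=lambda p: p["year"]):
        for a, b in combinations(sorted(paper["concepts"]), 2):
            edges.setdefault((a, b), paper["year"])  # keep earliest year only
    return edges

edges = build_semantic_network(papers)
```

Replaying the edges in order of their year yields the "evolving" network the caption describes; link prediction then asks which concept pairs absent from `edges` today will appear in it a few years from now.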

Initially, the research team considered using large language models such as GPT-3 and PaLM to create such a network. The main challenge, however, is that these models still struggle with reasoning and have difficulty recognizing or proposing new combinations of concepts.

So they turned to an approach borrowed from biochemistry: creating knowledge networks from concepts that co-occur in scientific papers. Each biomolecule is represented by a node, and when a paper mentions two biomolecules, the corresponding nodes are connected. This approach was first proposed by Andrey Rzhetsky, professor of medicine and human genetics at the University of Chicago, and his team.

Using this approach, the research team captured the history of the AI field and used supercomputer simulations to extract meaningful statements about the collective behavior of scientists, repeating this process over a large corpus of papers to form a network that captures actionable content.

Based on this, the research team developed a new benchmark called Science4Cast and provided ten different methods to solve it. The team believes their work will help build a new tool for predicting trends in artificial intelligence research.
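To make the prediction task concrete, here is one classic link-prediction baseline, common-neighbor counting, applied to a toy snapshot of such a network. This is only a plausible illustration of the task, not one of the paper's ten benchmarked methods; the concept names are invented.

```python
from itertools import combinations

# Toy snapshot of a concept co-occurrence network: each edge is a pair of
# concepts already studied together (names invented for illustration).
edges = {
    ("attention", "neural network"),
    ("dropout", "neural network"),
    ("attention", "machine translation"),
}
nodes = {n for edge in edges for n in edge}

def common_neighbor_scores(edges, nodes):
    """Score each currently unconnected concept pair by the number of
    neighbors it shares: the more shared neighbors, the more likely the
    two concepts are to be studied together in the future."""
    neighbors = {n: set() for n in nodes}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    scores = {}
    for a, b in combinations(sorted(nodes), 2):
        if b not in neighbors[a]:  # only pairs not yet studied together
            scores[(a, b)] = len(neighbors[a] & neighbors[b])
    return scores

scores = common_neighbor_scores(edges, nodes)
```

Ranking unconnected pairs by score and checking, years later, which ones actually gained an edge is exactly the evaluation a benchmark like Science4Cast performs, only at the scale of tens of thousands of concepts and millions of edges.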

In the past, whenever one opened any AI and machine learning related forum, one would find that “keeping up with the advancement of AI” was the primary topic of discussion.

Perhaps this research can relieve some of this pressure for people.

Paper link:

https://arxiv.org/pdf/2210.00881.pdf
