AI can read people's minds! How long can you keep your little secrets secret?

Reviewing expert: Zheng Yuanpan

Professor, Zhengzhou University of Light Industry

You may remember a scene from Peyton Reed's Ant-Man and the Wasp: to find out the whereabouts of the protagonist Scott Lang, the villains inject his friend Luis with a dose of "truth serum." One shot of the drug and Luis tells them everything, very nearly airing the protagonist's entire love life in the process.

Source: Stills from Ant-Man and the Wasp

Recently, scientists from the University of Texas at Austin published a paper in Nature Neuroscience describing a modern "mind reader" built with the help of a large language model. It cannot force you to "tell the truth," but it can translate your brain activity directly into language — and even describe the images in your head.

Researchers prepare to collect brain activity data Source: Nolan Zunk/The University of Texas at Austin

Movies used to be just movies, but modern "mind-reading machines" have upended that assumption. Before this "mind-reading machine" appeared, the device closest to a "truth serum" was probably the polygraph — and even that only infers whether a person is lying indirectly, from physiological fluctuations such as heart rate and brain waves. So how does this "mind-reading machine" actually work?

1 How can AI directly read thoughts?

Direct mind reading, or "mind decoding," means extracting and interpreting thoughts directly from the brain — for example, recognizing patterns of neuronal activity and associating those patterns with specific thoughts or perceptions.

On May 1, 2023, researchers from the University of Texas at Austin published a paper titled "Semantic reconstruction of continuous language from non-invasive brain recordings" in the journal Nature Neuroscience.

The study developed a new artificial intelligence system called a semantic decoder, which can non-invasively translate the stories — and even images — in a participant's head into a continuous stream of text simply by analyzing functional magnetic resonance imaging (fMRI) data. Such a system could one day help people who are mentally aware but unable to speak, such as stroke patients, to communicate clearly.

Interestingly, this work relies in part on large language models (LLMs) — the same technology underlying ChatGPT, the recently popular AI chat application.

Source: Screenshot of a paper published in Nature Neuroscience

In the training phase, scientists had volunteers lie in an fMRI scanner and listen to podcast stories through headphones while their brain activity was recorded. The researchers then used a language model based on GPT-1 to link the brain activity shown in the participants' fMRI data with the language features of the podcast stories.

Semantic decoder training flow chart Source: Semantic reconstruction of continuous language from non-invasive brain recordings

After the volunteers had listened to many hours of podcast stories, the researchers played them a completely new story and asked the decoder to describe, from brain activity alone, what each volunteer was hearing. The results showed that the system could reconstruct the story using nothing but the fMRI data. It is not 100% accurate, but it captures the gist of the story in the volunteer's head.

The sentences heard by the volunteers (left) and the sentences the decoder produced from brain activity (right). Blue marks exactly matching words; purple marks roughly accurate wording. Source: Semantic reconstruction of continuous language from non-invasive brain recordings
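The study's real pipeline is far more sophisticated, but its core decoding loop — generate candidate word sequences with a language model, then keep those whose predicted brain response best matches the recorded fMRI data — can be illustrated with a toy sketch. Everything below (the six-word vocabulary, the random-vector "encoding model") is an invented stand-in for illustration, not the paper's actual components:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: in the real study, an encoding model (a regression on
# GPT-derived language features) predicts fMRI responses from text.
# Here we fake word features with fixed random vectors.
VOCAB = ["the", "cat", "sat", "on", "a", "mat"]
FEATURES = {w: rng.normal(size=8) for w in VOCAB}

def predict_brain_response(words):
    """Predicted fMRI pattern for a word sequence (toy: mean of word vectors)."""
    return np.mean([FEATURES[w] for w in words], axis=0)

def decode(observed, length=3, beam_width=3):
    """Beam search: keep the word sequences whose predicted brain
    response is closest to the observed fMRI pattern."""
    beams = [[]]
    for _ in range(length):
        candidates = [seq + [w] for seq in beams for w in VOCAB]
        candidates.sort(
            key=lambda seq: np.linalg.norm(predict_brain_response(seq) - observed)
        )
        beams = candidates[:beam_width]
    return beams[0]

# Simulate a recording: the volunteer "heard" this phrase.
true_phrase = ["the", "cat", "sat"]
observed = predict_brain_response(true_phrase) + rng.normal(scale=0.01, size=8)
print(" ".join(decode(observed)))
```

In the actual study, the candidate continuations come from GPT-1 and the encoding model is fit to each individual volunteer's fMRI data — which is part of why the decoder is "tailor-made" for each person.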

Beyond text, the system can also decode visual content. In one experiment, participants watched a short animated video with no sound or captions, and the decoder then produced language describing the events in the video based on their brain activity alone.

In the experiment, the decoding system described in words the story unfolding in the video the volunteers watched. Source: Semantic reconstruction of continuous language from non-invasive brain recordings

But the study also raises a new privacy question: could someone with bad intentions use a semantic decoder to steal other people's thoughts? Professor Alexander Huth, the paper's corresponding author, notes that the decoder is tailor-made for each person and requires more than ten hours of training before use. Volunteers must remain completely still and keep their attention on the story they are hearing for the system to work at all. The team also tested the system on people it had not been trained on, and the decoded output bore little resemblance to what they actually heard.

2 Current progress

Although we are still far from fully decoding the human mind, there has been some key progress. Scientists from Kyoto University in Japan have successfully used AI to reconstruct the images that people see or imagine from their brain activity. The research paper was published in PLOS Computational Biology. The authors wrote: "Here, we propose a new image reconstruction method in which the pixel values of an image are optimized to make its deep neural network features similar to those decoded from human brain activity at multiple levels." They added: "Although our model was trained only on natural images, our method successfully generalized the reconstruction to artificial shapes, indicating that our model does indeed 'reconstruct' or 'generate' images from brain activity, rather than simply matching them to known samples."

Source: See watermark

The core of this technique is that when a person views an image, the brain forms an internal copy of it. Using a specific algorithm, the AI associates the volunteers' fMRI signals with the deep neural network (DNN) features of the image, and this mapping trains a model that can "read" what you are seeing. It should be emphasized, however, that most current research relies on coarse brain imaging and limited data, so both the range of thoughts that can be decoded and the decoding accuracy remain limited.

The first row shows the real pictures; the three rows below are AI reconstructions. Source: see watermark
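The reconstruction idea described above — keep adjusting pixel values until the image's DNN features match the features decoded from brain activity — can be sketched in miniature. Here the "feature extractor" is a toy one-layer map and the "decoded features" are simulated; both are stand-ins for illustration, not the study's real network or brain data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in "DNN feature extractor": a fixed random linear map over a tanh
# nonlinearity. The real work matches features from several layers of a
# deep network; this toy keeps the core idea (match features, not pixels).
W = rng.normal(size=(16, 64)) / 8.0   # 64-"pixel" image -> 16 features

def features(img):
    return W @ np.tanh(img)

# Pretend these features were decoded from brain activity: in reality a
# separate model maps fMRI signals to the DNN features of the seen image.
target_img = rng.normal(size=64)
decoded_feats = features(target_img)

# Reconstruction: start from noise and adjust pixel values by gradient
# descent so the image's features approach the decoded features.
img = rng.normal(size=64)
lr = 0.1
for _ in range(5000):
    err = features(img) - decoded_feats             # feature mismatch
    grad = (W.T @ err) * (1.0 - np.tanh(img) ** 2)  # chain rule through tanh
    img -= lr * grad

print(np.linalg.norm(features(img) - decoded_feats))
```

Optimizing in feature space rather than pixel space is what lets such methods generalize: two images can differ pixel by pixel yet share the higher-level features the brain activity encodes.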

In addition to accuracy, latency is also an issue that needs to be addressed.

The instrument needs time to read and process signals when interacting with a person, and different people's brains show different activity patterns — a significant challenge for the system's processing performance. Moreover, fMRI requires participants to lie still inside a dedicated machine to collect reliable data, a demanding requirement that will be a major obstacle to wider adoption.

3 Future Possibilities

In the future, as brain imaging technology advances and more data become available, we will be able to train AI that decodes more complex and subtle thoughts. This could not only deepen our understanding of the brain and mind, but also open new channels of communication — for example, letting people with motor impairments control interfaces directly with their thoughts, or helping us understand the thoughts of people with language impairments.

Source: pixabay

However, mind-reading AI also raises a series of moral and ethical questions. Who should have access to our private thoughts? How do we protect the privacy of our minds? These are issues we must confront and resolve as we develop this technology.

For now, we need not worry about this technology threatening human privacy, because every successful experiment depends on the volunteer's full cooperation. If you do not want a machine reading your thoughts, simply letting your mind wander is enough to put up an impenetrable barrier.

Overall, AI that reads human thoughts directly is a field full of both challenges and opportunities. Like every innovative technology, it is a double-edged sword: it can benefit humanity, but it may also erode privacy and raise serious ethical problems. How to develop it, and how to use it, are questions that deserve our careful thought and discussion.
