Unable to avoid information manipulation, is the human brain outdated?

Why are humans so ready to believe obvious nonsense when there is no hard evidence for it? Why do we cling to irrational beliefs even in the face of clear evidence against them? Part of the answer may lie in the millions of years over which our brains evolved to make quick decisions about uncertainties that could threaten our lives.

This article is excerpted from "Who Is Rolling the Dice? Uncertain Mathematics" (People's Posts and Telecommunications Press, July 2022 edition). The title was added by the editor, and some passages have been omitted.

By Ian Stewart

Translation | He Sheng

Many aspects of brain function can be thought of as a kind of decision making. When we look out at the world, our visual system has to figure out what objects it sees, guess what those objects are like, assess their potential for threat or benefit, and get us to act on those assessments. Psychologists, behavioral scientists, and AI researchers agree that in some important ways the brain works much like a Bayesian decision machine. It embodies beliefs about the world that are either temporarily or permanently wired into its structure, and these lead it to make decisions that closely resemble the outcomes of Bayesian probability models. (I said earlier that our intuitions about probability are generally pretty poor. That does not contradict what is being said here, because the inner workings of these probability models are not consciously accessible.)

The idea that the brain is Bayesian explains many other features of the human response to uncertainty. In particular, it helps explain why superstition takes root so easily. In Bayesian statistics, probabilities are degrees of belief. When we assess a probability as 50-50, we are really saying that we are as willing to believe the claim as to disbelieve it. Our brains, then, have evolved to embody beliefs about the world, temporarily or permanently hardwired into their structure.

It's not just the human brain that works this way. We can trace our brain structures back to the distant past, to our evolutionary ancestors, mammals and even reptiles. Those creatures' brains also embodied "beliefs." Not the kind of beliefs we verbalize today, like "breaking a mirror brings seven years of 'bad luck.'" Most of our own brain beliefs aren't like that, either. I mean beliefs like "if I stick my tongue out like this, I'm more likely to catch a fly," which are written into the areas of the brain that activate the muscles involved. Human language adds an extra layer to beliefs, making it possible to express them and, more importantly, to communicate them to other people.

To build a simple but informative model, imagine an area of the brain containing many neurons. They can be connected by synapses, and each connection has a "connection strength." Some connections transmit weak signals, some transmit strong ones, and some do not exist at all, so they transmit nothing. The stronger the signal, the greater the response of the receiving neuron. We can express the strengths as numbers, which is useful when we turn this into a mathematical model: in some suitable units, a weak connection might have a strength of 0.2, a strong connection a strength of 3.5, and a non-existent connection a strength of 0.

When a neuron responds to an incoming signal, its electrical state changes rapidly: it "fires." This creates an electrical impulse that can be passed on to other neurons, and which neurons receive it is determined by the network's connections. A neuron fires when its incoming signals push its state above a certain threshold. There are two different types of signal: excitatory, which encourages the neuron to fire, and inhibitory, which suppresses firing. It is as if the neuron adds up the strengths of the incoming signals, counting excitatory signals as positive and inhibitory signals as negative, and fires only when the total is large enough.
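
To make the threshold rule concrete, here is a minimal sketch in Python. It is purely an illustration with invented numbers, not a model from the book: excitatory connections carry positive strengths, inhibitory ones negative strengths, and the neuron fires only if the weighted sum of its inputs exceeds its threshold.

```python
# Toy threshold-neuron sketch: excitatory weights are positive, inhibitory
# weights are negative; the neuron fires only when the weighted sum of
# incoming signals exceeds its threshold.

def fires(inputs, strengths, threshold):
    """Return True if the weighted sum of incoming signals crosses the threshold."""
    total = sum(signal * strength for signal, strength in zip(inputs, strengths))
    return total > threshold

# Hypothetical connection strengths: 0.2 (weak), 3.5 (strong), 0 (absent),
# and -1.0 for an inhibitory connection.
incoming = [1, 1, 1, 1]            # which upstream neurons are currently firing
strengths = [0.2, 3.5, 0.0, -1.0]  # connection strengths (negative = inhibitory)

print(fires(incoming, strengths, threshold=2.0))  # True: 0.2 + 3.5 - 1.0 = 2.7 > 2.0
```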

In a newborn’s brain, many neurons are randomly connected, but over time, certain synapses change their strength. Some synapses may be removed entirely, and new ones may grow. Donald Hebb discovered a pattern of “learning” in neural networks that is now called Hebbian learning. “Nerve cells that fire together wire together,” meaning that if two neurons fire in near-synchronous fashion, the strength of the connection between them grows. In the context of Bayesian belief, the strength of the connection represents the brain’s belief that when one neuron fires, the other should fire, too. Hebbian learning reinforces the brain’s belief structure.
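
The Hebbian rule itself can be sketched in a few lines. The following toy example is my own illustration, not Hebb's formulation: whenever two neurons are active at (nearly) the same time, the strength of the connection between them is nudged upward.

```python
import random

# Hebbian learning sketch: "cells that fire together wire together".
# weights[i][j] is the connection strength from neuron i to neuron j.

def hebbian_update(weights, activity, learning_rate=0.1):
    """Strengthen the connection between every pair of co-active neurons."""
    n = len(activity)
    for i in range(n):
        for j in range(n):
            if i != j and activity[i] and activity[j]:
                weights[i][j] += learning_rate
    return weights

# Three neurons with random, initially weak connections.
weights = [[0.0 if i == j else random.uniform(0, 0.2) for j in range(3)]
           for i in range(3)]

# Neurons 0 and 1 repeatedly fire together; neuron 2 stays silent.
for _ in range(20):
    weights = hebbian_update(weights, activity=[True, True, False])

# The 0<->1 connections end up much stronger than any involving neuron 2.
print(round(weights[0][1], 2), round(weights[0][2], 2))
```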

Psychologists have found that when people are told new information, they do not simply file it away in their heads. From an evolutionary perspective, doing so would be disastrous, because it is not a good idea to believe everything you are told. People lie and try to mislead others, usually in order to control them. Nature lies too: what looks like a leopard's dangling tail may, on closer inspection, be only a hanging vine or a fruit, and a stick insect pretends to be a twig. So when we receive new information, we evaluate it against our existing beliefs. If we are smart enough, we also evaluate the information's credibility: if the source is reliable, we are more likely to believe it; if the source is unreliable, we are less likely to. Whether we accept new information and change our beliefs accordingly is the result of an inner weighing of factors such as what we already believe, how those beliefs relate to the new information, and how much we trust the new information to be true. This weighing usually happens subconsciously, but we can also reason about the information consciously.

In a bottom-up explanation, what happens is that complex arrays of neurons fire and send signals to one another. How these signals cancel or reinforce each other determines whether the new information is accepted, and the connection strengths change accordingly. This already explains why it is so hard to convince "true believers" that they are wrong, even when the evidence seems overwhelming to everyone else. If someone firmly believes in UFOs and the US government releases a news story explaining that a supposed sighting was actually a balloon experiment, their Bayesian brain will almost certainly treat the explanation as propaganda. The news will most likely reinforce their conviction that the US government cannot be trusted on this issue, and they will be glad they did not fall for the government's lies. Beliefs cut both ways: usually in the absence of independent verification, those who do not believe in UFOs will accept the explanation as fact, and the same information will reinforce their disbelief. They will be glad they were not gullible enough to believe in UFOs.
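
The asymmetry can be made concrete with a toy Bayesian update. The numbers below are invented for illustration: two observers apply Bayes' rule to the same official denial, but because they assign different probabilities to the source lying, the same report pushes their beliefs in opposite directions.

```python
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: updated belief in a hypothesis after seeing the evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Hypothesis: "the sighting really was a UFO".
# Evidence: an official statement that it was only a balloon experiment.

# A believer who distrusts the source: a denial seems likely either way,
# and slightly *more* likely if there is something to cover up.
print(posterior(prior=0.9, p_evidence_if_true=0.95, p_evidence_if_false=0.7))
# ~0.92: belief in UFOs goes up

# A sceptic who trusts the source: a false denial seems unlikely to them.
print(posterior(prior=0.1, p_evidence_if_true=0.2, p_evidence_if_false=0.9))
# ~0.02: belief in UFOs goes down
```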

Human culture and language make it possible to transfer belief systems from one brain to another. The process is neither precise nor reliable, but it works. Depending on the beliefs involved and on who is describing the process, it can be called anything from "education" to "brainwashing" to "bringing children up properly." Children's brains are plastic, and their ability to evaluate evidence is still developing: think of Santa Claus, the Tooth Fairy, and the Easter Bunny, though children are smart, and many know they have to play along to get the reward. There is a saying: "Give me a child until he is seven, and I will mold him for life." This can mean two things: one is that what is learned early lasts longest; the other is that exposing children to a belief system makes them likely to stick with it into adulthood. Both may be true, and in some ways they amount to the same thing.

The Bayesian brain theory has roots in many scientific fields: besides the obvious one, Bayesian statistics, they include machine intelligence and psychology. In the 1860s, Hermann Helmholtz, a pioneer of the physics and psychology of human perception, proposed that the brain constructs its perceptions by building a probabilistic model of the external world. In 1983, Geoffrey Hinton, working in artificial intelligence, proposed that the human brain is a machine that makes decisions about the uncertainties it encounters when observing the external world. In the 1990s this idea was turned into mathematical models based on probability theory, among them the Helmholtz machine. This is not a mechanical device but a mathematical abstraction consisting of two networks of mathematically modeled "neurons." One is a bottom-up recognition network, which is trained on real data and represents it through a set of hidden variables. The other is a top-down generative network, which generates values of these hidden variables and from them produces data. The training process uses a learning algorithm to modify the structure of the two networks so that they represent the data accurately. The two networks are modified in turn, and the whole procedure is called the wake-sleep algorithm.
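
The wake-sleep idea can be sketched in a few dozen lines. What follows is a drastically simplified, single-layer illustration of my own, not the networks described above: in the "wake" phase the recognition network explains a real data item and the generative network is nudged to reproduce it; in the "sleep" phase the generative network "dreams" a data item and the recognition network is nudged to recover the hidden causes of the dream.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy one-layer Helmholtz machine with binary units:
#   recognition net:  data d -> hidden causes h   (bottom-up)
#   generative net:   hidden causes h -> data d   (top-down)
n_data, n_hidden, lr = 6, 3, 0.05
R = rng.normal(0, 0.1, (n_hidden, n_data))   # recognition weights
G = rng.normal(0, 0.1, (n_data, n_hidden))   # generative weights
b_h = np.zeros(n_hidden)                     # generative prior over hidden units
b_d = np.zeros(n_data)                       # generative bias over data units

def sample(p):
    """Draw binary values with the given probabilities."""
    return (rng.random(p.shape) < p).astype(float)

def wake_sleep_step(d):
    global R, G, b_h, b_d
    # Wake phase: recognise causes of real data, then improve the generator.
    h = sample(sigmoid(R @ d))
    p_d = sigmoid(G @ h + b_d)
    G += lr * np.outer(d - p_d, h)
    b_d += lr * (d - p_d)
    b_h += lr * (h - sigmoid(b_h))
    # Sleep phase: dream up data from the generator, then improve recognition.
    h_dream = sample(sigmoid(b_h))
    d_dream = sample(sigmoid(G @ h_dream + b_d))
    p_h = sigmoid(R @ d_dream)
    R += lr * np.outer(h_dream - p_h, d_dream)

# Train on a toy "world" consisting of two binary patterns.
patterns = [np.array([1, 1, 1, 0, 0, 0.]), np.array([0, 0, 0, 1, 1, 1.])]
for _ in range(2000):
    wake_sleep_step(patterns[rng.integers(2)])
```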

"Deep learning" has more layers of similar structures, and it has achieved considerable success in the field of artificial intelligence. Its applications include computer recognition of natural language and computer victory in the Chinese game of Go. Before this, it had been proven that playing checkers with a computer would always result in a draw, even if the game was played perfectly. In 1996, IBM's "Deep Blue" challenged chess grandmaster and world champion Garry Kasparov, but it lost 4:2 in a 6-game match. After being greatly improved, "Deep Blue" won the subsequent game with

However, those programs relied on brute-force search, not the artificial-intelligence techniques later used to win at Go.

Go originated in China more than 2,500 years ago. It is played on a 19×19 board and is seemingly simple but actually unfathomable. Two players, one with black stones and one with white, take turns placing stones on the board, trying to surround territory and capture the opponent's stones; whoever surrounds the larger area wins. Mathematical analysis of Go is very limited. David Benson devised an algorithm that determines when a group of stones cannot be captured no matter how the opponent plays. Elwyn Berlekamp and David Wolfe analyzed the intricate mathematics of the endgame, when many points on the board are occupied and the choice of where to play is more complicated than usual. At that stage the game has effectively split into several nearly independent regions, and the players must decide in which region to play next. Their mathematical techniques attach to each position a numerical value, or a deeper structure, and combine these values to give rules for playing well.

In 2015, Google's DeepMind began testing a Go algorithm called AlphaGo, based on two deep learning networks: a value network that evaluates board positions and a policy network that chooses the next move. These networks were trained on games played by human experts and on games the algorithm played against itself. In 2016, AlphaGo played the top professional Lee Sedol and won 4:1. The programmers tracked down why AlphaGo had lost the one game and corrected its strategy. In 2017, AlphaGo defeated Ke Jie, then ranked the world's number one player, in a three-game match. AlphaGo's "playing style" has an interesting feature which shows that deep learning algorithms need not work like the human brain: it often places stones in positions no human player would consider, and goes on to win. Ke Jie said: "Humans have evolved through thousands of years of actual combat, but computers now tell us that humans are all wrong. I don't think anyone has even touched the edge of the truth of Go."

There is no logical reason why AI should work in the same way as human intelligence, which is one reason for the adjective "artificial." Nevertheless, the mathematical structures embodied in these electronic circuits have some similarities to the cognitive models of the brain developed by neuroscientists. A creative feedback loop has therefore grown up between AI and cognitive science, with each borrowing ideas from the other. Sometimes, to some extent, our brains and artificial brains seem to operate on similar structural principles, although in the materials they are made of and the way they process signals they are, of course, very different.

Most of us have experienced uncertainty at some point: “Where am I?” In 2005, neuroscientists Edvard and May-Britt Moser and their students discovered that the rat brain has a special type of neuron, called a grid cell, that models the rat’s location in space. The grid cells are located in a region of the brain with a somewhat awkward name: the dorsal-caudal medial entorhinal cortex. It is a core processing unit for location and memory. Like the visual cortex, it has a layered structure, with different excitation patterns between layers.

The scientists implanted electrodes in the rats' brains and then let the animals move freely around an open arena. As a rat moved, they monitored which cells in its brain fired. It turned out that a given cell fired whenever the animal was in one of many small patches of space (its "firing fields"), and those patches formed a hexagonal grid. The researchers theorized that these nerve cells build a mental representation of space, a cognitive map in some kind of coordinate system that tells the animal's brain where it is. The activity of the grid cells is constantly updated as the animal moves. Some cells fire regardless of which direction the animal is heading; others are direction-sensitive and respond to heading as well.

It is not yet clear exactly how the grid cells tell the animal where it is; interestingly, the geometric arrangement of the grid cells themselves within the brain is irregular. Somehow these layers of grid cells "calculate" the animal's position by integrating the tiny movements it makes as it wanders around. Mathematically, this can be done with vector calculations, in which the position of a moving object is found by adding up many tiny changes, each with a magnitude and a direction. Before better navigational instruments were invented, sailors found their way in essentially this manner, known as "dead reckoning."
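
Dead reckoning is easy to sketch. The example below is my own illustration with invented numbers: each small movement is a vector with a heading and a distance, and summing the vectors gives the current position relative to the starting point.

```python
import math

def dead_reckoning(steps):
    """Add up many small moves (heading in degrees, distance) to get a position."""
    x, y = 0.0, 0.0
    for heading_deg, distance in steps:
        heading = math.radians(heading_deg)
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    return x, y

# A wandering path: mostly eastward, with a detour to the north and back.
path = [(0, 1.0), (0, 1.0), (90, 0.5), (90, 0.5), (0, 1.0), (270, 1.0)]
print(dead_reckoning(path))  # roughly (3.0, 0.0)
```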

We know that the network of grid cells can work without any visual input, because the firing pattern persists even in total darkness. It does, however, respond strongly to visual input. Suppose, for example, that a rat runs inside a cylindrical enclosure with a card on the wall as a reference point. We pick a particular grid neuron and measure its grid of small firing fields. If we then rotate the cylinder and measure again, the grid rotates with it. When we place the animal in a new environment, the grid and its spacing do not change. The whole system is very robust, however the grid cells do their calculations.

In 2018, Andrea Banino and his colleagues published a study showing how deep learning networks can perform similar navigation tasks. Their network contains many feedback loops, because navigation seems to rely on feeding the output of one processing step in as the input to the next; in effect it is a discrete dynamical system, with the network as the function being iterated. They trained the network on the paths taken by foraging rodents (rats and mice), supplemented by the kind of information that other parts of the brain might send to grid neurons.
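
"Discrete dynamical system" here just means that the network's next state is a fixed function of its current state and the current movement signal, applied over and over. The sketch below is my own schematic illustration, not Banino's architecture; the weights are random and untrained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Schematic recurrent update: the network is a discrete dynamical system
#   state[t+1] = f(state[t], input[t])
# where f is, in this toy version, a single tanh layer with fixed random weights.
n_state, n_input = 16, 2
W_state = rng.normal(0, 0.3, (n_state, n_state))
W_input = rng.normal(0, 0.3, (n_state, n_input))

def step(state, velocity):
    """One iteration of the dynamical system, driven by a movement signal."""
    return np.tanh(W_state @ state + W_input @ velocity)

state = np.zeros(n_state)
for t in range(100):
    velocity = rng.normal(0, 1, n_input)   # the animal's small movement at time t
    state = step(state, velocity)
# After training (not shown), a readout of `state` would estimate position.
```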

The network learned to navigate effectively in a variety of environments and could transfer its learning to new environments without loss of performance. The team tested it on tasks with a specific goal, and also tested its ability to navigate mazes in a more demanding setting (the whole setup was simulated in a computer). They used Bayesian methods to assess statistical significance and fitted the data to a mixture of three different normal distributions.
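
For readers curious what "fitting the data to a mixture of three normal distributions" looks like in practice, here is a generic sketch using scikit-learn on made-up data; it is not the team's analysis code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Synthetic stand-in data drawn from three overlapping normal distributions.
data = np.concatenate([
    rng.normal(-2.0, 0.5, 300),
    rng.normal(0.5, 0.8, 300),
    rng.normal(3.0, 0.6, 300),
]).reshape(-1, 1)

# Fit a three-component Gaussian mixture and report the recovered means.
model = GaussianMixture(n_components=3, random_state=0).fit(data)
print(np.sort(model.means_.ravel()))  # roughly [-2.0, 0.5, 3.0]
```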

One notable conclusion is that as learning progressed, an intermediate layer in the deep learning network developed activity similar to that of grid neurons, which became excited when the animal was in a certain area of a grid made up of small patches of space. A detailed mathematical analysis of the network structure showed that this was some kind of simulated vector calculation. There was no reason to assume that the network would write down vectors and add them up like a mathematician would. Nevertheless, their results support the theory that grid cells are essential for vector-based navigation.

More generally, the "circuits" the brain uses to understand the outside world are, to some extent, models of that world. The structure of the brain has evolved over hundreds of thousands of years to wire in information about our surroundings. As we have seen, it also changes on much shorter timescales, learning to "optimize" the structure of its connections. What we learn is shaped by how we are educated. So if certain beliefs are instilled in us from a young age, they become deeply rooted in our brains. This can be seen as a neuroscientific vindication of the aphorism quoted earlier.

As a result, cultural beliefs are strongly conditioned by our upbringing. We identify our place in the world and our relationships with those around us through the hymns we know, the teams we support, the music we play. For most of us, the "beliefs" etched into our brains feel no different from claims that can be rationally debated using evidence. Unless we recognize the difference, however, the beliefs we hold without evidential support are likely to cause trouble. Unfortunately, such beliefs matter a great deal in our cultures, which is one reason they persist. Beliefs based on faith rather than evidence are very effective at distinguishing "us" from "them." Yes, we all "believe" that 2 + 2 = 4, so that does not set you and me apart. But do you pray to the cat goddess every Wednesday? I thought not. You are not one of "us."

This worked well when we lived in small groups, since almost everyone we met also prayed to the cat goddess, and anyone who did not would likely be warned off. When the same behavior is scaled up to whole groups, however, it can cause conflict and often violence. In today's interconnected world, it is becoming a recipe for disaster.

Populist politics has adopted the new term "fake news" for what used to be called "lies" or "propaganda." It is getting harder and harder to tell true news from false. Anyone with a few hundred dollars to spare has access to vast amounts of computing power. The widespread availability of sophisticated software is democratizing, which is a good thing in principle, but it often makes distinguishing truth from lies more complicated.

Because users can tailor the information they see to reinforce their preferences, it is increasingly easy to live in an information bubble, where the only news you get is the news you want to hear. China Miéville exaggerates this tendency in The City & the City, a science-fiction crime novel in which Inspector Borlú of the city of Besźel's homicide squad investigates a murder. He repeatedly crosses the border into the city's twin, Ul Qoma, to work with the police there. At first it feels a little like Berlin before the Wall came down, divided into East and West, but you slowly realize that the two cities occupy exactly the same geographical space. The citizens of each are trained from birth to "unsee" the other, even as they thread their way among its buildings and people. Today, many of us do much the same thing on the internet, in thrall to confirmation bias, so that all the information we receive reinforces our conviction that we are right.

Why are we so easily manipulated by fake news? Because the good old Bayesian brain runs on entrenched beliefs. Our beliefs are not like files in a computer that can be deleted or replaced with a click of the mouse. They are more like hardware, wired together. Changing the wiring pattern is hard. The more strongly we believe something, or even just want to believe it, the harder it is to change. Every piece of fake news we believe because it suits us strengthens those connections. Every piece of news we do not want to believe gets ignored.

I don't know of any good way to avoid this. Education? What happens if a child attends a school that promotes a particular belief system? What happens if subjects where the facts are clear, but clash with those beliefs, are banned from the classroom? Science is by far the best method humanity has invented for separating fact from fiction, but what happens when a government decides to deal with inconvenient facts by cutting the funding for the relevant research? In the United States, federal funds can no longer legally be used to study the effects of gun violence, and the Trump administration has considered doing the same for climate change.

Folks, the facts are not going away.

One suggestion is that we need new watchdogs. But a website trusted by an atheist is anathema to a true believer, and vice versa. And what happens if some malign corporation gains control of the websites we trust? This is not a new problem. As the Roman poet Juvenal asked in his Satires around 100 AD: who watches the watchmen themselves? The problem we face today is even worse, because a single tweet can reach the entire planet.

Maybe I'm too pessimistic. On the whole, better education does make people more rational. The "quick and dirty" survival algorithms of the Bayesian brain served us well when humans lived in caves and jungles; in an era awash with misinformation, they may no longer be fit for purpose.

About the Author

Ian Stewart is an emeritus professor of mathematics at the University of Warwick, UK, and a Fellow of the Royal Society. He has won the Royal Society's Michael Faraday Prize, the American Association for the Advancement of Science's Award for Public Understanding of Science and Technology, and the Zeeman Medal awarded jointly by the London Mathematical Society and the Institute of Mathematics and its Applications.

