"Today we are not going to answer the question of what principle lies behind intelligence, or what the mysteries of the brain are. We must first see what kind of structure produces intelligence, then build such machines to realize intelligence, and only then study the mystery of its origin."

Huang Tiejun, Director of the Beijing Zhiyuan Artificial Intelligence Research Institute
Science Experts Talk About Science | Beijing

Hello everyone. The title of my talk today is "The Rise of Machines, Unlimited Intelligence". I prefer the term "machine intelligence" to "artificial intelligence", because "artificial intelligence" suggests that humans design the intelligence, and this is not quite the case. Machines, as carriers of intelligence, will themselves keep developing and evolving, and as they develop they will drive the continuous development of intelligence.

The endless frontier of science and technology

What is intelligence? This is a very basic concept that is difficult to define. You could define intelligence as perception, or as cognition; the candidate definitions are almost endless. My definition is this: intelligence is an ability that a system acquires by obtaining and processing information, allowing the system to evolve from simple to complex. What is special about this definition? First, it says that intelligence must be a capability that appears in some system; it must have a physical system as its carrier. Second, intelligence is a capability the system acquires through obtaining and processing information. People can obtain energy from food and make their bodies stronger, but that is energy, not information, and energy alone does not develop intelligence. Intelligence develops only through acquiring information: reading books, observing the world, listening with our ears, interacting with the environment. All of these can contribute to the development of our intelligence.
With this definition, it is easy to distinguish biological intelligence from machine intelligence. At the functional or phenomenal level the two can be similar or completely different; what makes them easy to distinguish is that their physical carriers differ. The carrier of biological intelligence, including human intelligence, is the organic organism, while the carrier of machine intelligence is any of various non-biological machines, including computers. Another difference is that biological intelligence belongs to the life sciences. The object of life-science research is life, and life science is part of natural science. Physics, chemistry, biology, astronomy, geography: every natural science has a specific, clearly defined research object. Life, and especially the brain, is the most complex object in the universe that we know of so far. That is why some people call brain science, neuroscience, and cognitive science, the disciplines concerned with biological intelligence, the final frontier of natural science: if we understood the brain clearly, the remaining problems of natural science could in principle be solved. Machine intelligence, in contrast, is carried by machines, and the machines themselves are constantly evolving. In the beginning humans design the machines; as machines become more complex, their intelligence becomes more powerful. In the future machines may design themselves, so machines will continue to evolve iteratively, and the functions of machine intelligence will grow ever more numerous and more powerful. Where, then, is its limit? Biological intelligence is also evolving, but its evolution is relatively slow and has a ceiling; machine evolution, by contrast, will be extremely fast, and its intelligence is open-ended. That is why machine intelligence is the endless frontier of science and technology.
From accepting rules to autonomous learning

When people talk about artificial intelligence, they often mean intelligence achieved by writing programs and algorithms on computers. This is actually a narrow understanding of machine intelligence; programs and algorithms are just one way to realize it.

▲ Narrow machine intelligence: artificial intelligence based on computers

On this narrow view, the sixty-odd years of machine-intelligence history can be roughly divided into three stages. The first stage ran from the 1950s to the early 1970s. The basic idea at the time was to give machines rules of logic and reasoning, what we usually call programming and writing algorithms, and then let the machines execute them. This is of course one realization of intelligence, but it is obvious that humans are the designers and machines are merely the executors. After about twenty years of development, this school found that many problems could not be solved this way. The second stage, in the 1970s and 1980s, produced methods such as expert systems and knowledge engineering. The proposal was that machines should be given not only rules but also knowledge, such as "Beijing is the capital of China", and a large number of knowledge bases and expert systems were built at the time. But people later discovered that this too was problematic, because not all the knowledge in the world can be turned into entries or symbols in books; a great deal of what we know simply cannot be written down and fed to a machine. For example, when I say the word "red", a clear feeling arises in your mind, but that feeling cannot be captured as knowledge items or symbols. The third stage, from the 1980s to the present, is the stage of "learning from data".
That is, instead of having people compile rules and knowledge for machines to execute, machines and computers are allowed to find rules and patterns directly from data; this is the era of machine learning. So narrow artificial intelligence can be roughly divided into these three stages.

How do machines gain intelligence?

The narrow artificial intelligence described above largely reflects the thinking of the symbolist school: represent every aspect of intelligence with symbols, then have machines manipulate them. In fact there are two other very important schools. The second is connectionism, also known as the neural-network school. The third is behaviorism, also known as the cybernetics school. To put the differences between these three schools of thought figuratively: symbolism says machines should be able to think; connectionism says that to realize intelligence there must be a physical carrier such as a brain or nervous system, so a brain must be built; and behaviorism holds that a brain without a body cannot interact with the environment, and therefore cannot form or develop intelligence. The first school, symbolism, describes the functions and phenomena of intelligence with symbols. In fact, much of what we learn in class has already been turned into symbols by teachers or authors, and we absorb and learn it; memorizing a law or a rule of deduction is symbolism in the broad sense. Applying symbolism to intelligence means turning those symbols into code, programs, and algorithms for computers to execute. Symbolism has achieved a great deal. Let me give two representative examples.
One is that even before the concept of artificial intelligence existed, there was a program called the "Logic Theorist" that could prove many mathematical theorems and was very famous at the time. When the concept of artificial intelligence was coined in 1956, it was the only artificial intelligence system that could actually run.

▲ Wu Wenjun, a famous mathematician and academician of the Chinese Academy of Sciences

Another milestone was created by the Chinese scientist Wu Wenjun. In 1977 he proposed Wu's method, with the claim that every theorem that can be proved by machine can be proved by this method. The claim carries a premise: "theorems that can be proved by machines". In fact there are a large number of theorems that machines cannot prove, as well as theorems that are outright unprovable. The second school of thought is behaviorism, which also predates the concept of artificial intelligence.

▲ Walter's mechanical tortoise, 1948

In 1948, for example, the inventor Walter built the tortoise-like machine in the picture above. It used a light sensor and analog circuitry: the light sensor detected obstacles, and when the machine met one it would turn left and right to find a way through. It also contained a circuit that simulated the conditioned reflexes of a nervous system, so that when its power ran low it could return to a socket to recharge. We are all familiar with its descendant, which many families now own: the robot vacuum. The idea predates artificial intelligence itself. Of course, there were no computers or chips, no programs or algorithms in that machine; it had only analog circuits inside, and it gained its intelligence through behavior and interaction with the environment. In recent years we have often seen very impressive robots, such as those made by Boston Dynamics, which can move through very complex environments.
For example, the robot in the picture above can jump onto a high platform. The design concept behind this type of robot is also mainly behaviorist. These behaviors are not programmed by humans writing rules such as "step with the left foot first, then the right, and act in such-and-such a way when meeting an object of a certain height" for the robot to execute. They are trained. And where there is training there is failure: what we see are the successes. To master that jump, the robot actually trained for a long time and broke its legs countless times along the way. The third school is connectionism. As mentioned above, realizing intelligence requires a physical carrier, which is itself an objective physical existence, and according to connectionism that carrier should be a neural network. Since the main carrier of biological and human intelligence is the nervous system centered on the brain, when we construct a machine-intelligence system we should likewise construct an artificial neural network. Building such networks has long been the object of experimentation by all kinds of inventors. Even today, our understanding of the biological nervous system remains very limited. We would like to learn from biological nervous systems when constructing artificial neural networks, but without the blueprint of the biological system we can only invent networks of our own. The figure above shows how, across the many neural networks that have been invented, neurons are connected into various structures. The purpose of constructing a neural network is to generate intelligence. So what kind of neural network can generate intelligence, and how? These are the problems the connectionist school has to solve.
Machines cannot escape the law of survival of the fittest

Around the 1980s, several research groups independently proposed a similar idea, which today we call the backpropagation algorithm.

▲ Backpropagation on a multi-layer neural network (around 1985)

As you can see, the network above is actually very simple. It consists of multiple layers: each vertical column is one layer of neurons, and adjacent layers are fully connected, meaning every neuron in one layer connects to every neuron in the next. This kind of structure is easy to imagine and very simple. Biological neural networks are far more complex, but their structure is still far from understood, so for now humans can only design simple artificial networks like this according to their own understanding. Networks of this form are still widely used today. How can such a simple structure generate intelligence? At the start, all the connections in the network are random. The left side is the input and the right side is the output. Suppose you want it to perform face recognition: you feed an image of a face in on the left and want it to output "this is Zhang San" or "this is Li Si" on the right. With random connections and no teaching process, it cannot produce the correct output. Say we want output T1 to equal 1 for Zhang San and T2 to equal 1 for Li Si. When we input Zhang San's image, T1 will almost certainly not equal 1. That is fine: since we want T1 to equal 1, we can measure how far it falls short, a difference we can denote Δ. To push T1 toward 1, the algorithm propagates this error backward through the network layer by layer, adjusting the strength of each connection. A round of such adjustment follows every round of training.
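The adjustment loop just described can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the historical 1985 implementation: a tiny two-layer fully connected network with random initial weights, toy inputs, and placeholder targets (the "Zhang San"/"Li Si" labels from the text become targets of 1 and 0), with the output error Δ propagated backward to adjust both layers of connections.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 input patterns; target output 1 for the first two ("Zhang San")
# and 0 for the other two ("Li Si"). Names are placeholders from the text.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[1.], [1.], [0.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random initial connections, as in the text: two fully connected layers.
W1 = rng.normal(0, 1, (2, 4))   # input  -> hidden
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output

lr = 1.0
for step in range(2000):
    # Forward pass
    H = sigmoid(X @ W1)          # hidden-layer activations
    Y = sigmoid(H @ W2)          # network output
    delta = T - Y                # the difference "Δ" between target and output

    # Backward pass: propagate the error layer by layer and adjust each
    # layer's connection strengths.
    g2 = delta * Y * (1 - Y)           # output-layer error signal
    g1 = (g2 @ W2.T) * H * (1 - H)     # error pushed back to the hidden layer
    W2 += lr * H.T @ g2
    W1 += lr * X.T @ g1

print(np.round(Y.ravel(), 2))  # trained outputs approach the targets 1, 1, 0, 0
```

After a couple of thousand rounds of this forward-then-backward adjustment the random network's outputs converge toward the targets, which is exactly the "round of adjustment after each training" process the paragraph describes.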
After repeated training and adjustment, the machine learns to recognize. I have worked on this kind of neural network myself; even for simple problems, getting a decent result after three days and three nights of training counted as success. So there are no explicit rules behind it for how to recognize faces: the network acquires its function gradually, over many rounds of adjustment. Essentially the same idea lies behind today's neural-network training. The results we obtain come from repeated attempts and the survival of the fittest. Nature trains our brains the same way: in the billions of years of life's development, countless organisms died in the course of trial and error.

▲ In 2006, Geoffrey Hinton published a paper on deep neural networks in Science

In 2006 the famous scientist Geoffrey Hinton proposed an improved approach: with some methodological improvements to the networks described above, they could achieve good recognition results. We call it deep learning. What is deep learning? "Depth" refers to the number of layers in the network: the example I just gave had only a few layers, while today's networks have hundreds or even thousands. "Learning" is machine learning: trying over and over and continually adjusting parameters until the answer approaches what humans want. Deep learning, then, is the process of extracting regularities through repeated attempts on multi-layer neural networks. It is a general-purpose method that can solve many problems. If you input a face image, it can eventually identify Zhang San or Li Si: that is face recognition. If you input speech, it can eventually pick out the words one by one: that is speech recognition. Data of other media types can likewise have the structure behind them discovered this way. Being able to discover the patterns and structure behind complex phenomena is a basic characteristic of intelligence.
Deep networks can do exactly that: whatever the type of data, as long as there is structure inside it, a deep network can find that structure through repeated attempts. This is the basic method of deep learning.

How can machines defeat human masters?

A famous example of deep learning is AlphaGo. In 2016 the AlphaGo system defeated Lee Sedol. How did it do that? Before the match, many people denied the possibility, believing computers could not defeat humans. The thinking behind that conclusion was symbolist: the machine would search for moves according to rules it had been given.

▲ AlphaGo gains "board sense" through visual perception

Computers are dumb but calculate quickly. Calculating quickly does not solve the problem by itself, because Go has too many possibilities: so many that no computer today could enumerate all the move sequences, even given tens of thousands of years. Yet humans cannot play out every possible game of Go either, and they still manage to distill experience from a limited number of games. How do humans extract regularities from limited play? That is what AlphaGo learned to do. It treats the board as an image. The board is actually not very large, with only 361 points, and each point has only three states: black, white, or empty. So it is a very simple image, much simpler than a human face. The machine can look at an enormous number of such images, and because some positions eventually lead to a win and others to a loss, and this outcome is unambiguous, you can feed these images to the machine along with the lesson: "when this position appears, the probability of winning is a bit higher; when that one appears, it is a bit lower." Although the differences in win probability are small, with enough study the machine gradually grasps the regularities.
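The "board as image" idea in the paragraph above can be made concrete with a small sketch. This is an illustrative encoding only, assuming a bare 19×19 grid; AlphaGo's real input planes and value network are considerably more elaborate. It shows the shape of the "image" that a network would score with a win probability.

```python
import numpy as np

EMPTY, BLACK, WHITE = 0, 1, 2

# A Go board is just a 19x19 grid: 361 points, three states per point.
board = np.zeros((19, 19), dtype=np.int8)
board[3, 3] = BLACK
board[3, 15] = WHITE
board[15, 3] = BLACK

# One common way to present such a position to a neural network:
# one binary "image" plane per state, stacked like color channels.
planes = np.stack([(board == s) for s in (BLACK, WHITE, EMPTY)]).astype(np.float32)

print(board.size)    # 361 points, as in the text
print(planes.shape)  # (3, 19, 19): three planes over the board image
```

A network trained on millions of such arrays, each labeled with the eventual game outcome, is what lets the machine associate positions with slightly higher or lower win probabilities.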
So when AlphaGo plays Go, its neural network learns by looking at board positions and finding the regularities in them, the same process by which humans find patterns in complex data and situations. The board sense AlphaGo acquires is of the same kind as a human's; there is no difference in kind between what the machine does and what a biological being does. What matters is that the machine has enormous computing power and can see far more positions and games. AlphaGo played some 30 million games against itself. If a human lived to 100, that is a lifespan of 36,500 days, so 30 million games works out to roughly 800 games of Go per day. No human can play 800 games a day from birth to the age of 100, but a machine can do it in a few months. The board sense it obtains draws on far richer data than any human could gather, so it found many lines no one had tried before, and beating humans was the natural result.

▲ In January 2019, DeepMind's AlphaStar won every game against two professional players, one of whom was MaNa, a top Protoss player ranked in the world's top 10

Besides board games, machines can do many other things, such as play video games. In games like StarCraft it is also perfectly clear who wins and who loses. Although a game offers many scenarios and choices, it is in the end a setting with clear rules, so the machine can likewise search the space of possible strategies for more advantageous ones and continuously improve its ability. Using similar methods, machines defeated top human players in 2019.

▲ On June 21, 2020, the Qiyuan AI "Star Commander" defeated Huang Huiming (TooDming), national champion of StarCraft I/II, and Li Peinan (Time), three-time champion of the Gold Finals, both by scores of 2:0

This was a competition held in Beijing in June 2020, in which two top Chinese StarCraft players played against a machine.
The machine this system ran on had only one-tenth the computing power of comparable foreign systems, but through improved algorithms it reached a comparable level. I was there that day; it defeated the two top human players with almost no suspense.

Why do we need to imitate the brain to make intelligent machines?

From this perspective, machine intelligence seems to be developing very fast, almost unstoppably. Machines can play Go and StarCraft and even make more complex decisions. Will machines surpass humans if this trend continues? Will artificial intelligence simply keep brightening as computing power, data, and models grow ever larger? Actually, no. Take face recognition, the most successful application of artificial intelligence: many companies sell face-recognition products, and we encounter them often in daily life. Have today's artificial intelligence systems, then, solved the face-recognition problem? Humans have limited capacity here: on average, a person can distinguish about 2,000 faces. Fortunately, we do not need to know more people than that in a lifetime, so 2,000 is enough for us. In some respects machines do beat humans at face recognition: given one photo and the question "where has this person appeared in this city of tens of millions?", a machine is indeed better. Yet these seemingly powerful systems still lag far behind humans on some basic tasks. In the picture above, for example, the machine detected the person in striped clothes, but could not detect that the person with a picture taped on their body was a person at all.
Without the taped-on picture, the person would be easy to detect and identify; once the picture is added, the machine has no idea there is a person or a face in the image. So although the machine seems very powerful, it has great weaknesses, and a large gap remains between it and human vision. Why does this happen? The reason is actually very simple: any intelligence has a carrier. Deep learning relies on artificial neural networks, and biological intelligence relies on biological neural networks. Today's artificial neural networks are still dwarfed by biological ones.

▲ The human visual system

As this picture shows, our visual system sits at the back of the head, with signals from the eyes carried there by the optic nerve fibers. The visual system occupies about one-fifth of the cerebral cortex, and the complexity of its neural network far exceeds that of the artificial network in any face-recognition system today. It is therefore no surprise that biological vision, with such a strong physical basis, has such strong capabilities. If you want a visual system that can rival human vision, you have to build an artificial neural network comparable to the neural network underlying human vision. This idea is not new; people discussed it before the concept of artificial intelligence came into being. Let me give two examples.

▲ Von Neumann

The computers we use today are called von Neumann machines because von Neumann defined their architecture. In the era when he proposed that architecture, he also advanced a view about vision: the simplest complete model of the biological visual system is the visual system itself. Attempting to simplify it only makes things more complicated, not simpler.
Therefore, to achieve a given kind of intelligence, there must be a machine with a corresponding structure to realize it.

▲ Turing

Turing is an even more famous scholar and computer scientist; the basic model of today's computers was proposed by him. In 1950, Turing published a paper that is also considered the first paper on artificial intelligence, since the concept of artificial intelligence only appeared in 1956. In this paper, Turing made a very clear judgment: a truly intelligent machine must have the ability to learn. The way to build such a machine is first to build one that simulates a child's brain, and then to educate and train it; only by training a machine that can simulate a child's brain can the expected intelligence be produced. Why simulate the brain? Why must we work from the brain? Humans can certainly keep inventing neural networks and solving practical problems with them, but in the ultimate sense, the most economical route is to take the human brain as the model and copy from it, as the Chinese idiom puts it, to draw the ladle with the gourd as the model, and the "ladle" here is machine intelligence. Why use the biological brain as the model? Because it is a structure proven effective by 3.5 billion years of evolutionary trial and error. Today we need only work out this structure and use it to develop machine intelligence. The human brain is remarkably capable in many respects: although it consumes only about 25 watts, it can do much that our current mainframe computers cannot. The cost of evolving the human brain has already been paid by the Earth, so it is a low-cost, ready-made reference.

First build the brain, then study the secrets of intelligence

There may be an objection here. Many experts say: "Easier said than done. Do you actually know the mechanism behind the brain?" As I just said, the brain is the last frontier of natural science.
When will the secrets of the brain be cracked? It is genuinely unpredictable; it is hard to say whether it will take hundreds of years or thousands. But "machine intelligence" splits into "machine" and "intelligence". In building machine intelligence, the first step is to build the machine, and the second is to produce the intelligence. We should not fixate on intelligence alone; first build a machine capable of producing powerful intelligence. We should first care about the machine, the brain, the structure that gives rise to powerful intelligence, rather than agonizing over the brain's thinking and the mechanism of intelligence. How it produces intelligence is a question for later. To make this less abstract, let me give an example. If you believed that humans had to understand the principles of flight before building a flying machine, humans would still be unable to fly today, because we have not yet understood all the principles of flight even now. The actual history of the airplane's invention went like this: in 1903 the Wright brothers invented the airplane, and no theory of flight existed at the time. Not until 1939 did Qian Xuesen and von Kármán truly establish a theory of flight. Those thirty-odd years spanned the First World War and ran up to the outbreak of the Second, and the airplane had already played a great role. So what happened in between? The Wright brothers exploited the technological progress of the industrial age, relying on engine power and trial after trial to get a mechanical device into the sky. Neither the Wright brothers nor the world understood why it could fly, but fly it did. One man, von Kármán, firmly refused to believe it. In 1908 he bet others that "it is impossible for humans to make a huge device fly through the sky with people still on it."
It was only after he saw the fact with his own eyes in Paris that he resolved to study why such a heavy device could fly. It took him more than thirty years to work out the principles of flight and aerodynamics. After 1939, with the principles of aerodynamics in hand, we could build better, bigger, and more capable airplanes. But the first airplane was not built from principles; it was a great invention of mankind. The history of science and technology holds many such events. The compass, for example, was invented in China's Song Dynasty. Nobody then knew about electromagnetism, but that did not prevent the invention of the compass; without the compass there would have been no great voyages, and it is hard to say whether science and technology would have progressed to where they are today. So do not be paralyzed by principles, thinking you must understand the principle before doing the thing. That is a major ideological obstacle to innovation. Intelligence is the same. Today we are not going to answer what principle lies behind intelligence or what the secret of the brain is. We need first to see what kind of structure produces intelligence, then build such a machine to realize intelligence, and then study the secret of how it arises. Here is a precious photograph of von Kármán's teacher, von Kármán himself, and his student Qian Xuesen. In his later years, von Kármán summed things up in one sentence: "Scientists discover the existing world; engineers create the future world." The object of a scientist's research must exist; studying an object that does not exist is metaphysics. What engineers create may rest on scientific principles, but the greatest engineers are those who create without any principle to lean on, and that is a great invention from 0 to 1. Discovery and invention are equally important: sometimes discovery guides invention, and sometimes invention precedes scientific discovery.
What machine intelligence needs to do now is discover the structure of biological nervous systems, not the secrets of biological intelligence. So what structures behind biological intelligence do we want to uncover? Scientific research uses only a handful of common model animals. The simplest is the nematode, with only 302 neurons; on a bit more than 300 neurons it manages to survive, reproduce, sense, and move. Slightly more complex is the zebrafish, also widely used today. A zebrafish has only tens of thousands of neurons at birth, but as it grows the number rises into the millions. Zebrafish are transparent, so the activity of these neurons can be observed in detail with optical and electronic instruments. More complex still is the fruit fly, and then the mouse, a mammal. Above that is the marmoset, the primate with the smallest brain, around 1 billion neurons. The most complex is the human, whose brain has on the order of 80 to 90 billion neurons, a figure often rounded to 100 billion. Biological neural networks of different complexity give rise to a variety of intelligent behaviors. When will we be able to map a biological brain and use it as the blueprint for creating machine intelligence? Opinions differ. Let me quote the view put forward at the Global Brain Project Symposium in April 2016: within 10 years, we hope to complete the mapping of the brains of animals including, but not limited to, fruit flies, zebrafish, mice, and marmosets. In other words, within roughly the next 10 years, the mapping of biological brains will reach the primates. When will it reach humans? 20 years? 30? It is hard to give an answer now. But in general, all that is required is to use the most advanced technology to map out a complex object.
There is no question of whether it can be done, only of whether the technical means are yet sufficient. Although the exact timing is hard to state, it can probably be done within a few decades, just as sequencing a human genome was extremely expensive at first but today costs only a few hundred yuan. Advances in technology will bring great progress in mapping brain structure.

Towards general artificial intelligence

If biologists can map the structure of biological nervous systems, the question facing people in information science and artificial intelligence is whether they can follow that map and construct an electronic brain, that is, create an intelligent machine. The world is in fact making rapid progress here. In 2019, China began construction of a major national scientific facility in Huairou, Beijing: the Multimodal Cross-Scale Biomedical Imaging Facility, whose main goal is to map the neural network structure of the brain. The FAST radio telescope in Guizhou looks out at the great universe; this system looks into the small universe of living things. Even short of a human brain, reproducing the hundreds of thousands of neurons in a fruit fly's brain would be very useful. Today's drones look impressive, but they are still far behind the fruit fly. If we could construct a fruit fly brain, the corresponding electronic devices could address many practical problems. On driverless cars, some say success is a few years away, while others say it may not come for decades. The core issue is whether the driving brain can perceive the environment sensitively. If we could reproduce a biological brain, the picture would change: a mouse's ability to perceive complex space is already far stronger than that of today's driverless cars.
If we could build a mouse brain, it would be enough to handle driverless driving. So if we can simulate these brains with high precision and then train intelligence on top of them, we can solve many problems in artificial intelligence, and solving them step by step will mark historic progress towards general artificial intelligence.

All artificial intelligence today is weak artificial intelligence, also called narrow or special-purpose artificial intelligence: each such system can do only one thing. A system that can do anything is called a general artificial intelligence system, also called strong artificial intelligence. Humans, of course, are general intelligence systems: we are not limited to one skill, and as long as we learn, we can acquire all kinds of ways to solve problems. Our goal for the future is to create a general artificial intelligence system.

When such a system can be built is also highly controversial, and opinions differ widely. In January 2015, at a conference on AI safety, the experts present were surveyed on when general or strong AI would appear. Ordering their answers: some said 10, 20, 30, or 50 years; some said never. The midpoint of the predictions was 2045, 30 years after 2015, as the time when this kind of intelligence would be created. I did not attend that meeting, but my own assessment is similar. Before that conference, I published an article titled "Can Humans Create a Super Brain?" My original title was "Creating a Super Brain", but the editor, fearing it was too absolute, turned it into a question. The article discusses how we might construct an electronic brain to produce strong artificial intelligence, and this is also the kind of work I do in my own research.
Electronic eyes 1,000 times faster than human eyes

I said just now that replicating a whole brain will take decades, so what can we do within a few years? We can replicate part of the brain: specifically the eye, the visual part. The eyeball contains a complex neural network with more than 100 million sensory neurons, of which about 6 million are responsible for fine vision. These are the neurons we use when we stare at something, in the region called the macula, or fovea. That is the area we simulate.

To simulate this area, we must understand what goes on inside it, so we analyze the structure and function of each neuron in detail and then model it on a computer to reproduce it. The picture above shows the structure of one ganglion cell and the process by which it emits nerve impulses when stimulated. My research group analyzes, models, and reproduces neurons one by one; often a doctoral student spends several years on a single neuron. The human eye contains roughly 60 to 70 types of neurons, and we have modeled about 10 of the types in the fovea. We connect these model neurons according to the biological wiring and let them sense light stimulation, and we can watch the signals propagate from layer to layer. In the picture above, neurons stimulated by an image light up yellow, that is, they emit nerve pulses, and then interact with each other to represent the visual stimulus.

Beyond understanding the details of the biological nervous system, we also have to build an electronic system that reproduces the organism's functions, so we designed a chip. The chip is greatly simplified: all the calculations behind the full-detail animation above were completed on the Tianhe-2 supercomputer, while our chip implements only one core function.
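The modeling loop described above, characterize a neuron and then reproduce its spiking behavior in software, can be illustrated with the simplest textbook spiking model, the leaky integrate-and-fire neuron. This is a minimal sketch for illustration only, not the group's actual retinal model; the time constant, threshold, and input values are arbitrary assumptions.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=0.02,
               v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: leak toward rest, integrate input,
    emit a spike and reset when the membrane potential crosses threshold.
    All parameters here are illustrative, not fitted to biology."""
    v = v_rest
    spike_times = []
    for i, current in enumerate(input_current):
        # Discretized membrane equation: decay plus driven input.
        v += dt / tau * (-(v - v_rest) + current)
        if v >= v_thresh:
            spike_times.append(i * dt)  # record when the pulse fires
            v = v_reset                 # reset after firing
    return spike_times

# A constant stimulus produces a regular train of nerve pulses,
# loosely analogous to a ganglion cell responding to steady light.
times = lif_neuron(np.full(1000, 2.0))  # 1 s of input at dt = 1 ms
print(f"{len(times)} spikes in 1 s")
```

Under constant input the model fires at a fixed rate; real retinal neurons are far richer, which is why each cell type took years to characterize.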
With the chip made, we can test what this electronic eye can see. The functions it realizes are no more complicated than those of the human eye; they are just a subset of them. But there is one difference: it is an electronic system. Biological nervous systems are very complex, but they are slow. The number of nerve impulses transmitted per second between any two neurons in an organism is usually a few to a few dozen, and never more than about 100. The signals between our electronic neurons can be much faster: our first-version chip reached 40,000 pulses per second. Taking 40 pulses per second as a typical biological rate, that is a difference of 1,000 times.

This 1,000-fold speed difference means the electronic eye can see high-speed motion. When a fan is spinning, the human eye cannot see the details on the blades: put a few letters on the blades and you can read them when the fan is still, but not when it spins. Why? Because the biological eye is a slow system. Each neuron sends the brain only a few pulses per second, so of course it cannot resolve a fan turning dozens of times per second. The electronic eye is 1,000 times faster than the human eye; to it, this so-called high speed is just a very slow process, so it sees every detail of the spinning fan.

▲ Bionic eye high-speed camera and recognition system

To demonstrate that the electronic eye can see, we bought a laser and built a device, the square structure in the picture above. What can this laser do? The footage is a scene slowed down 1,000 times; played in real time, no one can see it clearly. After the fan starts spinning, our algorithm runs on a computer, and you can choose any letter.
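The speed gap above is simple arithmetic worth making explicit. The 40 and 40,000 pulses-per-second figures come from the talk; the fan's rotation rate and blade count below are illustrative assumptions.

```python
# Pulse rates from the talk; fan parameters are illustrative assumptions.
bio_rate = 40        # pulses/s, typical biological neuron
chip_rate = 40_000   # pulses/s, first-version electronic neuron chip
speedup = chip_rate / bio_rate
print(f"speedup: {speedup:.0f}x")

# A hypothetical fan at 30 rev/s with 3 blades sweeps a blade past any
# fixed point 90 times per second -- more events than a ~40 Hz eye can
# sample, but trivially slow for a 40 kHz electronic eye.
rev_per_s, blades = 30, 3
passes_per_s = rev_per_s * blades
print(f"blade passes per second: {passes_per_s}")
print(f"biological samples per blade pass: {bio_rate / passes_per_s:.2f}")
print(f"electronic samples per blade pass: {chip_rate / passes_per_s:.1f}")
```

With fewer than one biological sample per blade pass, the letters blur; with hundreds of electronic samples per pass, each pass is effectively frozen.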
"PKU" above is the abbreviation of Peking University. If you want the laser to hit a certain letter, say "K", then after you press the K key, the laser fires a series of dots directly onto the fan blade carrying the "K", proving that the electronic eye sees in real time. The cost of this system is actually very low: it consists of a camera we designed ourselves and a laptop. In the future, the laptop will shrink to a chip, yielding a small device that achieves ultra-high speed. This is the benefit of biomimetic design: with a traditional computer and camera, this high-speed task could not be done without a huge cabinet of hardware.

Is a man-versus-machine fight possible?

We tend to imagine machines and robots as human-like, which is a misconception. Hollywood movies, for example, often show robots fighting human heroes, which overestimates humans. A machine's eyes are 1,000 times faster than ours, perhaps even 10,000 times, and its mechanical movements are likewise far beyond ours. The spectacle of machines and humans fighting each other will not occur: fire a bullet at a machine and it can easily catch it; there is no need for it to fight you. Before you raise your gun, the robot can knock you to the ground with a single slap. Such fights are humans projecting their own capabilities onto machines; in reality, as machines develop, their performance will far exceed ours.

Nikola Tesla said in 1896: "I can think of no greater shock to the human mind than that of an inventor seeing an artificial brain become a reality." If the brain is realized artificially, electronically or photoelectrically, the world will undergo earth-shaking changes. How humans will coexist with this ever more powerful intelligence is a particularly challenging problem.
On the one hand, this superintelligent neural network inherits the structure of the human brain, so it is compatible with us. Although it thinks much faster than we do, it is still an electronic brain; compared with an alien intelligence, we at least have some possibility of communicating with such a superintelligence. The powerful machines we create are, in a sense, our descendants, the descendants of mankind. On the other hand, superintelligence is many times faster than us, and we simply cannot keep up. Musk is working on brain-computer interfaces, reasoning that since humans cannot catch up with machines, why not connect the biological nervous system to the machine and improve together? It sounds like a good idea, but it is like the relationship between a car and a horse-drawn carriage: the car is ten times faster, yet it cannot pull the carriage along; the two are not synchronized at all and cannot work together.

Therefore, once machines that surpass humans appear, we will face enormous challenges, and we must think about how to coexist and develop together. Many people are already thinking about this. In 2015, the AI safety conference predicted that such machines would appear around 2045. In 2019, another conference, Beneficial AGI, was held, its theme being the hope that when this kind of intelligence appears, it can coexist peacefully with humans. I attended that conference. In 2020, Professor Stuart Russell, one of its participants, gave me a preprint of his then-forthcoming book "Human Compatible", which proposes one way for humans to coexist with machines whose performance far surpasses ours. It is only one proposal; this remains an open question that humans must genuinely confront. In the coming decades it may well be our greatest challenge, and I hope everyone will pay attention to it and think about it. Thanks!
- END -

The articles and speeches represent only the authors' views, not the position of the Gezhi Lundao Forum.

Gezhi Lundao, formerly "SELF Gezhi Lundao", is a science and culture forum launched by the Chinese Academy of Sciences, co-sponsored by the Computer Network Information Center of the Chinese Academy of Sciences and the Bureau of Science Communication of the Chinese Academy of Sciences, and hosted by China Science Popularization Expo. It is committed to the cross-disciplinary dissemination of extraordinary ideas, exploring science, technology, education, life, and the future in the spirit of "investigating things to attain knowledge". Official website: self.org.cn; WeChat public account: SELFtalks; Weibo: Gezhi Lundao Forum.