Tsinghua University has developed the world's first hybrid (heterogeneous fusion) brain-inspired computing chip, the Tianjic chip ("Tianji" in Chinese), and a driverless bicycle driven by the chip has appeared on the cover of the latest issue of Nature. The research was conducted by Professor Shi Luping's team at Tsinghua University's Brain-Inspired Computing Research Center, based in the Department of Precision Instruments, and demonstrated an autonomous bicycle powered by the new artificial intelligence chip.
The paper based on this result, "Towards artificial general intelligence with hybrid Tianjic chip architecture," was published as the cover article of the August 1 issue of Nature, marking China's first Nature paper spanning the two major fields of chips and artificial intelligence.

Tianjic chip 5×5 array expansion board

At present there are two main approaches to developing artificial general intelligence: one is based on neuroscience and tries to simulate the human brain; the other is guided by computer science and has computers run machine learning algorithms. Each has its advantages and disadvantages, and integrating the two is widely regarded as one of the most promising solutions; a computing platform that supports both will be key to that integration. The new chip combines the two technical routes, and this integrated technology is expected to enhance the capabilities of both kinds of systems and advance research on artificial general intelligence.

The hybrid chip, named "Tianjic," has multiple highly reconfigurable functional cores that can simultaneously support machine learning algorithms and existing brain-inspired computing algorithms. The researchers demonstrated the chip's processing capabilities with a self-driving bicycle system, a heterogeneous and scalable demonstration platform for artificial general intelligence development. Using a single Tianjic chip, the bicycle demonstrated self-balancing, dynamic perception, target detection, tracking, automatic obstacle avoidance, obstacle crossing, voice understanding, and autonomous decision-making. During testing, the unmanned bicycle could not only recognize voice commands and maintain balance, but also detect and track pedestrians ahead and automatically avoid obstacles.
Voice control: "turn left"

Professor Shi Luping said this is only a very preliminary study, but it may promote further development of computing platforms for artificial general intelligence. Below, New Intelligence offers a detailed interpretation of this breakthrough research and an interview with Shi Luping's team.

It is generally believed that there are two routes to artificial general intelligence (AGI): computer-science-oriented and neuroscience-oriented. Because the two paths differ fundamentally in ideas, concepts, and implementation, they rely on different, mutually incompatible development platforms, which has greatly hindered AGI research. There is an urgent need for a general platform that supports both approaches. The Tianjic chip developed by Shi Luping's team achieves this, providing a hybrid, collaborative development platform for AGI technology.

Tianjic chip and test board

The Tianjic chip adopts a many-core architecture with quasi-dataflow control, reconfigurable functional core modules, and a hybrid coding scheme. It can not only run machine learning algorithms from computer science, but also readily implement neural computing models inspired by brain principles, along with a variety of coding schemes. With just one chip, multiple algorithms and models can be processed simultaneously in the driverless bicycle system to achieve real-time target detection, tracking, voice control, obstacle avoidance, and balance control. This research is expected to open new paths toward more versatile hardware platforms and promote the development of AGI technology.

Given current advances in machine learning and neuroscience, an AGI system should have at least the following characteristics:
These features need to run efficiently on a universal platform, that is, one able to support mainstream artificial neural networks (ANNs) as well as neuroscience-inspired models and algorithms within a unified framework.

Figure 1: A hybrid route to AGI development

To support these capabilities, the team developed a cross-paradigm computing platform that accommodates neural networks from both computer science and neuroscience (Figure 1) and is compatible with a variety of neural models and algorithms, especially biologically based ones such as spiking neural networks (SNNs). In general, ANNs and SNNs differ in their information representation, computational principles, and memory organization (Figure 2a). The biggest difference is that an ANN processes information as precise multi-bit values, whereas an SNN uses binary spike trains. An implementation comparison of ANN and SNN neurons is shown in Figure 2b. On the other hand, ANN and SNN neurons also share some similarities, which leaves room for fusing the two models.

By comparing ANN and SNN models in detail, the team parsed their computational models and mapped them onto the corresponding neuronal functional modules, namely axon, synapse, dendrite, and soma, thereby constructing a unified cross-paradigm neuron scheme (Figure 2c). Synapses and dendrites are shared by both schemes, while axons and somas switch function through independent reconfiguration. Figure 2d is a complete schematic of a single functional core (FCore), including the axon, synapse, dendrite, soma, and routing blocks. To achieve deep fusion, almost the entire FCore is reconfigurable, giving high utilization in different modes. An FCore can cover the linear integration and nonlinear transformation operations used by most ANNs and SNNs.
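The ANN/SNN contrast described above can be sketched in a few lines of Python: an ANN neuron computes a weighted sum of multi-bit values and applies a nonlinear activation, while an SNN neuron integrates binary spikes over time and fires when a threshold is crossed. This is only an illustrative sketch; the leaky integrate-and-fire model and its parameters (decay, threshold) are our assumptions for demonstration, not the specific neuron model implemented in the Tianjic FCore.

```python
# Illustrative contrast between an ANN neuron and a (simplified) SNN neuron.
# The LIF parameters below are assumptions for demonstration only.

def ann_neuron(inputs, weights, bias=0.0):
    """ANN neuron: weighted sum of precise multi-bit values + ReLU activation."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, s)  # precise real-valued output

def snn_neuron(spike_trains, weights, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire neuron: integrates binary spikes over time,
    emits a spike (1) and resets when the membrane potential crosses threshold."""
    v = 0.0
    out_spikes = []
    for spikes_t in spike_trains:  # one time step: a list of 0/1 inputs
        v = decay * v + sum(s * w for s, w in zip(spikes_t, weights))
        if v >= threshold:
            out_spikes.append(1)
            v = 0.0  # reset after firing
        else:
            out_spikes.append(0)
    return out_spikes  # binary spike-train output

value = ann_neuron([0.5, 0.8], [1.0, 0.5])                 # real number
spikes = snn_neuron([[1, 0], [1, 1], [0, 1]], [0.6, 0.6])  # -> [0, 1, 0]
```

Both neurons share the weighted-summation (synapse/dendrite) stage, which is exactly the similarity the unified FCore exploits; only the input/output coding (axon/soma) differs.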
The FCores on the chip are arranged in a two-dimensional grid, as shown in Figures 2e and 2f. The Tianjic chip and its back-end layout are shown in Figure 3a. The chip consists of 156 FCores, containing approximately 40,000 neurons and 10 million synapses. It is manufactured in a 28-nanometer semiconductor process and has an area of 3.8 × 3.8 mm². The chip area occupied by each module, including the axon, synapse, dendrite, soma, router, controller, and other overheads, is shown in Figure 3b. Because resources are reused, the area devoted to compatibility between SNN and ANN modes accounts for only about 3% of the total. The power consumption breakdown of an FCore is shown in Figure 3c.
Figure 3: Schematic diagram of chip evaluation and modeling summary

Tianjic supports a variety of neural network models, including neuroscience-based networks (such as SNNs and other biologically inspired models) and computer-science-based networks (such as MLPs, CNNs, and RNNs). Figure 3d shows test results for different network models on the Tianjic chip versus a general-purpose processing unit. As shown in Figure 3e, a hybrid neural network with dendritic relays can break through the fan-in/fan-out limitations of traditional neuromorphic chips and avoid the accuracy loss of pure SNNs (an 11.5% improvement). The additional overhead of this hybrid mode is negligible because Tianjic implements the heterogeneous conversion natively within the FCore. Tianjic can also be used to explore more biologically meaningful cognitive models (Figure 3f).

Voice control and automatic obstacle avoidance: this driverless bicycle is remarkable

To demonstrate the feasibility of building a brain-inspired cross-paradigm intelligent system, the team built a heterogeneous, scalable demonstration platform for artificial general intelligence development around a driverless bicycle, deploying and running multiple dedicated networks in parallel on a single Tianjic chip. The bicycle carries a variety of algorithms and models that perform tasks such as real-time object detection and tracking, voice command recognition, acceleration, deceleration, obstacle avoidance, balance control, and decision-making (Figure 4a).

Self-driving bicycle demonstration platform

Achieving these tasks required overcoming three main challenges. First, the bicycle must detect and smoothly track moving targets in an outdoor natural environment, cross speed bumps, and automatically avoid obstacles when necessary.
Second, it must respond in real time to balance control, voice commands, and visual perception, generating motor control signals that keep the bicycle moving in the right direction. Third, it must integrate multiple streams of information and make rapid decisions.

Figure 4: Test results of the driverless bicycle on the Tianjic multi-model integration platform

To accomplish these tasks, the team developed several neural networks: a CNN for image processing and object detection, a CANN (continuous attractor neural network) for human target tracking, an SNN for voice command recognition, an MLP for posture balance and direction control, and a hybrid network for decision control. Thanks to the chip's decentralized architecture and arbitrary routing topology, the Tianjic platform runs all of these models in parallel and provides seamless communication between them, enabling the bicycle to complete its tasks. Figure 4c shows the output signals in response to different voice commands. Figure 4d shows the bicycle's output control signals while tracking, avoiding obstacles, and riding an S-shaped curve. Figure 4e shows the learned vehicle posture and steering control at different speeds, based on physical measurements.

Tianjic can simultaneously support computer-science-based machine learning algorithms and neuroscience-based biological models, and can freely integrate various neural networks and hybrid coding schemes to achieve seamless communication among multiple networks, including SNNs and ANNs. In summary, the paper introduces a novel chip architecture for brain-inspired computing that achieves flexibility and scalability by integrating cross-paradigm models and algorithms on one platform. The team hopes this result will accelerate the development of AGI and enable new practical applications.
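The multi-network control loop described above can be sketched as follows. This is a hypothetical illustration of the data flow only: the function names, the `SensorFrame` structure, and the toy stand-in models are our assumptions, while the real system runs the CNN, CANN, SNN, MLP, and hybrid decision network in parallel on the chip's FCores.

```python
# Hypothetical sketch of the bicycle's per-cycle data flow (illustrative only;
# model names and outputs are stand-ins, not the team's actual interfaces).
from dataclasses import dataclass

@dataclass
class SensorFrame:
    image: list        # camera frame (placeholder)
    audio: list        # microphone samples (placeholder)
    imu_angles: tuple  # (roll, steering_angle) from inertial sensors

def control_step(frame, cnn, cann, snn, mlp, decider):
    """One control cycle: run each specialised model on its modality,
    then fuse the results into actuator commands."""
    detections = cnn(frame.image)     # object detection
    target_pos = cann(detections)     # smooth target tracking
    command = snn(frame.audio)        # voice command, e.g. "left"
    balance = mlp(frame.imu_angles)   # steering correction for balance
    # The decision network arbitrates between goals (follow target,
    # obey the voice command, stay upright) and emits motor signals.
    return decider(target_pos, command, balance)

# Toy stand-in models, just to show how the outputs are fused:
out = control_step(
    SensorFrame(image=[0], audio=[0], imu_angles=(0.02, 0.0)),
    cnn=lambda img: [("person", 0.5)],
    cann=lambda det: 0.5,
    snn=lambda aud: "left",
    mlp=lambda ang: -ang[0],
    decider=lambda pos, cmd, bal: {
        "steer": bal + (-0.3 if cmd == "left" else 0.0),
        "speed": 1.0,
    },
)
```

The point of the sketch is the architecture, not the models: because every network lives on the same chip, the fusion step needs no off-chip signal conversion between the ANN-style and SNN-style components.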
Regarding several questions of interest in this research, Professor Shi Luping of the Department of Precision Instruments at Tsinghua University, Associate Researcher Pei Jing of the same department, and Postdoctoral Fellow Deng Lei of the University of California, Santa Barbara gave media interviews on behalf of the research team.

Q: What was the biggest challenge you encountered in this research?

Shi Luping: We started this research in 2012 and encountered many challenges. But we believe the biggest challenge came not from science or technology, but from the way disciplines are organized, which is not conducive to solving a problem like this. Deep integration across disciplines is therefore the key. For this research we assembled a multidisciplinary team and a brain-inspired computing research center drawing on seven schools and departments, covering brain science, computer science, microelectronics, electronics, precision instruments, automation, materials science, and more. I would like to thank the leadership of Tsinghua University for their strong support of interdisciplinary work, which was key to the project's success.

Deng Lei: On the chip side, the biggest challenge was how to achieve deep and efficient integration. I would emphasize two points. First, depth and efficiency. There are two popular families of neural network models today, one from computer science and one from brain science. The languages of these two families are very different: they have different computing principles, different signal encodings, and different application scenarios, so they require very different computing and storage architectures, and even the optimization goals of the designs differ.
This can be seen in the deep learning accelerators and neuromorphic chips available today, which basically have independent design systems. Deep fusion is therefore not simple. It does not mean designing a deep learning acceleration module, then a neuromorphic module, and bolting them together; that is not feasible, because real applications are complex and changeable, it is hard to fix the proportion of each part in advance, and the result would be inefficient. Second, a heterogeneous hybrid model may need a dedicated signal conversion unit between the two modules, which incurs a lot of additional cost. So designing a chip architecture that is compatible with both classes of models, can be flexibly configured, and still delivers high performance was the challenge in our chip design.

Q: Why did you choose a driverless bicycle as the entry point?

Shi Luping: The bicycle serves the chip. After repeated in-depth discussions about what kind of application platform would best demonstrate the new heterogeneous integration capability, which was not an easy decision, we settled on four considerations. First, we wanted a multimodal system somewhat like the brain, rather than the single-application AI algorithms common today: a complete loop covering perception, decision-making, and execution, so that it could exercise our heterogeneous fusion of multiple models. Second, we wanted it to interact with the real environment, rather than being an experiment in a computer room or a simulation on a computer. Third, we wanted a system that places power-consumption and real-time demands on the processing chip, to bring out the advantages of a dedicated chip.
Fourth, we wanted the system to be controllable and scalable through repeated experiments. Weighing these points, we finally chose the unmanned bicycle platform, which combines voice recognition, target detection and tracking, motion control, obstacle avoidance, and autonomous decision-making. Although it looks small, it is in fact a small but complete brain-inspired computing platform.

Q: Can brain-inspired systems surpass the human brain?

Shi Luping: People are very interested in whether brain-inspired technology can surpass the human brain. This is really the same as the perennial question of whether computers can surpass the human brain. Computers surpassed the human brain long ago; it is only a question of in which respects. Abilities we consider signs of genius, such as memorizing quickly and accurately or calculating quickly and accurately, are child's play for today's computers. However, on many levels of intelligence, especially where uncertainty is involved, in areas such as learning and autonomous decision-making, there is still a considerable gap between computers and human brains. Computers will gradually narrow that gap. As for whether they can surpass the human brain in every respect, I personally think more and more capabilities will at the technical level, because computing has the characteristic that it never regresses; it always moves forward. But I believe we humans are wise: as the field develops, we will deepen our understanding of it and control its risks. People worry about this question because they fear such technology will destroy humanity, as in science fiction movies. Yet we have already created something capable of destroying humanity, namely nuclear weapons.
But they have not destroyed humanity, because we have mastered them and can control them. For brain-inspired computing, strong artificial intelligence, and artificial general intelligence, we believe humans can use their wisdom to regulate the development path, so that these technologies benefit humanity and the risks are avoided to the greatest extent possible.

The collaborating institutions of this work include Beijing Lingxi Technology Co., Ltd., Beijing Normal University, the Singapore University of Technology and Design, and the University of California, Santa Barbara.