For the first time, a drone defeated human opponents in one-on-one championship races. Behind the drone was an artificial intelligence system called Swift, and the related paper was featured on the cover of the journal Nature.

How does AI become a game master? In games such as chess, StarCraft, Dota 2 and Gran Turismo, how do the computer-controlled players carry out their sequences of moves? You may not have heard of deep reinforcement learning (RL), but you have almost certainly encountered it, because it is the technology behind these virtual players.

In simulations and board games, AI can easily outperform humans, but in physical-world competitions its decisions and actions face many more difficulties.

First-person view (FPV) drone racing is a sport in which professional pilots fly high-speed drones around a three-dimensional track. The pilot watches the video stream transmitted by the onboard camera, seeing the environment from the drone's perspective, and accelerates, decelerates and turns to guide the drone through the gates and obstacles on the track.

Swift (blue) and a human pilot (red) racing through seven square gates, which must be passed one by one. Image source: see References

It is difficult for autonomous drones to fly at the level of professional pilots, because the robot must fly at its physical limits while estimating its speed and position only from onboard sensors. Traditional approaches to autonomous drone racing rely on trajectory planning and model predictive control (MPC), but these methods only work under near-ideal conditions; once a significant disturbance occurs, the whole system breaks down.

Image source: piqsels

The Swift system overcomes this difficulty. It consists of two key modules. The first is a perception system, which converts high-dimensional visual (i.e., spatial, stereoscopic) and inertial information into a low-dimensional encoding. The second is a control system, which ingests the low-dimensional encoding produced by the perception system and generates control commands. Combining the two lets the drone adjust its decisions in real time in response to subtle changes in the physical environment (a rough illustrative code sketch of this two-module pipeline appears below).

Of course, advanced perception and control alone are not enough to beat human champion pilots. So how is the Swift system better than humans? It has certain structural advantages over human pilots.

The Swift system. Image source: see References

First, it can use inertial data from an onboard inertial measurement unit (IMU). This is analogous to the human vestibular system, which human pilots cannot rely on during a race because they are not sitting in the aircraft and cannot feel the accelerations acting on it.

Second, Swift benefits from lower sensorimotor latency: about 40 milliseconds, compared with an average of 220 milliseconds for expert human pilots. FPV races are flown with quadcopters, which are among the most agile machines ever built. In competition, the aircraft produce thrust of five times their own weight or more, reach speeds above 100 km/h even in confined spaces, and sustain accelerations several times that of gravity. Lower latency therefore makes the aircraft noticeably more agile.

Before the actual races, the human pilots practiced on the track for a week.
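To make the two-module structure described above more concrete, here is a rough Python/PyTorch sketch of a perception encoder feeding a control policy. This is not the authors' implementation: the layer sizes, input shapes and the four-element command (collective thrust plus body rates) are illustrative assumptions, and the real Swift policy is trained with deep reinforcement learning rather than used with random weights as here.

```python
# Illustrative sketch only (not the authors' code) of the two-module structure
# described in the article: a perception module that compresses high-dimensional
# camera + inertial input into a low-dimensional encoding, and a control policy
# that maps that encoding to flight commands. All sizes and names are assumptions.

import torch
import torch.nn as nn

class PerceptionModule(nn.Module):
    """Compresses a camera frame and IMU readings into a small state encoding."""
    def __init__(self, encoding_dim: int = 32):
        super().__init__()
        self.vision = nn.Sequential(              # tiny CNN stand-in for the real perception stack
            nn.Conv2d(3, 8, kernel_size=5, stride=4), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=4), nn.ReLU(),
            nn.Flatten(),
        )
        self.fuse = nn.LazyLinear(encoding_dim)   # fuses flattened vision features with the 6-D IMU vector

    def forward(self, image: torch.Tensor, imu: torch.Tensor) -> torch.Tensor:
        feats = self.vision(image)
        return torch.tanh(self.fuse(torch.cat([feats, imu], dim=-1)))

class ControlPolicy(nn.Module):
    """Maps the low-dimensional encoding to a command: thrust plus body rates."""
    def __init__(self, encoding_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(encoding_dim, 64), nn.ReLU(),
            nn.Linear(64, 4),                     # [thrust, roll rate, pitch rate, yaw rate] (assumed layout)
        )

    def forward(self, encoding: torch.Tensor) -> torch.Tensor:
        return self.net(encoding)

if __name__ == "__main__":
    perception, policy = PerceptionModule(), ControlPolicy()
    image = torch.rand(1, 3, 120, 160)            # dummy onboard camera frame
    imu = torch.rand(1, 6)                        # dummy accelerometer + gyroscope reading
    command = policy(perception(image, imu))      # one pass of the sense -> encode -> act loop
    print("control command:", command.detach().numpy())
```

The point of the low-dimensional encoding is that the control policy only sees a compact summary of what the camera and IMU report, which keeps each sense-encode-act cycle fast; the roughly 40-millisecond sensorimotor latency cited in the article is what such a tight loop is meant to achieve.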
After that week of practice, the drone controlled by Swift and the human pilots had to pass through each gate on the physical track in the correct order. Swift won head-to-head races against all three human champions and even set the record for the fastest race time.

Image source: piqsels

Even though an AI-controlled drone has now beaten humans, autonomous mobile robots still have plenty of room for improvement. For example, when a human pilot crashes, the pilot can often keep flying and finish the track as long as the hardware still works, whereas Swift was never trained to recover after a collision.

Even with these limitations, this achievement is a milestone in mobile robotics and machine intelligence, and it should help accelerate the development of autonomous ground vehicles, aerial vehicles and personal robots.

References
Original paper: Kaufmann, E., Bauersfeld, L., Loquercio, A. et al. Champion-level drone racing using deep reinforcement learning. Nature 620, 982–987 (2023). https://doi.org/10.1038/s41586-023-06419-4

Planning and production
Source: Voice of the Science and Technology Association
Author: SamKakeru, popular science writer
Editors: Yang Yaping and Jin Yufen