Unmanned driving technology is usually divided into three parts: environmental perception and positioning, decision planning, and motion control. Environmental perception and positioning determine the state of the environment and of the car itself: where the surrounding cars and pedestrians are, and whether the light ahead is red or green. Decision planning determines what the car should do: follow or overtake, speed up or slow down, and which route is safe, efficient, and reasonably comfortable. Motion control requires the actuators of a traditional car to be electronically retrofitted; once the decision instructions and trajectory are handed to the controller, actuators such as the motor, steering, and brakes must follow the planned trajectory as quickly and with as little deviation as possible, just as a strong body carries out the instructions of the brain.

Environmental perception is the "eyes" of autonomous driving

In driverless cars, the sensors form a perception module that replaces the driver's sensory organs. It quickly and accurately obtains environmental information, including the distance to obstacles, the state of the traffic light ahead, and the numbers on speed limit signs, as well as vehicle-state information such as position and speed. This is the guarantee of safe driving. Commonly used sensors for sensing the environment include cameras, lidars, millimeter-wave radars, and ultrasonic sensors; sensors for determining the vehicle's own state include GPS/inertial navigation and wheel speed sensors.

[Figure: Self-driving cars require multiple sensors to work together]

A camera can classify obstacles according to the visual features of objects. If depth information is also needed, two cameras are required, a setup generally called binocular stereo vision. The two cameras are mounted a fixed distance apart and, much like the parallax between human eyes, the pixel offset of the same point between the two images is converted into the object's three-dimensional position through the principle of triangulation. Besides helping the car determine its own position and speed, the main job of the binocular camera is to recognize traffic lights and road signs, so that the autonomous vehicle obeys traffic rules. However, the binocular camera is strongly affected by weather and lighting conditions, and the computation involved is substantial, which places very high demands on the computing unit.
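To make the triangulation principle concrete, here is a minimal sketch of the depth computation for a rectified stereo pair. The focal length, baseline, and disparity below are made-up illustrative numbers, not parameters of any particular camera.

```python
# Minimal sketch: recovering depth from binocular disparity via triangulation.
# All numeric values are illustrative assumptions.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two camera centers in meters
    disparity_px -- horizontal pixel offset of the same point between images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# Example: a 700 px focal length, 0.12 m baseline, and 14 px disparity
# place the object at 700 * 0.12 / 14 = 6.0 m.
print(stereo_depth(700.0, 0.12, 14.0))  # -> 6.0
```

Note how depth accuracy degrades as disparity shrinks: distant objects produce offsets of only a pixel or two, which is one reason stereo vision demands careful calibration and heavy computation.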
Commonly used radars on unmanned vehicles are lidar and millimeter-wave radar. Lidar detects the position, speed, and other characteristics of a target by emitting laser beams. It has a wide detection range and high accuracy in distance and position, so it is widely used for obstacle detection, acquiring three-dimensional environmental information, inter-vehicle distance keeping, and obstacle avoidance. However, lidar is easily affected by weather and performs poorly in rain, snow, and fog. In addition, the more beams a lidar emits, the more point-cloud points it collects per second and the stronger its detection performance, but also the higher its price: a 64-line lidar costs about ten times as much as a 16-line one. Currently, Baidu's and Google's driverless cars are equipped with 64-line lidars.

[Figure: Schematic diagram of a lidar point cloud]

Millimeter-wave radar has a narrow beam, high resolution, and strong anti-interference ability, and it penetrates fog, smoke, and dust well. Its environmental adaptability is better than lidar's: rain, fog, and darkness have almost no effect on the propagation of millimeter waves. The sensor itself is small and light and offers high spatial resolution, and with the development of monolithic microwave integrated circuit technology, the price and size of millimeter-wave radar have dropped significantly. However, its detection range is directly limited by losses in its frequency band, and it cannot reliably sense pedestrians or precisely model all surrounding obstacles.

The data processing of ultrasonic sensors is simple and fast. They are mainly used for short-range obstacle detection, typically at distances of about 1 to 5 meters, but they cannot resolve detailed position information. When the car is driving at high speed, ultrasonic ranging cannot keep up with the real-time changes in inter-vehicle distance, and the error is large; the scattering angle of ultrasound is also wide and its directionality poor, so echoes from distant targets are weak and measurement accuracy suffers. In low-speed, short-range measurement, however, ultrasonic sensors have a clear advantage.

[Figure: Ultrasonic sensors]

GPS/inertial navigation and wheel speed sensors are mainly used to determine the vehicle's own position, and their data are usually fused to improve positioning accuracy. Multi-sensor fusion is a very common technique in perception modules because it reduces error. For example, edges in an image often occur where depth is discontinuous; by extracting edges from the two-dimensional camera image and registering them with the depth information provided by the lidar, the vanishing point of the road in the perspective image can be matched against the three-dimensional radar data, so that the road surface and surrounding buildings can be segmented more accurately. High-precision maps are another strong support for autonomous driving: with sufficiently accurate map information, lane lines can be taken directly from the map, reducing the burden of identifying them visually.
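Returning to the GPS/wheel-speed fusion mentioned above: the article does not name a specific algorithm, but a Kalman filter is the textbook choice for combining intermittent GPS fixes with continuous wheel-speed dead reckoning. The one-dimensional sketch below is deliberately simplified, and the noise parameters are assumptions.

```python
# Illustrative 1-D Kalman filter fusing wheel-speed odometry (prediction)
# with GPS position fixes (correction). Noise values are assumed, not
# taken from the article, which does not specify a fusion algorithm.

class KalmanFusion1D:
    def __init__(self, x0: float, p0: float, q: float, r: float):
        self.x = x0   # position estimate (m)
        self.p = p0   # estimate variance
        self.q = q    # process noise added per odometry step
        self.r = r    # GPS measurement noise variance

    def predict(self, wheel_speed: float, dt: float) -> None:
        # Dead-reckon forward with the wheel speed sensor; uncertainty grows.
        self.x += wheel_speed * dt
        self.p += self.q

    def update(self, gps_pos: float) -> None:
        # Blend in the GPS fix, weighted by relative uncertainty.
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (gps_pos - self.x)
        self.p *= (1.0 - k)

kf = KalmanFusion1D(x0=0.0, p0=1.0, q=0.05, r=4.0)
kf.predict(wheel_speed=10.0, dt=0.1)  # odometry says we moved ~1 m
kf.update(gps_pos=1.3)                # a noisy GPS fix nudges the estimate
print(round(kf.x, 3))
```

The design point is that neither sensor is trusted alone: odometry drifts, GPS is noisy, and the gain continuously weights one against the other.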
Decision-making and planning are the "brains" of autonomous driving

Let's look at how graphics giant NVIDIA makes behavioral decisions for autonomous driving. Mainstream decision frameworks fall into two camps, decision-making based on expert algorithms and decision-making based on machine learning, and the latter is receiving more and more attention. NVIDIA, for example, uses a convolutional neural network (CNN): the raw pixel image captured by the car's front camera is fed to a trained CNN, which directly outputs the car's steering command. Its driverless car can drive on unstructured roads such as mountain roads and construction sites; such conditions are difficult to enumerate, so carving up every possible situation with conditional judgments, as a traditional expert algorithm would, is unrealistic.

NVIDIA's learning framework is as follows. The training data consist of single frames sampled from video together with the corresponding steering commands. The predicted steering command is compared with the ideal one, and the weights of the CNN are adjusted through the backpropagation algorithm to bring the prediction as close to the ideal value as possible. Once trained, the model generates steering commands directly from the front-facing camera's images.

The core CNN structure is as follows: the input is an image, and the output is the inverse of the turning radius. The first layer of the network (the normalized input plane) normalizes the input image; performing normalization inside the network lets the normalization scheme be adjusted along with the architecture and accelerated on the GPU. Five convolutional layers then extract features, playing the role of the visual nerves that distinguish different objects, and three fully connected layers follow them to simulate the brain's decision-making.
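Below is a hedged sketch of that architecture in PyTorch: a normalization stage, five convolutional layers, three fully connected layers, and a single output for the inverse turning radius. The layer sizes follow NVIDIA's published PilotNet figures and are assumptions rather than details given in this article; the training step at the end mirrors the compare-and-backpropagate loop described above.

```python
# Sketch of the described end-to-end network. Layer sizes are assumptions
# based on NVIDIA's published PilotNet design, not on this article.
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Five convolutional layers extract visual features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        # Three fully connected layers map features to one control value.
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),            # inverse of the turning radius
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x / 127.5 - 1.0              # normalized input plane: [-1, 1]
        return self.regressor(self.features(x))

# One training step: compare predicted and recorded steering, backpropagate.
model = SteeringCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
frames = torch.rand(8, 3, 66, 200) * 255   # batch of 66x200 camera frames
targets = torch.randn(8, 1) * 0.01         # recorded inverse turning radii
loss = nn.functional.mse_loss(model(frames), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```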
In NVIDIA's scheme, the end-to-end decision process from image input to control output (steering angle) is a black box: if something goes wrong, the cause of the faulty decision cannot be traced the way it can with an expert algorithm. A more practical approach, therefore, is to use neural networks for perception and cognition — recognizing traffic lights, the posture of people and cars, the drivable area of a mountain road, and so on — and then hand this processed environmental information to a decision maker for judgment. The environmental information can take the form of a map integrating all obstacle information, or of a driving situation map that only marks which areas are relatively safe or dangerous without detailing every obstacle. There are many schemes worth trying.

Which road should you take? Route planning matters!

Path planning is the basis of intelligent vehicle navigation and control. Viewed from the level of trajectory decision-making, it divides into global and local path planning. Global path planning produces a collision-free, passable route from the start to the goal based on a global map database, but only as a rough route. During actual driving, the vehicle is affected by the direction, width, curvature, intersections, and roadblocks of the route, plus the uncertainty of the local environment and of the vehicle's own state, so it constantly meets situations the global plan cannot foresee. A collision-free local path must then be planned in real time from local environment information and the vehicle's state; this is local path planning. The local planner takes road and obstacle information from the perception system and the start and goal positions from upper-level decision-making, and generates a safe, smooth driving trajectory in real time.

Since the vehicle's path is a trajectory with a time attribute, trajectory planning is usually split into path planning and speed planning. Path planning often uses spline curves to fit a path that satisfies obstacle-avoidance, maximum-curvature, and curvature-continuity constraints, while speed planning generates a speed distribution along the fitted path that satisfies constraints such as maximum speed and maximum acceleration, as in the sketch below. The final plan is sent to the chassis control system as steering-angle and speed data, enabling lane following and obstacle avoidance.
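As an illustration of that two-stage split, the sketch below fits a cubic spline through hypothetical lane-change waypoints and then builds a speed profile capped by maximum-speed and maximum-acceleration limits. All waypoints and limits are invented values, and scipy's spline stands in for whatever curve family a production planner would use.

```python
# Illustrative two-stage trajectory plan: spline path, then speed profile.
# Waypoints and limits are made-up values for illustration.
import numpy as np
from scipy.interpolate import CubicSpline

# Path planning: cubic spline through lane-change waypoints (x, y) in meters.
xs = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
ys = np.array([0.0, 0.2, 1.8, 3.4, 3.6])
path = CubicSpline(xs, ys)            # smooth, curvature-continuous between knots
sample_x = np.linspace(0.0, 40.0, 81)
sample_y = path(sample_x)

# Speed planning: accelerate along the path without exceeding the limits.
V_MAX, A_MAX, DT = 15.0, 2.0, 0.1     # m/s, m/s^2, s
speeds = [0.0]
while speeds[-1] < V_MAX:
    speeds.append(min(V_MAX, speeds[-1] + A_MAX * DT))

# The chassis controller would receive (steering angle, speed) pairs built
# from the spline heading and this speed profile.
print(f"{len(sample_x)} path points, top speed {speeds[-1]:.1f} m/s")
```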
Once everything is decided, all that's left is to move the vehicle!

Compared with traditional controllers and actuators, driverless vehicles prefer drive-by-wire actuators, such as steer-by-wire, brake-by-wire, and drive-by-wire throttle, which allow precise control. In local path planning, the autonomous vehicle plans an ideal lane-change path after weighing the surrounding environment, the vehicle's state, and other constraints, and transmits the commands to the relevant actuators. If an actuator cannot keep up with the steering angle the path demands, the vehicle will deviate from the planned path. The motion control algorithm is therefore just as crucial.
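The article does not name a specific tracking algorithm; pure pursuit is one widely used method for turning a planned path into steering angles, sketched here with an assumed wheelbase and lookahead point.

```python
# Pure pursuit steering sketch (a common path-tracking technique; the
# article names none). Wheelbase and lookahead point are assumptions.
import math

def pure_pursuit_steering(dx: float, dy: float, wheelbase: float) -> float:
    """Steering angle that curves the vehicle toward a lookahead point.

    dx, dy    -- lookahead point in the vehicle frame (forward, left), meters
    wheelbase -- distance between front and rear axles, meters
    """
    lookahead = math.hypot(dx, dy)
    alpha = math.atan2(dy, dx)          # heading error to the lookahead point
    # Bicycle-model geometry: delta = atan(2 * L * sin(alpha) / l_d)
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# A point 8 m ahead and 0.5 m to the left asks for a gentle left turn.
angle = pure_pursuit_steering(8.0, 0.5, wheelbase=2.7)
print(f"steering angle: {math.degrees(angle):.2f} deg")
```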
What is the future development trend of driverless cars?

At present there are two main technical routes for unmanned vehicles. One, introduced in this article, obtains information from the vehicle's own sensors; the other obtains environmental information through vehicle-to-vehicle and vehicle-to-infrastructure communication based on 5G technology. In comparison, the former does not depend on rebuilding the infrastructure or on whether the other cars on the road are intelligent, so it is easier to implement. Although unmanned driving can already handle 99% of road conditions relatively safely, the remaining 1% demands 99% of the engineers' effort. Before 2020, smart cars will likely still reach consumers in the form of ADAS.