Not long ago, Tesla announced that Autopilot on new Model 3 and Model Y vehicles in North America would abandon radar entirely and rely on cameras alone, a pure-vision approach. As expected, the news caused an uproar. The reason is simple: the smart-car market today regards LiDAR as an essential core component for true autonomous driving, so when Tesla, the leader in the field, runs counter to that consensus, confusion is inevitable. But Tesla has worked on autonomous driving for many years and certainly did not make this decision on a whim. What exactly prompted Tesla to drop radar completely and make cameras the sole core sensor for its autonomous driving features? And where is autonomous driving technology heading?

Cameras vs. LiDAR: different routes, each with its own strengths

Before weighing the pure-vision camera route against the LiDAR route, it helps to understand the hardware solutions used today in cars that ship with autonomous driving features. The mainstream models sold by Li Auto and NIO all adopt a "camera + radar" fusion solution, with cameras as the primary sensor and radar ranging as the auxiliary. After several years of iteration, this is currently the most mature, most reliable, and relatively lowest-cost mainstream approach.

The recently popular LiDAR solution is an upgrade of that fusion scheme: a LiDAR capable of higher-grade spatial ranging becomes the core of the system, supplemented by cameras for object recognition, achieving high-level autonomous driving through spatial perception. The XPeng P5, launched in April this year, is billed as the world's first production model equipped with LiDAR.

Tesla, by contrast, has now announced that it will abandon radar entirely and use cameras alone for both visual recognition and ranging. At present Tesla is the only company developing autonomous driving along this route. The debate between pure camera solutions and LiDAR solutions is, in essence, a debate between a visual-perception route and a spatial-perception route. So why did Tesla choose pure vision while XPeng, NIO, and other smart-car makers went the other way?

Tesla's business philosophy has always been to occupy the market at scale by cutting the production cost of a superior product, then use its high market share to obtain cheaper components, and then repeat the cost-cutting cycle. Seen that way, the reason for abandoning radar is simple: a camera-only visual-perception solution is very cheap. The monocular cameras Tesla currently uses cost only 150 to 600 yuan each, so equipping a vehicle with eight cameras costs at most about 4,800 yuan. Pure vision therefore lets Tesla cut manufacturing costs significantly, lower the vehicle's selling price, seize the market quickly on price, and build a moat of scale.

This is also inseparable from Tesla CEO Elon Musk's devotion to "first principles." He reasons that since humans can drive using only their eyes to observe road conditions, a camera system that collects the same road-condition information by observing its surroundings should achieve the same effect, with no need for expensive radar.
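The article's own figures make that cost gap easy to quantify. Below is a minimal back-of-the-envelope sketch using only the numbers quoted in this piece (150 to 600 yuan per camera, eight cameras per car, and the roughly $80,000 64-line LiDAR price cited further down); the exchange rate is my assumption, for rough comparison only:

```python
# Back-of-the-envelope sensor cost comparison, using only the figures
# quoted in this article. The exchange rate is an assumed value.
CAMERA_COST_YUAN = (150, 600)   # per monocular camera (article's range)
CAMERAS_PER_CAR = 8             # Tesla's camera count (article's figure)
LIDAR_COST_USD = 80_000         # "best 64-line LiDAR in China" (article's figure)
USD_TO_YUAN = 6.5               # assumed rate, for illustration only

camera_total = tuple(c * CAMERAS_PER_CAR for c in CAMERA_COST_YUAN)
lidar_total_yuan = LIDAR_COST_USD * USD_TO_YUAN

print(f"Cameras per car: {camera_total[0]:,} - {camera_total[1]:,} yuan")
print(f"One 64-line LiDAR: ~{lidar_total_yuan:,.0f} yuan")
print(f"LiDAR vs. max camera spend: ~{lidar_total_yuan / camera_total[1]:.0f}x")
```

Even with a generous error bar on the LiDAR price, the per-vehicle gap is about two orders of magnitude, which is Tesla's whole argument compressed into one number.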
Indeed, the LiDAR solution has an obvious advantage in spatial ranging: because a laser pulse's reflection time can be measured extremely precisely, it achieves centimeter-level ranging accuracy, which lets the autonomous driving system judge road conditions far more reliably. Its weakness is that the high cost makes it hard to bring to every consumer.
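The physics behind that "very short laser reflection time" is simple time-of-flight arithmetic: distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name is mine, not from any LiDAR SDK):

```python
# Time-of-flight ranging: a laser pulse travels to the target and back,
# so distance = (speed of light * round-trip time) / 2.
C = 299_792_458  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance to a target from a measured laser round-trip time."""
    return C * round_trip_s / 2

# A target about 15 m ahead returns the pulse in roughly 100 nanoseconds:
print(f"{tof_distance_m(100e-9):.2f} m")   # ~14.99 m

# Centimeter accuracy demands picosecond-scale timing: a 1 cm range error
# corresponds to a round-trip timing error of only ~67 picoseconds.
print(f"{(2 * 0.01 / C) * 1e12:.0f} ps")   # ~67 ps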
Although the two technical routes differ greatly in implementation, they are clearly not substitutes for one another; both are legitimate directions for autonomous driving technology. The LiDAR route pursued by the three domestic smart-car giants NIO, XPeng, and Li Auto is more accurate than Tesla's pure-vision camera route and copes better with one-off, unexpected situations, but it suffers the pain of high cost. XPeng Motors' official website states plainly that the best 64-line LiDAR in China costs as much as $80,000, a price destined to keep it out of mainstream cars. Moreover, L2+ autonomous driving calls for a LiDAR with more than 100 lines. So can car companies cut costs by lowering the camera-capability R&D targets of their LiDAR solutions, or by simply adopting a pure-vision solution like Tesla's?

A single route is unreliable; vehicle-road collaboration is the right way

Although the LiDAR solution and the camera-only vision solution are both promising technical routes, neither shortcut is feasible: LiDAR cannot be popularized quickly by cutting back the camera side of a LiDAR solution, nor by switching wholesale to pure visual perception.

First, LiDAR point-cloud imaging still struggles badly with extreme road conditions. Its performance degrades significantly around highly reflective objects and in strong light, which can cause the autonomous driving system to misjudge and lead to traffic accidents. Cutting the camera's recognition capability to make a cheap LiDAR solution is therefore not a safe or reliable path.

A pure-vision solution that relies on cameras alone is riskier still. It depends on the system identifying, analyzing, and learning from what the cameras collect, then driving stably the next time it passes the same road conditions. But the driving data the system accumulates can never fully cover every emergency each user meets in real driving, so the pure visual-perception route performs poorly on rare, one-off conditions. Tesla's pure-vision Autopilot also requires the high beams to stay on at night, which is plainly incompatible with the social convention of dipping one's lights for oncoming vehicles.

Hence, after Tesla announced it would soon remove radar from the Model 3 and Model Y, evaluation bodies including Consumer Reports expressed their distrust of jumping straight to a pure-vision solution by downgrading the safety ratings of Tesla's models. That alone shows a pure-vision autonomous driving solution is not yet a reliable option.

It is clear that autonomous driving will remain in the camera-plus-radar fusion stage for a long time to come, because neither the visual-perception route nor the spatial-perception route can strike a good balance between capability and cost. In the meantime, however, smart cars do have a shortcut to raising their autonomous driving level.

In 2019, China proposed "vehicle-road collaboration" as a transport-construction policy. Unlike single-vehicle intelligence, vehicle-road collaboration builds on it with far stronger road-information gathering: information comes not only from a single car's cameras and radar, but also from roadside equipment monitoring pedestrians, buildings, traffic lights, and more, all connected to the car's autonomous driving system and streaming real-time road conditions to it. The car thereby gains the ability to collect information across space (a minimal sketch of such a roadside-to-vehicle exchange appears at the end of this article). With spatial information about the road ahead, the system can predict road conditions in advance, and by sharing information vehicle-to-vehicle it enables large-scale traffic planning with vehicles as the units and road areas as the map. That would not only improve urban traffic conditions significantly but also markedly raise the safety of autonomous driving.

Baidu previously launched its first vehicle-road collaboration project, a Robotaxi ride service, in Changsha, and has since reached smart-transportation cooperation agreements with five cities, Changsha among them. In practice, though, collecting road-condition information and keeping transmission latency low remain very hard, and Baidu CEO Robin Li has admitted that fully popularizing vehicle-road collaboration in China will take at least 15 years.

Overall, neither of today's routes, visual perception or spatial perception, is the best choice on its own, because neither strikes the best balance between autonomous driving capability and cost. A combination of the two may well be the best option.
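To make the vehicle-road collaboration idea concrete, here is the sketch referenced above. It is purely illustrative: the message fields and function names are hypothetical and do not come from any real V2X standard (actual deployments use standardized message sets over protocols such as C-V2X). It shows only the core concept the article describes, roadside infrastructure broadcasting observations that a car fuses with its own perception:

```python
# Illustrative sketch of vehicle-road collaboration. All names and
# message fields here are hypothetical, not from any real V2X standard.
from dataclasses import dataclass

@dataclass(frozen=True)
class RoadsideObservation:
    """One object reported by a roadside unit (RSU) or by the car itself."""
    object_type: str       # "pedestrian", "vehicle", "traffic_light", ...
    position_m: tuple      # (x, y) in a shared map frame
    timestamp_s: float     # when the observation was made

def fuse(onboard: list, roadside: list) -> list:
    """Naively merge the car's own detections with RSU broadcasts,
    deduplicating on (type, position). A real system must also handle
    latency, trust, and coordinate alignment -- the hard problems the
    article says remain unsolved."""
    merged = {(o.object_type, o.position_m): o for o in onboard + roadside}
    return sorted(merged.values(), key=lambda o: o.timestamp_s)

# The payoff case: a pedestrian behind a building, invisible to the
# car's cameras, is reported by an RSU before the car turns the corner.
rsu_msgs = [RoadsideObservation("pedestrian", (42.0, 7.5), 1625.30)]
own_view = [RoadsideObservation("vehicle", (10.0, 0.0), 1625.31)]
for obs in fuse(own_view, rsu_msgs):
    print(obs)
```

The point of the example is the occlusion case: cross-space information is exactly what neither a camera nor an onboard LiDAR can supply, which is why the article treats vehicle-road collaboration as complementary to single-vehicle intelligence rather than competing with it.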