Does autonomous driving mean no accidents? After NIO's first fatal accident involving NIO Pilot, discussion of autonomous driving safety has intensified, and a crisis of trust is spreading to every player in the field. Two polar views have emerged. One camp believes that after an "autonomous driving accident," all autonomous driving should be re-evaluated: the technology has obvious defects, human lives are at stake, so research should be stopped and promotion banned to prevent similar accidents at the root. The other extreme insists that autonomous driving should not take the blame for assisted driving, that "true autonomous driving" will be absolutely safe, that autonomous driving equals no accidents. One side wants restriction, the other mounts a defense. But both are cognitive biases, and both may hinder the healthy development of a new technology that could benefit the country, the people, and humanity. We should neither throw the baby out with the bathwater nor equate autonomous driving with zero accidents. It is, however, time to get back to basics and sort out the problems left over from the technology's period of wild growth. What problems has autonomous driving left unresolved? The most important is the classification of autonomous driving technology levels, and the "misunderstandings" aggravated by translated terminology. Amid the heated discussion of the NIO accident, Li Xiang, founder of Li Auto; Zhou Hongyi, chairman of 360, which has invested in car manufacturing; and Shen Hui, founder of WM Motor, were all debating issues around autonomous driving technology levels.
Some say these rivals are "adding insult to injury," but look closely at the "professional jargon" they criticize and the unified-terminology initiatives they propose, and you can feel that they are trying to save themselves out of a shared sense of crisis. As long as the questions around autonomous driving technology levels remain muddled, the industry's wild growth, with bad money driving out good, will never change. The core of so-called autonomous driving technology levels is this chart. The "levels" here are industry jargon written as "L" plus a number: L2 means Level 2, L3 means Level 3, and so on. This grading standard comes from an organization called the Society of Automotive Engineers, abbreviated SAE. When global autonomous driving R&D first began to flourish around 2014, SAE took the lead in formulating a set of technology level standards, aiming to guide the industry's development by defining distinct levels. The standard takes many factors into account, but its core can be summarized in one: the degree to which the AI system participates in driving the car. It defines six levels, 0 through 5. At L0, AI involvement is lowest, at most warnings and momentary assistance in emergencies. At L5, AI involvement is so high that driving requires no human at all; this is also called fully autonomous driving. The RoboTaxi driverless taxis we often hear about, or concept cars with no brake pedal, accelerator, or steering wheel, are efforts toward L5. Among these six levels there is a key watershed: L4.
Below L4, the human driver bears ultimate responsibility for driving; at L4 and above, the AI system is the ultimately responsible party. L4 is the watershed of authority and responsibility. In other words, below L4, whether at L2 or L3, the mode is human-machine co-driving, and the AI system only provides assistance. The difference is that at L2, AI executes what you command: lane centering, adaptive cruise control, or waiting for an opening to change lanes after you instruct it to. At L2, the AI does not "think" and makes no decisions on its own. The biggest difference at L3 is that AI begins to help the driver make decisions. The simplest example is lane changes: the human driver no longer needs to give the instruction. While driving, the AI decides based on your route and road conditions, and can even complete most driving behavior in relatively closed scenarios such as highways and urban ring roads. At this stage, however, the human remains the core decision-maker, so special situations and emergency decisions must still be handled by the human driver. Each company's product at this stage carries a different name, NOA, NOP, NGP..., but they all refer to AI driving assistance in scenarios with relatively controllable road conditions, such as highways and ring roads. Citing SAE's terminology and its translations, they can indeed be called "L2 autonomous driving," "L3 autonomous driving," even "L2.5 autonomous driving"... But in the end they are all assisted driving: AI provides assistance, and final driving decisions rest with the car owner. This also means that when an accident happens, the provider of the "AI assistance" can fall back on the loophole that "the final driving decision belongs to the car owner."
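The level taxonomy and the L4 responsibility watershed described above can be captured in a minimal sketch (this follows the article's framing, not any vendor's real API; the level summaries paraphrase SAE J3016):

```python
# A minimal sketch of the SAE levels as described in the text,
# with L4 as the watershed of ultimate responsibility.

SAE_LEVELS = {
    0: "No automation: warnings and momentary emergency assistance only",
    1: "Driver assistance: steering OR speed support",
    2: "Partial automation: steering AND speed support, driver supervises",
    3: "Conditional automation: system drives, driver must take over on request",
    4: "High automation: system drives within a defined domain, no takeover needed",
    5: "Full automation: system drives everywhere, no pedals or wheel required",
}

def ultimately_responsible(level: int) -> str:
    """Below L4 the human driver bears final responsibility; at L4+ the system does."""
    if level not in SAE_LEVELS:
        raise ValueError(f"unknown SAE level: {level}")
    return "human driver" if level < 4 else "AI system"
```

On this framing, `ultimately_responsible(3)` still returns `"human driver"`, which is exactly the "loophole" the article describes: everything below L4, however capable, is assisted driving.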
The problem is that autonomous driving is itself new, and the technology level standards are even more of an "industry jargon" that ordinary owners and users scarcely understand. On top of that, technology providers with vested interests may speak only of "autonomous driving" before the sale, exploiting cognitive bias to mislead. Hence the bitter jokes: "Autonomous driving before purchase, assisted driving after the accident," and "The owner failed to use it correctly." Now that an accident has happened, SAE's technical grading and its translations do deserve some blame. This is also why it is time for the industry to unify its terminology: user trust and regulatory confidence cannot withstand another such accident. Objectively, though, SAE is not entirely at fault. When it formulated the standards in 2014, it saw the ultimate goal of RoboTaxi but could not foresee that two different routes would develop around the technology. Two major routes for autonomous driving? The Tesla route and the Waymo route. When SAE first set the standards, these were merely two different propositions for advancing autonomous driving; neither had yet borne fruit or had real-world impact. The two routes essentially split at L4. As noted, L4 is the dividing line on whether humans play the decisive role in driving. Tesla is the representative of one route: iterate upward from below L4 to L4 and above, later called mass-produced autonomous driving. Waymo, under Google's parent, believes that only starting directly from L4 is consistent with the "original intention of safety"; this was later distinguished as fully autonomous driving. The two camps have traded attacks and verbal battles ever since.
Moreover, the pioneer and earliest practitioner of both routes was actually Google. But Google later decided it could not take what became known as the "Tesla route." The Tesla route holds that autonomous driving can be continuously upgraded through human-machine co-driving and iteration on driving data, eventually climbing from L2 to L5. Tesla's so-called "shadow mode" lets the AI learn human driving behavior while human and machine drive together. For example, when the AI is driving and suddenly runs into difficulty, the human takes over and completes the maneuver; the system flags the event, the AI model is trained on similar problems, and the model improves. Tesla's route has in fact kept demonstrating its feasibility: over the past few years its autonomous driving capabilities have made significant progress. The route's advantages are concentrated in cost and large-scale data iteration. Wherever a car is sold, it begins an autonomous driving "road test." The core requirements are to sell more cars and to keep the hardware and software costs of the "autonomous driving" package within what users can afford. But Tesla's route also contains an obvious paradox. On one hand, mass-produced autonomous driving can replace the human driver in some scenarios; on the other, it requires the driver to take over in an emergency. That asks a person who has long been relaxed to stay focused for the entire journey. Consider an experience many of us have had: you are allowed to zone out in class, but the moment the teacher calls on you, you must answer immediately. The bitter experience of many owners shows that drivers in a relaxed state will watch videos, fall asleep, and over-trust Tesla's vision-based perception system.
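The shadow-mode idea described above, flagging moments where the human's action diverges from (or overrides) the AI's, can be sketched roughly as follows. This is a hypothetical illustration of the concept, not Tesla's actual implementation; all names are invented:

```python
# Hypothetical sketch of "shadow mode": the AI proposes an action in the
# background; when the human's actual action disagrees, the scenario is
# flagged so it can later be used to retrain the driving model.
from dataclasses import dataclass, field

@dataclass
class ShadowLogger:
    flagged: list = field(default_factory=list)

    def observe(self, scenario_id: str, ai_action: str, human_action: str) -> bool:
        """Return True (and log the scenario) when AI and human disagree."""
        if ai_action != human_action:
            self.flagged.append((scenario_id, ai_action, human_action))
            return True
        return False

logger = ShadowLogger()
logger.observe("merge-001", ai_action="hold_lane", human_action="hold_lane")
logger.observe("merge-002", ai_action="hold_lane", human_action="change_lane")
# Only "merge-002" is flagged; in the described scheme, such disagreement
# cases would be collected and used to train the model on similar problems.
```

The design point the article makes is visible here: the fleet generates training signal as a side effect of normal human driving, which is why the route scales with cars sold.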
So when the teacher calls on them, they are caught off guard. But road driving involves life and death, and the price can be extreme. Musk once said that owners who try autonomous driving never go back; some owners, it was quipped, really "can't go back." And Tesla is not an isolated case of this paradox. In any human-machine co-driving state, you cannot ask every driver to relax and yet be ready to take over at any moment. People have always been the most uncertain factor in driving. It was precisely this insight into human nature and the "paradox" that led Google (Waymo) to change course and attack the harder route of full autonomy. As the pioneer of this wave of commercial autonomous driving, Google initially hoped to reach the ultimate L5 goal of RoboTaxi by gradually building up a human-machine co-driving system. But during development and testing it found that as long as the biggest uncontrollable factor in driving decisions, the human, remained, there was no real safety guarantee. As the R&D system matured, test cars were taken over less and less often on the road, to the point that some engineers simply climbed into the back seat... So Google began to rethink the path forward. From the start, Google had invested heavily in autonomous driving in order to use stable, skilled AI driving to raise the overall level of human driving and sharply reduce traffic accidents. Safety was both the starting point and the destination.
But 100% control of the overall situation is impossible. Hand even a relatively mature AI driving system to an irresponsible human driver, and excessive trust plus failure to take over effectively in extreme scenarios will still produce safety accidents. There is, of course, a school of thought that "accidents" are part of debugging the AI system. But promoting autonomous driving that way, where one general's triumph costs many lives, is surely too cruel. So Google, being Google, chose the harder path: go straight to L4. Remove the human, the uncertain factor, from the driver's seat entirely. The final product form is the RoboTaxi, and this direct-to-L4 path is called the Waymo route. It is a route of road testing first, then mass production and commercial use only after a sufficiently safe and mature system is established. Its advantages are obvious: safety is paramount and cost is a lower priority, so the most redundant sensor suite can be used to guarantee safety; after all, the cars are not sold to consumers. The disadvantages are equally obvious: it cannot be rolled out quickly, and the pace of commercialization and scaling is strictly limited. On the Waymo route, all R&D and deployment costs must be borne by the company itself, and large-scale profitability is even more distant. So over the past two years Tesla and Waymo have received polarized treatment in the capital markets: Tesla's market value and share price kept rising while Waymo's valuation was repeatedly marked down. And out of this rise and fall a third route has emerged: the Cruise route. Bluntly, it is a "fusion route": apply Waymo-style "L4 autonomous driving technology" to mass-produced cars once cost is taken into account.
In short, it draws on the power of "data iteration" from Tesla's route on one hand, and hopes to strengthen safety through Waymo's route on the other. Even so, as long as the "human-machine co-driving" model remains, the paradox of human nature and the question of who holds rights and responsibilities will never go away, and safety hazards cannot be completely eliminated. So the question: if autonomous driving at this stage cannot fundamentally solve the safety problem, why develop it at all? Why spend so much money and offer so many favorable policies to support it? Autonomous driving does not equal no accidents, but only autonomous driving can fundamentally reduce accidents. There is an idealistic view: develop autonomous driving to eliminate traffic accidents from the world. That is right, and also not right. Right, because if the ideal were truly reached and every car on the road were self-driving, behaviors like cutting in, uncivilized driving, and traffic violations might disappear. Every car would be civilized and confident on the road, and traffic would flow as orderly as an automated assembly line. Wrong, because this idealized view does not conform to the laws of mathematics and engineering. At bottom, autonomous and machine driving are a computer science problem and, even more, a mathematical one. There will always be subtle variables that prevent the probability of an accident from reaching exactly zero, all the more so given the complex long-tail scenarios and conditions autonomous driving faces. So autonomous driving does not equal no accidents, and no one can guarantee otherwise.
Even so, autonomous driving offers higher safety and is currently the most fundamental way to eliminate human-caused traffic accidents. What is the current state of traffic safety under human driving? According to data from the "9th World Traffic Safety Day" last December, in China, with more than 360 million motor vehicles and 450 million licensed drivers, tens of thousands of people die in traffic accidents every year: on average, one death every 8 minutes. Globally, WHO data released in 2018 puts road traffic deaths at 1.35 million a year, roughly one every 25 seconds. The main cause of these accidents is the biggest uncertainty factor: the human driver. Statistics at home and abroad show the top 10 accident-causing violations to be: failure to yield as required, speeding, unlicensed driving, drunk driving, failure to keep a safe following distance, driving against traffic, violating traffic signals, driving under the influence of alcohol, illegal overtaking, and improper passing of oncoming vehicles. Alongside these are violations such as the "three overs and one fatigue" (speeding, overloading, over-occupancy, and fatigued driving) and running red lights. Replace human drivers with AI drivers, and these hazards are eliminated at the root. An AI driver does not drive fatigued, drunk, or emotional, and exhibits none of the dangerous, uncivilized behaviors of human drivers. It also has an advantage humans cannot "replicate at scale": develop one veteran AI driver, and hundreds of "veteran AI drivers" are ready. A human going from novice to veteran, by contrast, cannot skip the linear passage of time or the hard miles on the road.
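The frequencies quoted above can be sanity-checked with simple arithmetic from the annual death tolls (figures as cited in the text; the global interval works out slightly below the commonly quoted 25 seconds):

```python
# Sanity-check the quoted accident frequencies against the annual totals.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

# WHO (2018): about 1.35 million road deaths per year worldwide.
global_interval_s = SECONDS_PER_YEAR / 1_350_000
print(round(global_interval_s))  # ~23 seconds, close to the cited "one every 25 seconds"

# China: "one death every 8 minutes" implies roughly this many deaths per year.
china_deaths_per_year = SECONDS_PER_YEAR / (8 * 60)
print(round(china_deaths_per_year))  # 65700, consistent with "tens of thousands"
```

The two cited figures are therefore mutually consistent in order of magnitude, which is the point the comparison below relies on.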
So, frankly: although autonomous driving cannot guarantee a world without accidents, it can replace the biggest uncertainty factor in traffic and will certainly make traffic safer. There is another set of figures for comparison. Among mass-produced autonomous driving routes, Tesla has the highest accident exposure; when an accident happens there is almost nowhere to hide. So some curious people have tallied the accidents and casualties involving Tesla and AutoPilot. Their statistics show that since the first Tesla accident surfaced in 2013, deaths in Tesla accidents worldwide have reached 201, of which 9 were related to AutoPilot. The world's only fatal accident caused by an autonomous vehicle on the Waymo-style route was Uber's, during testing. By comparison, 1.35 million people die on the world's roads each year. Autonomous driving is still a nascent industry, far from widespread in absolute numbers, but these two sets of figures and frequencies still say something. So even though autonomous driving does not mean no accidents, only autonomous driving can fundamentally reduce accidents. What's more, ever greater redundancy is now being added for safety: sensor and safety redundancy on the vehicle, redundancy in operations, and the higher-dimensional redundancy brought by China's rapidly advancing roadside infrastructure, vehicle-road collaboration. With these multiple redundant safeguards, once cars truly reach "fully autonomous driving," the uncertainty behind accidents will be pushed ever closer to zero.
Imagine these infrastructures completed: every vehicle on the road is autonomous, with no more mixing of human and machine driving, no more mixing of human-driven and autonomous vehicles. The human driver, road traffic's biggest uncertainty, no longer exists. Just as cars replaced horse-drawn carriages, human driving becomes a recreational activity, like horseback riding; off designated roads it may even be illegal. Autonomous driving fully takes over the urban travel network, which links together electric (new energy) vehicles, shared mobility, the Internet of Vehicles, and intelligence. A car is available at any time without parking and schedules itself to charge when low on power: genuinely efficient, green, safe, low-carbon, and sustainable. The car truly becomes something to use rather than to own, and the private car exits the stage of history. Under such a network, autonomous vehicles can be operated and dispatched in an integrated way, much like today's telecom operators. This is why another tacit industry view holds that behind autonomous driving lies a "standards" dispute like 5G's. Autonomous driving is not a single technology but a system ecosystem, built from scratch and scaled from individual cases to mass deployment. And it is not a system needed in just one region or country, but everywhere in the world. Whoever first possesses a complete solution and the technical capability can become the standard-setter and lead the development of the whole ecosystem.
This is why autonomous driving ranks high on the United States' list of restricted exports of advanced technologies. Yet the technology's development and iteration cannot be separated from large-scale data and rich scenarios, which is the key reason China increasingly shows advantages in deployment: we started on the same line as the rest of the world, and we now have the richest and most challenging road-condition data and scenarios. This is also why development should not be halted out of fear, like giving up eating for fear of choking. Throwing the baby out with the bathwater not only fails to solve the problem but misleads the country and its people, with long-lasting consequences. Fundamentally, autonomous driving is not only an advanced technology; it concerns future-oriented global standards and discourse power in smart transportation and smart cities, and is even a new driver of economic growth and GDP. Still, at the current stage there is room to improve the safety path. What else can be done to develop autonomous driving more safely? Now that the problem has surfaced, it is time to build consensus amid the heated discussion. At the regulatory and policy level, China may already be the most cautious country in the world. Its procedures for issuing qualifications and licenses for autonomous driving and RoboTaxi testing have been far stricter than those of the California DMV. According to China's only official autonomous driving road test report, the "Beijing Autonomous Vehicle Road Test Report," the assessment standards are stringent, meticulous in both the length of public-road testing and the detail of the criteria.
A 64,827-kilometer unmanned test verification was also set up to confirm the feasibility of the test technology and the reliability of the methods and parameters. Earlier, road tests with safety officers on board likewise faced clear requirements across dimensions from license issuance to operation. Under this supervision, Baidu Apollo, which has the most road-test mileage in China, has run RoboTaxi operations in multiple cities with zero accidents across 14 million kilometers of real-road testing. Overseas, Waymo set another industry precedent last year by releasing an accident report disclosing incidents during RoboTaxi operation from early 2019 to September 2020. It shows that of Waymo's 6.1 million miles (9.82 million kilometers), 65,000 miles (about 100,000 kilometers) were fully driverless testing without a safety officer, with 18 accidents in total and another 29 potential accidents averted by safety-officer intervention. These accidents share two characteristics: first, none was serious enough to threaten life; second, none was caused actively; all were passive accidents caused by other, human-driven vehicles. So for RoboTaxi-direction autonomous driving, from supervision to testing, there has been a consistently data-backed impression of safety and reliability. But mass-produced autonomous driving, or more accurately assisted driving requiring human-machine collaboration, long sat in a gray area of wild growth, at one time governed only by moral norms rather than strict regulation. Those days are becoming a thing of the past.
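From the disclosed figures, a rough accident rate can be computed. This is a back-of-the-envelope illustration from the numbers quoted above, not Waymo's own methodology:

```python
# Back-of-the-envelope rates from the Waymo report figures quoted above.
miles_total = 6_100_000       # total RoboTaxi miles, early 2019 to Sept 2020
contact_events = 18           # actual accidents, none life-threatening
averted_by_safety_officer = 29  # potential accidents avoided by takeover

per_million_miles = contact_events / (miles_total / 1_000_000)
print(round(per_million_miles, 2))  # ~2.95 contact events per million miles
```

Note the asymmetry the article emphasizes: all 18 events were passively caused by human-driven vehicles, so this rate measures exposure to other drivers rather than faults of the system itself.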
Domestically, on August 12 the Ministry of Industry and Information Technology issued the "Opinions on Strengthening the Access Management of Intelligent Connected Vehicle Manufacturers and Products," with clear provisions on smart-vehicle data and security. Most notable is the product management of autonomous driving functions, which sets out requirements at different levels before and after use to strengthen automakers' safety responsibilities; it also stipulates that OTA updates to autonomous-driving-related functions must be approved first. Overseas, Tesla AutoPilot-related accidents have come under investigation by US regulators. So even for "mass-produced autonomous driving" at the edge of the track, the era of wild growth is over. Before the NIO accident, the "fully autonomous" and "mass-produced autonomous" routes debated and sparred, each side thinking it could fight its own battle. Who could have imagined that in the crisis of distrust after the accident, public opinion would recognize no "route": everyone is in the same boat. When one prospers, all prosper; when one suffers, all suffer. No one is immune, whatever the L level, whether on the mass-produced or the fully autonomous route, whether automaker, supplier, or systems player. So it comes down to time. It is time to learn from this accident and face the existing crisis of understanding and trust; only when all players unite around a convention can development be safer and more sustainable. At the very least, there are three things to start with. First, standardize industry terminology, unify expressions, and clarify rights and responsibilities.
Whenever humans and machines co-drive and the driver or owner may need to take over urgently, call it assisted driving; otherwise, if an accident occurs, the capability provider bears responsibility regardless of whether "the owner failed to take over in time." The term autonomous driving should be promoted only when no urgent takeover by the driver is needed, or when responsibility clearly rests with the technology provider. Second, when assisted driving is used in a vehicle, multiple safety redundancies must work together. Providers must clearly explain the system's capabilities and limits to owners before use, and strengthen access assessment and supervision of those using the function. They must also prevent unreliable owners from easily bypassing the rules required to enable "assisted driving," and from dangerous behavior such as taking hands off the wheel or not paying attention. Driver monitoring systems (DMS) should become standard for assisted driving, protecting not only the owner but all other road users. Installing cheating devices without authorization should be treated the same as dangerous driving; the sale of such devices should be actively cracked down on, with legislation and supervision pushing major retail platforms to stamp out what amounts to profiting at the cost of lives. Third, regularly and proactively disclose safety mechanisms and data. Safety mechanisms build industry trust, and safety and accident reporting makes everything more open and transparent. Only by shedding the mentality of fearing accidents and hushing things up can the industry develop in a genuinely healthy way.
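The takeover-readiness enforcement argued for above might look, in spirit, like a simple escalation policy. This sketch is entirely hypothetical; the thresholds and action names are invented for illustration and do not reflect any automaker's actual logic:

```python
# Purely illustrative escalation policy for hands-off-wheel detection,
# in the spirit of the DMS (driver monitoring system) requirements above.
def dms_action(seconds_hands_off: float) -> str:
    """Escalate from warning to disengagement as hands-off time grows.
    Thresholds are invented for illustration only."""
    if seconds_hands_off < 10:
        return "ok"
    if seconds_hands_off < 30:
        return "visual_warning"
    if seconds_hands_off < 60:
        return "audible_warning"
    return "slow_down_and_disengage"
```

The design intent matches the article's second proposal: the system should not silently tolerate inattention, and a cheating device that defeats the hands-off signal would bypass every rung of this ladder at once, which is why the article treats such devices as dangerous driving.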
This is also a necessary means of continued public education, helping the public understand more accurately the capabilities, status, and stage of autonomous driving. In short, with accidents and a crisis of trust upon us, the alarm is ringing for every autonomous driving player. If we cannot develop and promote autonomous driving safety and trust in a more codified way, here and now, then when a bigger crisis comes, no one will be able to stand apart. If we keep arguing endlessly over terms like "assisted driving" and "autonomous driving," if every "autonomous driving accident" becomes the focus of public opinion, if industry players feel they must "exaggerate" so as not to seem behind... When the avalanche comes, no snowflake is innocent. It is time.