If you asked what has scared netizens most in the past few months, the answer would be a string of virus attacks. In July, the CopyCat virus infected 14 million Android phones; in June, the Petya virus spread to more than 60 countries; in May, the WannaCry virus swept the globe, hitting at least 150 countries. Yet in the "Top 10 Viruses in the First Half of 2017" selected by security vendor Rising on the basis of infection counts, number of variants, and representativeness, the terrifying WannaCry ranked only ninth.

Rising's "China Cybersecurity Report for the First Half of 2017" shows that from January to June 2017, its "Cloud Security" system intercepted 31.32 million virus samples and recorded 2.34 billion virus infections, an overall increase of 35.47% over the same period in 2016. The growing number of viruses has brought unprecedented attention to cybersecurity, and AI-driven cybersecurity companies have become favorites of investors: in June alone, at least seven companies applying AI to cybersecurity closed a new round of financing, for a combined total of nearly US$500 million.

Network security faces severe challenges

"In 2016, the number of Internet users worldwide reached 3.5 billion, about half of the world's population. By 2020, the number of terminal devices connected to the Internet is expected to reach 12 billion." That is data from the Global Cybersecurity Index released by the International Telecommunication Union in July 2017. With smart devices in widespread use, the large-scale spread of the Internet of Things will inevitably hand attackers a large number of new opportunities, and the boundary between work and life is becoming increasingly blurred.
Once a networked device is compromised, everything from financial information such as bank accounts to personal information such as health records may leak. In the Internet age, the compromise of one device can spread to others almost instantly, and there are precedents. In October 2016, malware called Mirai attacked large numbers of vulnerable IoT devices such as smart cameras, smart gateways, and smart home appliances, turning them into "zombie" devices in the network the moment they were infected. In the field of industrial control, the 2010 Stuxnet worm attacked Siemens' supervisory control and data acquisition (SCADA) systems and spread through USB flash drives and local area networks.

With everything connected, the boundary between intranet and extranet is gradually blurring, and the generalization of networks has become a major trend. Tesla's cars, for example, can join WiFi networks in many settings and can also connect over 3G/4G. In future traffic, driverless cars will also communicate with traffic lights, traffic stations, and even other cars, which means more potential points of attack.

"Once connected to the network, many traditional attack methods can hit self-driving cars just as they attack computers. The WannaCry virus could invade cars too, and that would cause even greater problems," said Xiao Huang, head of the cognitive information security research group at Germany's Fraunhofer Institute for Applied and Integrated Security, in an interview with Machine Power.

All this shows that network security faces severe tests both now and in the future. And as artificial intelligence reaches into one vertical field after another, the new challenges in network security give AI an important opportunity to show its strength. Giants have already emerged in this nascent field.
Cylance, a company that uses artificial intelligence to predict cyber attacks, is a unicorn valued at more than US$1 billion. Its AI anti-virus product CylancePROTECT can predict threats before they occur. Last year the company demonstrated a technology that protects computers from attack without an Internet connection, using only 60 MB of memory and 1% of the CPU.

Artificial Intelligence in Cybersecurity: Anomaly Detection and Improving Efficiency

In network security, threat identification was not achieved overnight; it evolved step by step. Tong Ning, deputy director of the AsiaInfo Network Security Industry Technology Research Institute, explained at the C3 Security Summit in early July that security vendors began with blacklist and whitelist technology, characterizing a target as simply good or bad and using that one-dimensional feature to identify threats. Next came two-dimensional features such as string matching: if a request contained a certain kind of data, it was judged illegal. After that came multi-dimensional features: to decide whether a program is good or bad, run it first, monitor its execution, and build multi-dimensional features from the information gathered at runtime. The fatal weakness of multi-dimensional feature technology, however, is that it is too expensive and too inefficient to meet customers' requirements.

After 2000, with the growth of the mobile Internet, the flood of logs generated by devices drove great progress in log management and analysis, and machine learning algorithms, including correlation analysis, came into wide use. Tong Ning said that supervised learning is an efficient way to discover multi-dimensional features and is well suited to preventing malicious programs, ransomware, and spam.
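The progression Tong Ning describes, from a one-dimensional good/bad verdict to a weighted score over many runtime features, can be illustrated with a minimal sketch. Everything here is invented for illustration (the feature names, weights, and threshold are not from any vendor's product):

```python
# One-dimensional identification: a name is either on the blacklist or not.
BLACKLIST = {"evil.exe", "dropper.bin"}

def blacklist_verdict(filename):
    return "malicious" if filename in BLACKLIST else "unknown"

# Multi-dimensional identification: run the program, observe its behavior,
# and score the observed features. Weights are hypothetical values a
# supervised model might learn from labeled samples.
WEIGHTS = {
    "writes_to_system_dir": 2.0,
    "opens_network_socket": 1.0,
    "encrypts_user_files": 3.0,
    "signed_binary": -2.5,   # a valid signature lowers suspicion
}

def feature_score(features):
    """Sum the weights of the runtime features we observed."""
    return sum(WEIGHTS[f] for f in features if f in WEIGHTS)

def multidim_verdict(features, threshold=2.5):
    return "malicious" if feature_score(features) > threshold else "benign"

print(blacklist_verdict("evil.exe"))                                      # malicious
print(multidim_verdict({"encrypts_user_files", "opens_network_socket"}))  # malicious
print(multidim_verdict({"opens_network_socket", "signed_binary"}))        # benign
```

The multi-dimensional approach catches behavior the blacklist has never seen, which is exactly its advantage; its cost, as the text notes, is that the sample must actually be run and monitored.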
However, supervised learning faces challenges of its own. First, model freshness: threats change every day, and a model that is not retrained every day cannot recognize the newest threats. Second, model precision: learning well is one thing, but accuracy in real use is another. Third, model recall: how many threats are missed, how many go uncaught. Supervised learning is therefore no panacea. Anti-fraud, situational awareness, and user behavior analysis, for example, are better suited to unsupervised learning. Yet unsupervised learning brings its own difficulty: it generally runs inside the customer's network environment, which exposes it to poisoning attacks.

"The strength of machine learning is its multi-dimensional recognition capability. But however powerful it is, machine learning must be combined with other means to achieve better results," said Tong Ning.

Xiao Huang likewise pointed out that in many network security scenarios, machine learning cannot reach the false-positive standard of 0.000001 that practitioners require. From this perspective, artificial intelligence is only an auxiliary means and needs to be combined with traditional ones. But Xiao Huang believes AI brings another advantage to network security: analysis efficiency. AI's typical role is to take over large volumes of repetitive human work, just as AI image analysis frees radiologists from inefficient, repetitive reading. The same is true in the cybersecurity industry.
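The two model-quality measures named above, precision (how many alerts were real threats) and recall (how many real threats were caught), have standard definitions. A minimal sketch with invented labels:

```python
def precision_recall(y_true, y_pred):
    """y_true / y_pred: lists of 1 (threat) and 0 (benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false alarms
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # missed threats
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 4 real threats; the model flags 3 of them plus 1 false alarm.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

The 0.000001 false-positive standard Xiao Huang mentions shows why precision alone is not enough at security scale: with billions of benign events a day, even a tiny false-positive rate buries analysts in alerts.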
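The "repetitive work" AI takes over in security is often exactly this kind of triage: a first automated pass over a large volume of logs so that analysts only look at what stands out. A minimal sketch, with the log format, event name, and threshold all invented for illustration:

```python
from collections import Counter

def triage(log_lines, threshold=3):
    """Flag source IPs with an unusual number of failed logins.

    Assumed log format: "<ip> <event>". A real pipeline would parse
    structured logs and use far richer features than a single count.
    """
    failures = Counter(
        line.split()[0]
        for line in log_lines
        if "LOGIN_FAIL" in line
    )
    return sorted(ip for ip, n in failures.items() if n >= threshold)

logs = (
    ["10.0.0.5 LOGIN_FAIL"] * 4      # repeated failures: suspicious
    + ["10.0.0.7 LOGIN_OK"] * 2      # normal activity
    + ["10.0.0.9 LOGIN_FAIL"]        # a single failure: below threshold
)
print(triage(logs))  # ['10.0.0.5']
```

Even a pass this crude shrinks millions of lines to a short list a human can actually review, which is the efficiency argument the article develops next.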
Data show that China currently needs more than 700,000 cybersecurity professionals, yet only 20,000 to 30,000 new ones enter the field each year, a shortfall of up to 95%. And the number of vulnerabilities an analyst can examine in a day is very limited. "If we don't adopt automated means, when the number of connected IoT devices explodes in the future, it will be impossible to rely on humans alone to analyze the flood of information security risks," Xiao Huang said. An information security analyst can read only one or two thousand log entries, or one or two hundred code snippets, in a day; for artificial intelligence, millions of records take only a few minutes.

In Xiao Huang's observation, information security and artificial intelligence are different fields with somewhat different habits of mind: the former leans toward systems engineering, the latter toward mathematical thinking. Many of his colleagues therefore believe AI can solve only limited problems and prefer traditional methods, though they too are thinking about automating analysis. "I believe anyone working in information security must move in this direction." Xiao Huang hopes to use increasingly mature automation to improve performance in vertical fields: the efficiency, timeliness, scale, and explainability of analysis.

Attack and Defense in the Age of Artificial Intelligence

Cybersecurity is a world in which higher standards breed higher risks. When defenders use artificial intelligence to block hacker attacks, hackers in turn use artificial intelligence to launch more complex ones. And with a large number of AI models now open source, the intrusion tools available to hackers are growing ever more diverse.
Xiao Huang said that with only a little study, hackers can use open-source tools to deceive recognition systems; the falling technical bar will tempt more people to become hackers or to mount attacks that were previously out of reach. This is no idle worry: in phishing campaigns, hackers have already imitated human writing habits and content, making it harder for companies and individuals to recognize that they are under attack. Xiao Huang believes virus variants will keep multiplying, becoming harder to detect, larger in scale, and faster to generate.

In February 2017, OpenAI published research pointing to another major concern in AI security: adversarial examples. In image recognition, an attacker feeds adversarial examples into a machine learning model to create a kind of visual hallucination and make the system misjudge. The paper "Explaining and Harnessing Adversarial Examples" gives an example: a picture of a panda, after carefully designed tiny noise is added, is identified by the system as a gibbon.

[Figure: an adversarial input superimposed on a typical image fools the classifier into mistaking a panda for a gibbon.]

Xiao Huang has studied adversarial machine learning for years, working to overcome the flaws in machine learning algorithms. Machine learning and deep learning algorithms that depend on data, he analyzed, have a major weakness, and generative adversarial networks exploit exactly that weakness in the architecture they use to train generative models. "Because current machine learning relies heavily on the data distribution, if that distribution is itself complex, or is made complex artificially, and hackers have the means to generate malicious samples, the result is failure to recognize, or incorrect recognition," Xiao Huang further explained.
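The panda-to-gibbon attack in that paper uses the fast gradient sign method (FGSM): nudge every input dimension a tiny step against the model, in the direction given by the sign of the gradient. A minimal sketch on a linear classifier, where the gradient of the score with respect to the input is simply the weight vector (the weights, inputs, and epsilon below are invented; real attacks target deep networks the same way):

```python
def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def classify(w, x):
    """Linear model: positive score means class 'panda', negative 'gibbon'."""
    score = sum(wi * xi for wi, xi in zip(w, x))
    return "panda" if score > 0 else "gibbon"

def fgsm(w, x, eps):
    """FGSM for this linear model: the gradient of the score w.r.t. x is w,
    so stepping each coordinate by -eps * sign(w) lowers the score as much
    as possible for a given per-coordinate perturbation budget eps."""
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w = [0.5, -0.25, 1.0]
x = [0.4, 0.2, 0.1]          # score = 0.2 - 0.05 + 0.1 = 0.25 -> "panda"
x_adv = fgsm(w, x, eps=0.2)  # each coordinate moves by at most 0.2

print(classify(w, x), classify(w, x_adv))  # panda gibbon
```

The perturbation is bounded per coordinate, which is why the adversarial image still looks like a panda to a human while the classifier's score has flipped sign.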
Xiao Huang said that if such interference were used in the field of autonomous driving, the consequences would be disastrous. On Germany's A9 highway, for instance, where autonomous driving is tested, special signs guide autonomous vehicles; if roadside signs were maliciously altered to mislead the vehicles that rely on them, extremely dangerous situations would follow.

Because of these flaws in the algorithms themselves, Xiao Huang believes that once artificial intelligence is used at scale, network security must change its thinking and design new methods. He proposes the following paths:

First, increase the interpretability of the analysis side. If a virus threat has already broken in, Xiao Huang analyzed, machine learning detection alone is hard-pressed to resolve it. The hope is that after an information security breach, statistical methods can reveal the relationships among the attackers, how they invaded the system, what the attack path was, and which link failed. Finding these relationships, or analyzing from the perspective of causal diagrams, increases the interpretability of the analysis.

Second, today's machine learning models are too complex and require too much data, which forces a trade-off. Xiao Huang believes there are many ways to reduce a model's complexity, such as introducing prior knowledge to guide it toward one direction of learning; this lowers the model's complexity and reduces the amount of data it needs.

Third, the sharing of information security intelligence is also very important. If one model has a defect, for example, extract that defect and efficiently compile it into another model so that the second model avoids it. Xiao Huang sees this as similar to transfer learning, except that transfer learning transfers the intermediate results of learning.
In fact, the anomalies learned along the way can also be transferred, thereby increasing the security of the algorithm.

Statement: This article was originally produced by Machine Power (WeChat public account: almosthuman2017).