Previously, news that "bank facial recognition systems had been hacked" raised public concern about the technology's security. How should its technical loopholes and challenges be addressed? An expert explains.

Written by Wang Huishen (intern) | Edited by Ding Lin | New Media Editor: Lv Bingxin

"A fraudster used a 'fake face' to steal hundreds of thousands of yuan from a woman." "Didi was fined 8.026 billion yuan for illegally collecting users' personal information, including 107 million pieces of facial recognition data." Topics related to facial recognition have repeatedly become the focus of public debate. As "face scanning" becomes ever more common, these cases force us to ask: is facial recognition an additional safeguard, or an unnecessary safety hazard?

To answer this question, Beijing Science and Technology News conducted an exclusive interview with Professor Gao Xinbo of Chongqing University of Posts and Telecommunications. As a scholar in the field, Gao Xinbo described in detail how face recognition technology has developed, how current attacks and challenges are being addressed, and where the technology is headed.

How to plug the loopholes in bank facial recognition systems?

Beijing Science and Technology News: The recent news about "fake faces swiping bank cards" has drawn attention to loopholes in facial recognition systems. How do you interpret this incident from a technical perspective?

Gao Xinbo: Face recognition systems currently in use collect images in one of two ways. The first is to upload a two-dimensional image to the system, which compares it in feature space with the face image registered during enrollment. This scheme is relatively simple, but its loophole is that anyone holding a sufficiently good two-dimensional photo of the face can be authenticated. In 2019, for example, several primary school students used printed photos to "crack" the face-scanning pickup function of Fengchao express lockers.

Financial institutions such as banks therefore use the second collection scheme: capturing the face image while performing liveness verification at the same time, to ensure that the person being verified is real and to prevent someone from maliciously reusing a photo. Concretely, while the face is being captured, the user must blink, open their mouth, and perform other actions as instructed.

But liveness detection has a major loophole: the system assumes that the face and the live body belong to the same person. If Li Si wears a mask of Zhang San's face, the recognition system detects Zhang San's face while Li Si performs the actions required by the liveness check; the system then cannot tell that the person in front of the camera is actually Li Si rather than Zhang San. There have also been reports that researchers at Stanford University developed a "face-changing" technique that transplants expressions through software. In the recent cases, criminals used similar "high-tech" methods to attack face recognition systems and steal from bank cards.
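The two schemes Gao describes can be illustrated with a minimal sketch. The embedding model, the similarity threshold, and the action list below are hypothetical placeholders rather than any bank's actual pipeline; the point is only to show why scheme 1 accepts any good 2-D photo of the enrolled face, and why scheme 2 silently assumes that the matched face and the live body performing the actions are the same person.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.6  # hypothetical acceptance threshold


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify_face(probe_embedding: np.ndarray, enrolled_embedding: np.ndarray) -> bool:
    """Scheme 1: compare the submitted image with the enrolled one in feature space.
    Any sufficiently similar 2-D picture passes, which is exactly the loophole
    the printed-photo attack on the express lockers exploited."""
    return cosine_similarity(probe_embedding, enrolled_embedding) >= SIMILARITY_THRESHOLD


def liveness_check(requested_actions: list, observed_actions: list) -> bool:
    """Scheme 2: ask the user to blink, open the mouth, etc., and check that the
    actions were performed. Note the implicit assumption Gao points out: nothing
    here ties the live body doing the actions to the face being matched."""
    return observed_actions == requested_actions


def authenticate(probe_embedding, enrolled_embedding, requested_actions, observed_actions) -> bool:
    """Overall decision: face match AND liveness actions completed."""
    return (verify_face(probe_embedding, enrolled_embedding)
            and liveness_check(requested_actions, observed_actions))
```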
Beijing Science and Technology News: How should we deal with such loopholes?

Gao Xinbo: We can use "visible light + infrared" dual-mode camera recognition, that is, capture the person's appearance with visible light and the heat distribution of the face with thermal infrared. Only when both the optical portrait and the thermal-infrared temperature map are consistent with the enrolled individual is the identity accepted. In this way, even someone who changes their face with software, wears a mask, or merely looks similar cannot impersonate another person.

Furthermore, we can adopt dual-mode or multi-mode verification. In addition to face recognition, we can add verification from other angles, such as iris, fingerprint, finger vein, palm print, or voice print. This at least compensates for the loopholes of any single recognition technology.

Facial recognition technology continues to develop with technological progress

Beijing Science and Technology News: What challenges has facial recognition technology encountered in its development? What stage has the technology reached now?

Gao Xinbo: Face recognition has developed rapidly in recent years. The earliest successful case of face recognition based on image recognition came from the MIT Media Lab in the United States, where the scholar Alex Pentland proposed the "Eigenface" method. Put simply, it recognizes faces by decomposing face images into their principal components, but the technique is sensitive to lighting changes, so it was not put into practical use at the time.

The academic community later proposed many new methods, but the basic idea remained the same: extract features from the image, then design a classifier for recognition. Feature extraction has advanced quickly in recent years. Today, face recognition is mostly done with deep learning, which combines feature extraction and classifier design into a single "end-to-end" recognition method. With the support of big data and deep learning, face recognition now has high accuracy and good robustness (the ability to tolerate variation), and has moved into practical application. In the early days of face recognition research, a method that was one percentage point more accurate than its competitors could be published as a paper; such a gain is no longer possible, because accuracy already exceeds 99%.

In China's development of artificial intelligence, one of the most important and successful fields is pattern recognition. In image recognition in particular, China's face recognition technology is in a leading position internationally.
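The Eigenface idea Gao mentions can be sketched in a few lines of NumPy. This is an illustrative reconstruction of the classic method, not the interviewee's code: faces are flattened into pixel vectors, the principal components of the training set become the "eigenfaces", and a probe face is identified by nearest-neighbour matching in that low-dimensional space. Because a lighting change shifts the whole pixel vector, the projection shifts too, which is one reason the early method was sensitive to illumination.

```python
import numpy as np


def fit_eigenfaces(train_images: np.ndarray, n_components: int):
    """train_images: (n_samples, height*width) flattened grayscale faces.
    Returns the mean face and the top principal directions ("eigenfaces")."""
    mean_face = train_images.mean(axis=0)
    centered = train_images - mean_face
    # SVD of the centered data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, vt[:n_components]


def project(images: np.ndarray, mean_face: np.ndarray, eigenfaces: np.ndarray) -> np.ndarray:
    """Represent each face by its coordinates in eigenface space."""
    return (images - mean_face) @ eigenfaces.T


def identify(probe: np.ndarray, gallery_features: np.ndarray, gallery_labels,
             mean_face: np.ndarray, eigenfaces: np.ndarray):
    """Nearest-neighbour matching in the low-dimensional feature space."""
    probe_feat = project(probe[None, :], mean_face, eigenfaces)
    distances = np.linalg.norm(gallery_features - probe_feat, axis=1)
    return gallery_labels[int(np.argmin(distances))]
```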
Beijing Science and Technology News: We have heard that some people with very similar faces, such as twins, can pass each other's facial recognition. Is this problem unsolvable for today's systems?

Gao Xinbo: In theory, very similar faces make identification harder and lower the accuracy, but the problem is not unsolvable. In real life, people who look very alike can still be told apart by their parents or close acquaintances, which shows that the faces do differ, and artificial intelligence can certainly learn to distinguish them. What we need to do is strengthen the training of the system, for example by collecting a large amount of twins' face data as training samples, so that the neural network learns specifically to separate them and its discriminative power increases.

However, many organizations do not consider it worthwhile to invest in technology for such a small group of people. Training deep learning models consumes a great deal of energy: large models have many network layers and nodes, and each training run costs a lot of electricity. So identifying people with similar faces is not technically impossible; it is simply that, weighing factors such as cost, it is often more practical to solve the problem with multiple verification methods instead.

There is a risk of information leakage, and facial recognition technology begins to focus on privacy protection

Beijing Science and Technology News: News of facial information being leaked has made the public worry about personal privacy. How can this privacy be better protected?

Gao Xinbo: This is a very important topic, and big data privacy protection is a research direction we are constantly thinking about. We are currently working on a video surveillance system that protects personal privacy, that is, one that achieves "anonymization" in video surveillance.

There are two anonymization schemes. In the first, the system deforms the face so that the processed face is unrecognizable to humans but can still be recognized by machines: person A stands in front of the camera, but the computer records a non-existent person Z. By "fabricating" anonymous faces in this way, privacy is protected. The second is "face swapping": a face that differs from any existing face is generated in the database to stand in for the person, and the original face is embedded in it by watermarking or similar means. When a person needs to be traced, the authorized technology can recover who they are; ordinary systems do not have this capability, and what they see is not the person themselves.

Our idea is to add a chip or module to the surveillance camera that performs the anonymization, so that the photos and videos leaving the lens no longer show the real face; at the same time, only public authorities such as the police are authorized to trace the information. In this way our privacy is protected.

Beijing Science and Technology News: The "anonymization" you just mentioned makes a face unrecognizable to humans while machines can still identify it. Is there, conversely, a way to make a face recognizable to humans but not to machines?

Gao Xinbo: Yes. There is a newer line of work called "adversarial learning", in which special patches are stuck on the face to interfere with machine recognition; these patches are called adversarial samples. If in the future we want to keep personal privacy from being over-collected, we may use such patches: once they are on our faces, computers and surveillance systems can no longer recognize us. People are working on this technology now, and the patches can be made fairly small, so that other people can still recognize us in daily life while only computers cannot.

Why does adding something to a face defeat the computer? Because AI has poor "interpretability". We know that these systems can recognize things, but it is hard to explain how they do it, because what they "deeply learn" are features designed by the neural network itself, which humans cannot interpret. Scientists found that adding certain things to a picture can make it very hard for a computer to recognize, and this property can be exploited to broaden the use of adversarial samples.

In fact, there are still many areas worth studying in face recognition, but protecting privacy is the most critical issue at present. In the final analysis, we hope that science and technology will be used to benefit mankind, not to cause trouble.
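To make the adversarial-sample idea concrete, here is a minimal, hypothetical sketch. The gradient passed in, the step size epsilon, and the patch pattern are all assumed inputs: in practice the patch is optimized against the target recognizer, an optimization loop this sketch omits. It only shows the two ways the interference is applied: an imperceptible whole-image perturbation, and a small visible patch closer to the sticker Gao describes.

```python
import numpy as np


def fgsm_perturbation(image: np.ndarray,
                      grad_of_loss_wrt_image: np.ndarray,
                      epsilon: float = 0.03) -> np.ndarray:
    """Fast-gradient-sign-style adversarial example: nudge every pixel a tiny
    step in the direction that increases the recognizer's loss. The change is
    barely visible to a person but can push the image across the model's
    decision boundary. Pixel values are assumed to lie in [0, 1]."""
    adversarial = image + epsilon * np.sign(grad_of_loss_wrt_image)
    return np.clip(adversarial, 0.0, 1.0)


def apply_patch(image: np.ndarray, patch: np.ndarray, top: int, left: int) -> np.ndarray:
    """Adversarial patch variant: paste a small pre-optimized pattern onto the
    face region instead of perturbing every pixel. Humans still recognize the
    face; the recognizer, trained on uninterpretable learned features, may not."""
    out = image.copy()
    h, w = patch.shape[:2]
    out[top:top + h, left:left + w] = patch
    return out
```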
Beijing Science and Technology News: As you said, facial recognition technology has reached a high level. Yet criminals keep devising new ways to crack it, and facial recognition can seem to have become a security loophole and hidden danger in itself. How do you view this?

Gao Xinbo: This is also a very important question. Science and technology are like a double-edged sword: on the one hand they bring convenience, and on the other they create hidden dangers. Technology itself has no values. But we humans have values; we can tell right from wrong and black from white, so the value of a technology depends on its users. If the users hold the right values, the technology brings positive effects; otherwise, it brings negative ones. At the same time, the contest between the two sides also pushes the technology forward, advancing as problems are continually solved.

Since the development of technology always lags behind the problems, at this stage we must prevent the problems from arising as far as possible. This leads to two routes. One is social education: we must give users the correct value orientation and guidance. The other is the restraining force of institutions and regulations: besides establishing rules, constraining and severely punishing lawbreakers, the law should also impose strong legal obligations on the relevant entities and press them to protect people's biometric information. In short, we must both "convince people with virtue" and "govern the country according to law."

Produced by: Science Central Kitchen | Beijing Science and Technology News | Beijing Science and Technology Media