Who will protect the security of facial recognition?

The first public crusade against facial recognition in China started with a lawsuit.

On October 28, 2019, a university professor took a park that used a facial recognition entry system to court, on the grounds of protecting his privacy.

The reason is simple:

A wildlife park in Hangzhou had replaced the fingerprint authentication system for its annual pass with facial recognition, and the change angered Guo Bing, a distinguished associate professor at Zhejiang Sci-Tech University who already held an annual pass. He gave a very pointed reason:

Why do you assume that everyone agrees to hand their facial information over to you, without ever asking for my consent?

The park's official response was also telling, because they were genuinely puzzled: the moment "face recognition" came up, everyone cried that their private data was being stolen, yet nobody had objected to handing over fingerprints, phone numbers, and ID card information.

In fact, this question puzzles not only the park but also the editors at Leifeng.com: why are people so much more worried about facial recognition?

Facial recognition has reached every corner of society, but regulation has yet to catch up

First of all, we admit that this is a genuinely useful technology, both advanced and convenient for the public, but that does not mean it brings only benefits and no harm. Every so often, face recognition re-enters the public eye in a negative light and becomes the talk of the town: the recent uproar over the face-swapping app "ZAO", reports of 3,000 face photos being sold online for 8 yuan, and similar incidents keep testing the public's bottom line.

For the past several thousand years of acquaintance-based society, the face has been the foundation on which people build relationships. We rely on a face to form family ties, friendships, and romantic bonds, and we rely on a face to establish trust and cooperation. Today the face still serves those functions, but it has taken on a new meaning: once it is tied to our bank accounts and identity records, a face is no longer a face in the traditional sense. It carries enormous economic and informational value in itself, which is why we must treat personal biometric information such as faces and fingerprints with caution.

Secondly, once the AI boom swept into the narrow track of facial recognition, outside giants such as BAT were the first to disrupt the market with capital and brand power; the industry's veterans, not to be outdone, rose up one after another, and a war over deploying facial recognition began.

Many enterprises and institutions are actively embracing the new technology. Communities, schools, companies, scenic spots and other public places have adopted face recognition, fingerprint recognition and the like, which simplify management and improve efficiency. But these organizations overlook a more important question: once they hold large amounts of personal information, can they guarantee that it will not be leaked or misused?

Almost everyone agrees that when a technology causes this much worry, regulators must act. Laws and regulations governing personal biometric information should be introduced as soon as possible, and enterprises already using such technologies should be audited: those that should be banned, banned; those that should be rectified, rectified. Supervision and prevention should also be strengthened, for example by setting access rules, standardizing industry practices, and regulating the use of these technologies on a legal basis. Only then can the abuse of personal biometric information be prevented and the public truly protected.

But for now, the situation is not optimistic.

On the one hand, policy-level privacy protections are not yet strong enough to deter privacy leaks involving facial recognition.

In June this year, the National Information Security Standardization Technical Committee issued the draft standard "Information Security Technology - Biometric Information Protection Requirements (Draft for Comments)", which defines biometric data as "biometric samples, biometric features, biometric models, biological properties, original description data of biometric features, or aggregations of the above data", and lists the face as one of the physiological features that can identify an individual. However, China has not yet clarified whether photos carrying facial images count as sensitive personal information.

Under the General Data Protection Regulation (GDPR), the European Union's data protection law, facial images constitute "biometric data", a special category of personal data subject to stricter protection than ordinary personal data. By its definition, "biometric data" means personal data about the physical, physiological or behavioural characteristics of a natural person, obtained through specific technical processing, which allows or confirms the unique identification of that person. The GDPR further notes that processing photographs does not automatically count as processing biometric data; photographs constitute biometric data only when they are processed by specific technical means that make it possible to identify or confirm a specific natural person.

So, in general, photos containing facial images do not automatically constitute biometric or sensitive personal information subject to a higher level of protection; the stricter compliance requirements apply only once the photos are processed with specific technologies that give them personally identifying attributes.

On the other hand, companies do not pay enough attention to privacy protection, and the public's own awareness of privacy has been slow to awaken.

Most of the face photos sold on the market are used to train face recognition models, and the buyers are the technology companies that develop the technology. Face recognition is one of the most active application areas of artificial intelligence, and the larger the dataset, the more it helps in training more accurate recognition models.

But the larger the dataset, the harder it is to protect privacy. Moreover, as facial recognition applications enter a period of explosive growth, the market is full of products that hang up a sheep's head but sell dog meat, and a sizable industry bubble has formed. Take the common "attendance machine" as an example: prices range from a few hundred yuan to tens of thousands. Some popular "thousand-yuan machines" will declare two faces to be the same person as long as the captured face data matches the stored template by more than 60%. The market is a mixed bag, many facial recognition products waving the "artificial intelligence" banner are hard to tell genuine from fake, and their makers are simply riding the wave.
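To make that "60% match" figure concrete, here is a purely illustrative Python sketch, not any vendor's actual algorithm; it assumes, as many systems do, that faces are compared as embedding vectors under a cosine-similarity threshold, so a loose 0.6 threshold admits far more look-alikes than a stricter one.

```python
# Illustrative only: treating the article's "60% match" as a cosine-similarity
# threshold on face embeddings. Real products differ in model and metric.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(probe: np.ndarray, enrolled: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Declare a match when similarity reaches the device's threshold.

    A loose threshold (e.g. 0.6) accepts more impostors (higher false-accept
    rate); a strict one (e.g. 0.85) rejects more of them, at the cost of
    occasionally rejecting the genuine user.
    """
    return cosine_similarity(probe, enrolled) >= threshold

# Toy demonstration with random 128-dimensional "embeddings".
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                        # the enrolled template
impostor = enrolled + rng.normal(scale=1.0, size=128)  # a somewhat similar face
print(is_same_person(impostor, enrolled, threshold=0.6))   # may pass on a loose device
print(is_same_person(impostor, enrolled, threshold=0.85))  # rejected by a stricter one
```

The design trade-off is exactly the one cheap devices get wrong: lowering the threshold makes the product feel "smooth" in demos while quietly raising the false-accept rate.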

Liu Jingjing, co-founder of Digital Alliance, noted: "Generally speaking, listed companies have a complete security team, but some fast-growing companies still have a weak awareness of security and risk control. Their risk-control team may consist of only one or two people, and they may even lack complete security mechanisms for devices, accounts, and face recognition. An obvious trend is that whether they are Internet giants or artificial intelligence unicorns, companies are keen to expand their business but invest very little in data security. This is one of the main reasons for today's information leaks: once someone goes astray, our information is easily exposed."

Regulation is not yet in place, companies do not take privacy protection seriously enough, and personal privacy awareness is only just awakening. In this situation, the best option people can think of is self-rescue, by turning the tables: if you have ways to "steal" my information, then I have ways to "cheat" your system.

Little do they know that this only makes the public even more worried.

Beware of technical vulnerabilities. Security is a long-term battle.

In 2018, a Japanese company called REAL-f produced a 3D mask. According to Reuters, the company's founder, Osamu Kitagawa, spent two years developing it: made of resin and plastic and built from high-quality facial scans, the mask reproduces even tiny wrinkles and blood vessels in the skin.

The company reportedly receives about 100 orders a year, mainly for testing products with facial recognition functions. A Japanese car company, for example, ordered masks to test a system that detects when drivers nod off. The ultra-realistic masks are also valuable to the consumer electronics industry, for instance for testing the security of smartphone face unlock.

Coincidentally, on December 12, 2019, Fortune reported that Kneron, an artificial intelligence company based in San Diego, had used high-definition 3D masks and photos to fool facial recognition systems in many parts of the world, including China's WeChat and Alipay payment systems. The team also claimed it had used the same method to pass through the gates of a Chinese train station.


Both Alipay and WeChat responded promptly. Alipay stated that no such case had occurred in China; it had tried to learn more, but the other party had already taken down the relevant videos and reports, so little information was available.

WeChat likewise said that no account had been breached in this way, and that it uses multiple technologies to defend against spoofing with masks, videos, photos and the like. Both companies stated that if a fraudulent facial recognition payment did occur, users could claim full compensation.

That is reassuring: if fraud does happen, there is at least a remedy. Still, no one wants their money to become prey in someone else's eyes.

Netizens joked about it in turn: whether or not the AI was actually fooled, it feels as if our "faces" are already everywhere.

Besides 3D masks, people have also found a bug in facial recognition itself: stick a specially crafted "talisman", an adversarial patch, on your face and the recognition system can no longer capture your identity. The same patch can often be transferred to other AI models as well, so the cat-and-mouse game never ends.
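For readers curious about what such a "talisman" is, here is a minimal conceptual sketch under the assumption of a differentiable PyTorch embedding model; the tiny network below is a toy stand-in, not any production system, and the loop simply optimizes a patch by gradient ascent to push the face's embedding away from its enrolled template.

```python
# Conceptual sketch only: a toy adversarial-patch loop. The small CNN is a
# stand-in for a real face-embedding network; real attacks add printability
# and robustness constraints that are omitted here.
import torch
import torch.nn as nn
import torch.nn.functional as F

embedder = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 27 * 27, 128),
)

face = torch.rand(1, 3, 112, 112)      # the attacker's own face image
template = embedder(face).detach()     # embedding the system has enrolled

patch = torch.zeros(1, 3, 30, 30, requires_grad=True)   # the "talisman"
optimizer = torch.optim.Adam([patch], lr=0.05)

for _ in range(100):
    patched = face.clone()
    patched[:, :, 40:70, 40:70] = torch.sigmoid(patch)  # paste patch onto the face
    # Push the patched face's embedding away from the enrolled template
    # (gradient ascent on the distance, i.e. minimize its negative).
    loss = -F.mse_loss(embedder(patched), template)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A patch crafted against one model's gradients can sometimes transfer to similar models, which is part of why vendors pair the recognition model itself with separate liveness and anti-spoofing checks.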

Of course, the cracking of facial recognition payment should not become a source of anxiety for ordinary users, nor a reason to dismiss the technology's development. But incidents like these do serve as a reminder to developers to keep improving facial recognition. In this contest with anti-AI scams, there may be no ultimate winner; the best outcome is for the technology itself to keep dispelling the public's panic and concerns. So how can existing technical means protect the public's privacy from being violated? Leifeng.com interviewed several industry experts about how technology can ease the public's worries.

How can technology alleviate public concerns?

In the absence of performance standards such as required accuracy, how can user credentials be protected from theft, and how can problems such as forged identities be solved?

Du Xu, vice president of Xindun Times (Core Shield Times), believes: "Verification accuracy has to rely on multi-factor authentication; the face is only one dimension of verification. From the standpoint of identity verification, at least two dimensions are needed to establish a person's identity with confidence. These dimensions are what the individual knows, what they hold, what they inherently are, and behavioral information about them. What is 'known' covers knowledge that grows with age and life experience; the verification methods involved include sliding CAPTCHAs, image CAPTCHAs, challenge-response questions and mathematical puzzles, as well as account passwords and SMS codes. What is 'held' covers physical devices such as mobile phones, USB security keys (U shields), Bluetooth tokens and access cards. What one inherently 'is' refers to innate traits such as fingerprints, voiceprints, faces and irises. Behavioral information covers usual locations, usual times, usual behaviors and so on.

It is worth noting that two-factor authentication within the same dimension is not safe, for example face plus fingerprint, or account password plus SMS verification code. Only combining factors across dimensions genuinely improves the security of verification."
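As a rough illustration of the cross-dimensional rule Du Xu describes, here is a minimal Python sketch; the factor names and their mapping to dimensions are hypothetical examples drawn from the quote, not any vendor's product.

```python
# Minimal sketch of the cross-dimensional two-factor rule described above.
# Factor names and dimension labels are illustrative only.
FACTOR_DIMENSIONS = {
    "account_password": "knowledge",
    "sms_code": "knowledge",
    "u_shield": "possession",
    "phone_device": "possession",
    "door_card": "possession",
    "fingerprint": "inherence",
    "face": "inherence",
    "usual_location": "behavior",
}

def is_cross_dimensional(presented_factors):
    """True only if the presented factors span at least two distinct dimensions."""
    dimensions = {FACTOR_DIMENSIONS[f] for f in presented_factors}
    return len(dimensions) >= 2

print(is_cross_dimensional(["face", "fingerprint"]))           # False: both inherence
print(is_cross_dimensional(["account_password", "sms_code"]))  # False: both knowledge
print(is_cross_dimensional(["face", "sms_code"]))              # True: inherence + knowledge
```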

Jiang Wuming, director of the EyeSight AI Research Institute, expressed a similar view. In his opinion, the development of the technology has not only maximized the recognizability of the human face but also keeps uncovering the value behind it. Through face recognition, facial information can be used not only to accurately identify "who I am", but also to verify "whether I am me"; combined with big data, it can even infer and predict "what kind of person I am" without knowing "who I am" at all.
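To pin down the distinction Jiang draws between "who I am" and "whether I am me", here is a small self-contained sketch, again over toy embeddings rather than any real system: verification is a 1:1 comparison against one claimed identity, while identification is a 1:N search over everyone enrolled.

```python
# Toy sketch of 1:1 verification versus 1:N identification over face embeddings.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.8):
    """1:1 -- 'whether I am me': compare against a single claimed identity."""
    return cosine(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    """1:N -- 'who I am': return the best match among all enrolled identities."""
    best_id, best_score = None, threshold
    for person_id, enrolled in gallery.items():
        score = cosine(probe, enrolled)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```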

From a technical perspective, then, the use of facial information carries higher security risks. Compared with other kinds of personal information, facial recognition can operate at a distance and relatively covertly; facial images can easily be collected without the subject's knowledge, and the potential for abuse is greater. At the same time, no single machine-learning-based authentication factor can ever be truly unique in the way a password is. Given this, the usual approach is to choose different combinations of authentication methods according to the required security level, balancing convenience against security to reach a reasonable solution, for example by raising the security level through multimodal combined authentication and building a complete biometric security system. Seen that way, the problem is still solvable.

Beyond using technical means to dispel public panic, it is even more important to help the public genuinely understand the technology. When a new technology emerges, most attention goes to how it will make our lives smarter; few people say clearly whether it is safe.

On this point, Du Xu, vice president of Xindun Times, told Leifeng.com: "Technology itself is neither good nor bad; technology is neutral, but the people who use it take a stance. The security problem of facial recognition does not lie in the technology itself but in how it is applied. The danger lies in using the technology to reach goals that should not be reached and to carry out actions that should not be carried out.

Facial recognition technology is, after all, an image recognition algorithm applied to people; what it pursues is ever more accurate recognition, not control over the environment in which recognition happens. The current frauds are deceptions mounted after studying the algorithm. They are in effect targeted attacks, and should not be seen as something the face technology itself has to guard against: these attacks will disappear as the technology is updated, and of course new attacks will appear, but in theory they are all improvements to, and attacks on, the algorithm. Security and the algorithm should therefore be treated as two separate dimensions; what matters is how users apply them."

A security practitioner also told Leifeng.com that "in fact, privacy infringement mainly refers to problems in the collection, storage, use and sharing of sensitive data in online services and transaction systems. That has little to do with face recognition or other biometric technologies as such; it is an application-system problem. Beyond higher accuracy and anti-counterfeiting, there is no essential difference between face recognition and fingerprints. You can think of it as an upgraded version of a key."

"If it is not linked to other 'information', it will not cause much harm in itself, but if you can also obtain the person's ID card, phone number, specific consumption and travel information and behavioral traces, then the harm will be very direct."

For example, if your home is also fitted with a facial recognition or fingerprint lock, then once the information above is combined in larger-scale data collection and analysis, "professionals" can quite easily reach every corner of your home.

More interesting still, even though plenty of technology companies claim to protect user privacy with all sorts of "proprietary technical secrets", we still have no intuitive, clear picture of what these measures actually are, let alone any way to judge how effective they are.

And judged by their actual behavior, their attitude towards data often contradicts the "vision" they preach.

It seems the privacy and security contest between facial recognition and the public will go on for a while yet. What is certain is that as the technology keeps advancing, the security of facial recognition will continue to improve, and the public's privacy concerns should gradually ease.
