AI face-swapping used to be a problem only for celebrities, but it now directly threatens the financial safety of ordinary people. There have been many news reports of criminals using fake faces to hijack accounts and transfer money. In August this year, two suspects were arrested for using mobile phone software to create dynamic facial-recognition videos and "disguise" themselves to log into other people's online accounts and steal funds. Another report noted that because production is simple, a single video costs only 2 to 10 yuan, and "customers" often buy hundreds or thousands at a time, leaving huge room for profit. | Source: Beijing Youth Daily

According to news reports and court rulings, the illegal profits in such cases range from several thousand to several hundred thousand yuan, and many similar cases have occurred across the country. Some criminals even abused banks' facial-recognition authorization and then used Trojan viruses to intercept SMS verification codes and steal deposits, with the total amount involved exceeding 2 million yuan.

If you can't fool people, fool the camera! AI face-swapping, also known as Deepfake (a portmanteau of "deep learning" and "fake"), uses AI to replace one person's face in another person's photo or video. There are quite a few "criminal applications" of AI face-swapping, including but not limited to:

· Attacking face verification: forging other people's identities to cash out directly from Alipay, WeChat wallet, or even loan apps;
· Creating fake pornographic images/videos: to defraud, blackmail, or damage someone's reputation;
· Real-time face-swapped calls: hijacking accounts and defrauding the account owner's relatives and friends;
· Creating false information: deceiving politicians, judges, investors, etc.;
· Publishing fake news: inciting the public, causing chaos and panic, attacking business rivals, creating stock-market turmoil, etc.
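The "replace one face in another person's video" idea can be illustrated with a deliberately naive sketch. Real Deepfake systems regenerate the face with encoder-decoder neural networks trained on footage of both people; the pure-NumPy toy below (all names and numbers are invented) only shows the basic step of pasting a face region and blending its edges:

```python
import numpy as np

def naive_face_swap(target, source_face, top, left, feather=5):
    """Paste source_face into target at (top, left), fading the edges
    with a linear alpha mask so the seam is less visible. Real deepfakes
    synthesize the face with neural networks; this toy only illustrates
    the 'replace one face region' idea."""
    h, w = source_face.shape[:2]
    # Per-row / per-column weights: 0 at the border, 1 past `feather` pixels.
    ys = np.clip(np.minimum(np.arange(h), h - 1 - np.arange(h)) / feather, 0, 1)
    xs = np.clip(np.minimum(np.arange(w), w - 1 - np.arange(w)) / feather, 0, 1)
    alpha = np.minimum.outer(ys, xs)[..., None]      # shape (h, w, 1)
    region = target[top:top + h, left:left + w].astype(float)
    out = target.copy()
    out[top:top + h, left:left + w] = (
        alpha * source_face + (1 - alpha) * region
    ).astype(target.dtype)
    return out

frame = np.zeros((100, 100, 3), dtype=np.uint8)   # dark "video frame"
face = np.full((40, 40, 3), 200, dtype=np.uint8)  # bright "face" patch
swapped = naive_face_swap(frame, face, 30, 30)
print(swapped[50, 50])  # center of the pasted region is fully replaced
```

The seam blending is exactly what cheap fakes get wrong, which is why flickering edges, jawlines, and foreheads are among the telltale signs discussed later in this article.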
Among these, attacks on facial recognition matter most to ordinary people, because facial recognition is used to verify a person's identity. Once that layer is breached, the security of your assets and personal information is threatened, and you are effectively left "naked" on the Internet.

Beyond Deepfake technology, there are other ways to interfere with facial-recognition authentication, such as crafting adversarial "AI glasses". Source: Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition. In this classic paper in the field of image recognition, researchers used mathematical optimization to design a specific pair of "glasses" (middle picture) that cause an AI face recognizer to identify one person (left picture) as another (right picture).

Another way to fool detection equipment is a 3D mask. Image source: Business Insider & Kneron. This is a product of the company Kneron, which announced at the end of 2019 that, for WeChat and Alipay face-scan payments and facial-recognition gates at train stations, a person wearing such a mask could pass as someone else.

What's special about AI face-swapping?

Ordinary people need not worry too much about the techniques above, because each has serious limitations. The adversarial "AI glasses" mainly target static image recognition and cannot defeat dynamic face recognition. 3D masks must be custom-made, are expensive, and are complex to produce, so they are not cost-effective for attacking ordinary people. But AI has changed everything, driving the cost of counterfeiting down to a bargain price. The shorter the video, the lower the resolution, the weaker the real-time requirement (does the face need to be swapped live, or can the video be produced before sending?), the lower the target's vigilance, and the more source material available (multi-angle photos and videos of the person being imitated), the lower the cost of the fraud.
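As a rough illustration of how such adversarial "glasses" work, consider a toy linear classifier. The actual paper optimizes a printable glasses texture against a deep face-recognition network; this NumPy sketch (all dimensions and names are invented for illustration) only shows the core idea of a gradient-sign perturbation confined to a fixed "glasses" region of the image:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "face classifier": score = w @ x; score > 0 means "person A".
d = 64
w = rng.normal(size=d)   # classifier weights
x = rng.normal(size=d)   # flattened "image" of person B

# The glasses attack confines the perturbation to a fixed region of the
# image. Here the first 16 pixels stand in for the glasses area.
mask = np.zeros(d)
mask[:16] = 1.0
eps = 0.5  # maximum change allowed per pixel

# For a linear model the gradient of the score w.r.t. x is just w, so a
# sign-gradient step inside the mask pushes the score toward "person A"
# while leaving every pixel outside the "glasses" untouched.
x_adv = x + eps * mask * np.sign(w)

print(w @ x, w @ x_adv)  # the adversarial score is strictly higher
```

Against a real deep network the gradient must be computed by backpropagation and the perturbation constrained to be printable, but the principle is the same: a small, localized change that the human eye dismisses can move the classifier's decision a long way.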
For example, in one demo the same person is speaking, but AI replaces their face with someone else's. An off-the-shelf AI model running on an NVIDIA GTX 1080 Ti can achieve this: the movement is relatively smooth, but the "plastic" look is obvious, and you can tell at a glance that it is not a real video. Image source: YouTube @Matthias Niessner. An AI model trained for a long time (hours or even days) on an NVIDIA RTX 3090 Ti can produce a very realistic result; perhaps only after careful, repeated viewing will you notice: wait, did the wrinkles on this person's forehead just flicker? Image source: YouTube @Druuzil Tech & Games.

For attacking face recognition, deepfakes are much cheaper, because breaking a face-recognition system does not require the most sophisticated, labor-intensive Deepfake work: a few seconds of video at average clarity is enough. The more polished videos produced at higher cost can be used for other fraudulent purposes. And this kind of fraud is a "profitable" business: once the equipment and algorithms are in place, fake videos can be mass-produced to suit different situations and needs.

What can ordinary people do?

Now that AI face-swapping has entered the lives of ordinary people, how can we deal with the illegal activities built on it?

How is the machine "tricked" by a swapped face? In scenarios such as face verification, a fake video only needs to pass the automated checks of the face-recognition system, that is, to "fool" the machine. The required video is short (within a few seconds), the dynamics are simple (a few fixed actions; some systems even accept still pictures), there is no real-time requirement, it is technically straightforward, and no real person watches the verification. Against such attacks, what ordinary people can do is protect their personal information (including facial information) and use a combination of multiple identity-authentication methods.
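The "combine multiple authentication methods" advice amounts to a simple k-of-n policy: no single factor, face included, should be enough on its own. A minimal sketch, assuming a hypothetical `approve` helper and made-up factor names:

```python
# Hypothetical policy sketch (the function and factor names are invented):
# treat each verification method as an independent check and require at
# least two to pass before approving a sensitive action such as a transfer.
def approve(checks, required=2):
    """checks maps a factor name ('face', 'password', 'sms_code', ...)
    to whether that factor verified successfully."""
    return sum(1 for ok in checks.values() if ok) >= required

# A deepfake may defeat the camera, but the attacker still lacks the
# other factors, so the attempt is rejected.
print(approve({"face": True, "sms_code": False, "password": False}))  # False
print(approve({"face": True, "sms_code": True, "password": False}))   # True
```

The point of the design is that the factors fail independently: a stolen video helps with the camera but not with a password, and a Trojan that intercepts SMS codes does not by itself produce a passing face scan.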
Passwords, fingerprints, mobile phone numbers: enable every verification method available. It is more troublesome, but each method is an extra layer of insurance, and the chance of all of them being compromised at once is much smaller. If your account is unfortunately compromised, call the police immediately and cooperate with them to track down the criminals and recover the stolen funds. At the same time, report it to the platform, so it can gather more information, patch the loopholes, and upgrade its systems.

What about face-swapping aimed at tricking a real person? AI face-swapping that deceives real people, such as blackmail and fraud against individuals, or fake news aimed at the general public, is more covert and takes more varied forms. Fundamentally, with the advent of AI face-swapping, ordinary people should be more cautious with visual information and stop assuming that "seeing is believing."

If someone blackmails you with fake nude photos or indecent videos, do not respond; call the police directly. In this type of crime, the target of the face swap is the victim themselves, so as long as you know you never took such photos or videos, you can be sure the material is fake. The biggest challenge is not to panic and not to hand money to the criminals out of fear.

If someone uses a fake video to defraud you, for example, a scammer hijacks your friend's account and uses real-time AI face-swapping to appear as your friend on a video call asking to borrow money, then verification through multiple channels is crucial: other social platforms, email, text messages, and phone calls. Don't transfer money without thinking just because the other party seems anxious. And when watching news and videos online, check whether the source is reliable.
Be especially wary of speeches by figures like Biden or Musk: there is so much footage of such celebrities available as training data that high-fidelity imitations are easy to produce.

Tips for identifying Deepfakes

Some tips can help you judge whether a video is an AI face-swapped fake (if a 10-level beauty filter is on, these methods become unreliable; after all, a heavy beauty filter is itself a kind of AI face alteration):

· Watch the face shape. Pay attention to the size, shape, and jawline of the face, and especially the way it moves, to see whether it matches the real person.

· Watch the wrinkles. Everyone's skin condition and wrinkle patterns differ, and it is hard for an AI model to generate wrinkles identical to the real person's from a limited (and not necessarily recent) set of photos. Skin that is too smooth or too wrinkled, a face whose skin condition is inconsistent (for example, a heavily wrinkled forehead above very smooth cheeks), or an apparent age that fluctuates within one video (young one moment, old the next) can all be signs of a fake.

· Watch the background. Check whether it is a background this person usually uses, whether the boundary between background and person looks natural, and whether the background itself warps.

· Watch light and shadow. AI-generated face swaps do not obey the physical rules of light in the real world, so facial shadows, reflections on glasses, and similar cues may betray a fake.

· Watch the position and size of facial features. In forged videos, facial features may grow, shrink, or drift in position.

· Watch unique facial features. If the person has moles, tattoos, or scars on their face, do they appear in the right places?
· Watch the hair. Do the volume and hairline look realistic? Do the edges of the hair look natural?

· Watch the movements. Is the frequency and manner of blinking normal? Do the eyebrows and mouth move the way this person usually moves them? Does the face deform when the head turns (especially a 90-degree turn to the side)? Does an object occluding the face remain cleanly visible? These are all key points for spotting AI-forged videos.

Some easy giveaways | Image source: metaphysic & VFXChris Ume

Beyond this theoretical knowledge, you can also try practicing on this website (…).

As long as illegal profits remain high, criminals will not hesitate to invest more in producing ever more polished and "real" fake videos to deceive ordinary people, large companies, large institutions, and even governments. So beyond what individuals can do, governments and companies should keep improving policies, regulations, and technical defenses, so that technological progress truly benefits people rather than becoming a tool to harm them.

References

[1] http://epaper.ynet.com/html/2022-04/14/content_396257.htm?div=-1
[2] https://www.theverge.com/2019/12/13/21020575/china-facial-recognition-terminals-fooled-3d-mask-kneron-research-fallibility
[3] Kneron's Edge AI & Facial Recognition Survey Pushes Forward Industry | Kneron – Full Stack Edge AI. (2020). Retrieved 25 October 2022, from https://www.kneron.com/news/blog/85/
[4] All it takes to fool facial recognition at airports and border crossings is a printed mask, researchers found. (2022). Retrieved 25 October 2022, from https://www.businessinsider.com/facial-recognition-fooled-with-mask-kneron-tests-2019-12
[5] To Uncover a Deepfake Video Call, Ask the Caller to Turn Sideways - Metaphysic.ai. (2022).
Retrieved 25 October 2022, from https://metaphysic.ai/to-uncover-a-deepfake-video-call-ask-the-caller-to-turn-sideways/
[6] Detecting deepfakes by looking closely reveals a way to protect against them. (2022). Retrieved 25 October 2022, from https://phys.org/news/2019-06-deepfakes-reveals.html
[7] Europol report finds deepfake technology could become staple tool for organized crime | Europol. (2022). Retrieved 25 October 2022, from https://www.europol.europa.eu/media-press/newsroom/news/europol-report-finds-deepfake-technology-could-become-staple-tool-for-organized-crime
[8] Project Overview ‹ Detect DeepFakes: How to counteract misinformation created by AI – MIT Media Lab. (2022). Retrieved 25 October 2022, from https://www.media.mit.edu/projects/detect-fakes/overview/
[9] Sharif, M., Bhagavatula, S., Bauer, L., & Reiter, M. K. (2016, October). Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (pp. 1528-1540).
[10] Thies, J., Zollhöfer, M., Theobalt, C., Stamminger, M., & Nießner, M. (2018). HeadOn: Real-time reenactment of human portrait videos. ACM Transactions on Graphics (TOG), 37(4), 1-13.
[11] Li, Y., Yang, X., Wu, B., & Lyu, S. (2019). Hiding faces in plain sight: Disrupting AI face synthesis with adversarial perturbations. arXiv preprint arXiv:1906.09288.
[12] Westerlund, M. (2019). The emergence of deepfake technology: A review. Technology Innovation Management Review, 9(11).
[13] Mirsky, Y., & Lee, W. (2021). The creation and detection of deepfakes: A survey. ACM Computing Surveys (CSUR), 54(1), 1-41.
[14] Li, C., Wang, L., Ji, S., Zhang, X., Xi, Z., Guo, S., & Wang, T. (2022). Seeing is living? Rethinking the security of facial liveness verification in the deepfake era. CoRR abs/2202.10673.
[15] Nguyen, T. T., Nguyen, Q. V. H., Nguyen, D. T., Nguyen, D. T., Huynh-The, T., Nahavandi, S., ... & Nguyen, C. M. (2022).
Deep learning for deepfakes creation and detection: A survey. Computer Vision and Image Understanding, 223, 103525.

Author: Luo Wan
Editor: Emeria, You Shiyou
Source: Guokr (ID: Guokr42)