The era of "seeing is not necessarily believing" has arrived: beware of online fraud using AI face-swapping

With the rapid development of artificial intelligence (AI), the technology has empowered virtually every industry while also bringing unprecedented security challenges. AI can help us process information of all kinds and improve efficiency, but security risks such as cyber attacks and data leaks are becoming increasingly prominent, and problems such as excessive data collection, algorithmic discrimination, and the abuse of deepfakes pose serious risks to society and the economy. Recently, criminals have begun using AI to fuse other people's faces and voices into highly realistic synthetic video in order to carry out new forms of online fraud. When AI becomes a tool for fraud, these high-tech scams are hard to guard against and can inflict heavy losses on victims in a short time.

"AI face-swapping" is being abused; technology must not become an accomplice to crime

AI opens Pandora's box

In the past two years, generative AI products represented by ChatGPT, Wenxin Yiyan, and Sora have become wildly popular. Built on complex algorithms, models, and rules that learn from large-scale datasets to create new, original content, they have completely overturned our assumptions about what technology can do.

As AI technology continues to advance, its strengths in information processing and interaction have become increasingly prominent, the realism and efficiency of generated content have grown, and the line between real and fake has blurred. Without effective supervision and constraints, however, the risk of abuse, and even outright fraud, is very real. Criminal groups have already begun using generative AI chatbots to conduct online fraud, cloaking illegal activity in a "technological coat". AI has quietly opened Pandora's box.

Not long ago in Mianyang, Sichuan, a Mr. Wang suddenly received a video call from a leader at his company's supervising organization, who said a friend wanted to borrow more than 400,000 yuan through Mr. Wang's company account. Trusting the leader, whose identity seemed to be confirmed by the video call, Mr. Wang immediately transferred more than 400,000 yuan to the designated account, only to discover later that he had been deceived. In fact, the scammer had used AI face-swapping and voice-cloning technology to impersonate the leader and defraud him.

In February this year, Hong Kong police disclosed an AI "multi-person face-swapping" fraud case involving as much as HK$200 million. An employee at the Hong Kong branch of a multinational company was invited to a multi-person video conference supposedly initiated by the chief financial officer at headquarters. Following the instructions given, the employee transferred HK$200 million to five local bank accounts in several installments, and only discovered the fraud after checking with headquarters. The police investigation found that the victim was the only "real person" in the so-called video conference; every other participant was a fraudster whose face had been swapped by AI.

With "AI face-swapping" technology, anyone can become a victim

Not technically difficult

The spread of AI technology has blurred the boundary between the virtual and the real, making true and false information harder to tell apart. Case after case shows that AI fraud is no longer a distant threat. Criminals typically swap faces to impersonate relatives, friends, or colleagues and then ask victims for money; because the video format seems credible, victims let down their guard. In this environment, not only are ordinary people easily deceived, even professional institutions are not immune.

In fact, the technical logic behind AI face-swapping is not complicated. The process mainly involves a few key steps: face detection and tracking, facial feature extraction, face transformation and fusion, background rendering, and image-audio synthesis.

The more detailed the source material, the more accurate the resulting digital likeness. The core of the process has three parts: first, deep learning algorithms precisely locate the face in the video and extract key facial features such as the eyes, nose, and mouth; second, these features are matched against, replaced with, and fused into the target face image; finally, the background is rendered and synthesized audio is added, producing a highly realistic fake "face-swapped" video.
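The stages described above can be sketched as a simple data pipeline. The following is an illustrative outline only, with each stage reduced to a stub so the flow of data is easy to follow; real face-swapping systems implement these stages with deep-learning models (such as autoencoders or GANs), and all function and field names here are hypothetical.

```python
# Illustrative sketch of the face-swap pipeline stages named in the text:
# detection -> feature extraction -> fusion -> rendering -> audio synthesis.
# Every stage is a placeholder stub, not a real implementation.
from dataclasses import dataclass

@dataclass
class Frame:
    """A single video frame (placeholder for real pixel data)."""
    pixels: str

def detect_face(frame: Frame) -> dict:
    # Stage 1: locate and track the face region within the frame.
    return {"region": "face_bbox", "source": frame.pixels}

def extract_features(face: dict) -> dict:
    # Stage 2: extract key facial landmarks (eyes, nose, mouth).
    return {**face, "landmarks": ["eyes", "nose", "mouth"]}

def fuse_target_face(features: dict, target: str) -> dict:
    # Stage 3: match and replace the landmarks with the target identity.
    return {**features, "identity": target}

def render(features: dict) -> Frame:
    # Stage 4: re-render the swapped face onto the original background.
    return Frame(pixels=f"{features['identity']}@{features['region']}")

def synthesize(frames: list, audio: str) -> dict:
    # Stage 5: combine the rendered frames with cloned audio into a clip.
    return {"video": frames, "audio": audio}

# Run the pipeline over a toy two-frame clip.
clip = [Frame("frame0"), Frame("frame1")]
swapped = [
    render(fuse_target_face(extract_features(detect_face(f)), "target_face"))
    for f in clip
]
fake = synthesize(swapped, audio="cloned_voice")
print(len(fake["video"]), fake["audio"])
```

The point of the sketch is that each stage is a routine transformation; what makes the output convincing is only the quality of the models plugged into each stub, which is why commodity tools can now produce passable results from a single photo.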

A quick online search turns up many so-called "face-swapping software companies" that claim to offer 24-hour service under the banner of entertainment. Reportedly, producing a talking-head video now costs as little as tens of yuan: supply a single frontal photo, and a video can be generated in which the eyes, mouth, and head all move; supply text as well, and the lip movements can be matched to it, with the whole job finished overnight. Some vendors claim that generating a one-minute video from a photo costs only about 30 yuan, and if the video is short and quality requirements are low, the price can drop to just a few yuan per video.

The double-edged nature of artificial intelligence has prompted deep reflection on cybersecurity across society

What should we do?

This situation highlights the double-edged nature of artificial intelligence and has prompted deep reflection across society on cybersecurity and AI regulation. In the face of new technological advances, we must establish a regulatory system adapted to them.

In an era where "seeing is not necessarily believing", working out how to enjoy the convenience of artificial intelligence while effectively guarding against its security risks is the key to embracing future technologies, and it is also the direction and driving force for the continued improvement of relevant laws.

Hard-to-detect AI fraud traps will become a focus of law enforcement. Accurately identifying, effectively preventing, and comprehensively combating related illegal activity requires systematic cooperation and innovation across legislation, law enforcement, and the judiciary. To this end, the government should formulate and improve relevant laws and regulations, clarifying the permitted scope and limits of AI technology and the legal liability for its misuse. Last year, China issued the "Interim Measures for the Management of Generative Artificial Intelligence Services", which clarified obligations to protect personal information. This is a good start.

Beyond legal measures, public education and awareness are also crucial. Faced with new AI-enabled scams, the general public needs to stay vigilant, strengthen precautions, and act as the first line of defense for their own personal information.

Strengthen awareness of personal information protection and prevent leaks. Do not casually provide biometric information such as your face or fingerprints to others; do not disclose your ID card, bank card, or verification codes; do not store photos of your ID card or bank card on your phone for convenience; do not click unfamiliar links or scan unknown QR codes; manage access to your social media circle carefully, and do not share your phone screen with strangers; for apps you rarely use, cancel your personal account before uninstalling.

In the AI era, text, sound, images, and video can all be deeply synthesized. In high-risk scenarios such as remittances and fund transfers, never send money on the strength of a single communication channel without verification. Even in this age of advanced technology, the least technological advice still applies: verify and confirm again through an additional channel, such as calling back the other party's phone number.
