AI Face-Changing Fraud: Guarding Your Wallet and Holding the Legal Bottom Line

May 30, 2023

Recently, news reports of telecommunications fraud committed with "AI face-changing" have repeatedly made headlines. The victims come from all walks of life, the reach is wide, and the social impact is serious. In most cases, criminals use AI to synthesize a target's face and voice, impersonate that specific person, and win the victim's trust through video calls before defrauding them of large sums. This wave of "face-changing" and voice-cloning fraud has triggered widespread public discussion and anxiety.


High-Tech Telecom Fraud? Here Is How to Protect Your Wallet


Currently, telecommunications network fraud is the type of crime with the highest incidence, the fastest growth, the widest reach, and the strongest public reaction. In May 2022, the Ministry of Public Security announced the five most common types of telecommunications network fraud: rebate scams, fake investment and wealth-management schemes, fake online loans, impersonation of customer service, and impersonation of public security organs. Against these techniques, the usual advice for identifying and preventing fraud is to resist temptation, verify identities, and transfer money cautiously, holding firm no matter how the scam changes.


With the vigorous development of China's anti-fraud work, most people now have a basic understanding of telecommunications fraud and some awareness of how to prevent it. However, rapid technological progress has lowered the threshold for applying synthesis technology, and deep-synthesis products and services have multiplied, giving criminals new openings to exploit. Even vigilant people still place a high level of trust in dynamic content such as video and audio. Faced with high-tech scams built on "AI face-changing" and "AI voice-changing", seeing is no longer believing, and telling true from false becomes genuinely difficult.


Case 1


In 2020, criminals misappropriated photos and video clips of well-known male actors, edited and dubbed them into short videos designed to attract middle-aged and elderly women, and then impersonated the celebrities through private messages, AI face-changing livestreams, AI voice-cloning interactions, and other means to win the victims' trust and affection before defrauding them.


Case 2


In 2021, Hefei police arrested a gang suspected of using AI technology to forge dynamic face videos and found more than ten gigabytes of citizens' facial data on a suspect's computer, including photos of the front and back of ID cards, photos of people holding their ID cards, selfies, and more. These photo sets are known as "materials", and those who sell them as "material merchants". The "materials" had been sold online many times without their owners' knowledge. The suspects used AI technology to turn these "materials" into forged dynamic face videos. Because production is simple, a single video sells for only 2 to 10 yuan, and "customers" often buy hundreds or thousands at a time, leaving enormous room for profit.


Case 3


In February 2022, Mr. Chen from Zhejiang reported to the police that he had been defrauded of 50,000 yuan by a "friend". Police verification found that criminals had used AI technology to synthesize videos that Mr. Chen's friend had posted on social media, creating the illusion that Mr. Chen was video-chatting with his "friend", winning his trust and thereby committing fraud.


Case 4


In April 2023, Mr. Guo, the legal representative of a company in Fujian, received a WeChat video call from a "friend" who said he was bidding on a project in another city, needed a 4.3-million-yuan deposit, and wanted to route the transfer through Mr. Guo's company account. Trusting the friend, and believing the video call itself had verified his identity, Mr. Guo transferred the money to the supposed friend's bank card without first confirming that the promised funds had arrived. Only when he later phoned his friend did he realize he had been scammed: the fraudster had used AI face-changing and voice-cloning technology to impersonate the friend. Fortunately, Mr. Guo reported the case promptly, the police quickly activated the payment-suspension mechanism, and with the bank's assistance more than 3.3 million yuan was intercepted; recovery of the remainder is still underway.


These shocking cases not only cause distress and economic losses to the victims but also, to some extent, trigger public panic. Yet judging from both the technical feasibility of AI-assisted fraud and the state of China's anti-fraud infrastructure, "AI face-changing" fraud is not yet widespread, and it fundamentally retains the "remote" and "contactless" characteristics of telecommunications network fraud.


The most effective defense is simply not to believe: no matter how the other party "changes" face or voice, do not listen, do not believe, do not transfer. On that basis, our lawyers, drawing on technical analysis from industry experts and a review of relevant cases, offer a few more ways to protect your wallet:


1. Protect Personal Information


Do not casually provide personal biometric information such as faces, fingerprints, voiceprints, or irises, or photos and videos of yourself holding identity documents, to other people or online platforms. Do not over-share materials that could be used for AI training, such as moving images and videos (in particular, do not engage in extended conversation when you receive a suspected fraud call, which could give the other party raw audio "material"), and do not casually fill out forms that disclose too much real personal information (to avoid being profiled for "precision hunting" by fraudsters).


2. Verify Identity Through Multiple Channels


Telecom fraud often uses unfamiliar numbers to send text messages or make calls, some of which can be identified and intercepted by mobile apps or telecom operators. But criminals may also forge caller ID or hijack social media accounts, so even familiar contact details need to be verified by phone, video, or other means. To spot "AI face-changing", check whether the textures in the video image look natural, whether the physiological features of the person on screen are normal, and whether continuous movements are smooth. You can also verify through face-to-face meetings, confirmation by a third party, or private verification questions agreed in advance.


3. Buy Time and Transfer Funds Cautiously


The ultimate purpose of fraud is to obtain property, so any money transaction calls for re-confirming the other party's identity. When transferring funds, it is best to set a delayed arrival time of at least 2 hours, leaving room for re-verification, and to use payment methods such as bank transfers that are easy to trace and intercept. For business transfers, verify the supporting process (such as the payment basis and approval documents) and strictly follow the company's financial rules; for private transfers, be alert if the other party asks you to pay a third party, and make a comprehensive judgment only after confirming the recipient's identity and personal information.


4. Report Suspicious Cases Promptly


When you receive a suspected fraud call or video, promptly preserve the evidence by recording the audio or screen. This makes it easier both to verify and confront the scam and to provide clues if you later report it. Suspicious situations can be reported through the anti-fraud channel at 12321.cn; if you have suffered financial loss, call 96110 as soon as possible to report to the police.

Technology Is Neither Good nor Evil, but Law Has a Bottom Line


Technology itself is neither good nor evil; what matters is how it is used. Used properly, AI technology enriches the online content ecosystem and brings convenience and diverse experiences to daily life. But once it is abused by malicious actors, even professionals may need technical tools to identify the forgery accurately, which makes prevention far harder for ordinary people.


Behind the frequent "AI face-changing" scams lies the abuse of technology, which violates the principle that technology should develop for good and challenges the bottom line of law and public order. Chinese law takes a very clear stance on such suspected infringements, violations, and crimes.


Whatever form telecommunications fraud takes, its essence is fraud carried out through various technical means. Whether one directly perpetrates telecommunications network fraud or provides technical support, financial settlement, or promotional assistance for fraud activities, one can be held criminally liable for suspected fraud.


Key Legal Provisions


1. Anti Telecom Network Fraud Law of the People's Republic of China


Article 2: "Telecommunications network fraud" as used in this Law refers to acts of defrauding public or private property by remote, non-contact means, using telecommunications and network technology, for the purpose of illegal possession.


Article 38: Whoever organizes, plans, implements, or participates in telecommunications network fraud activities, or provides assistance for such activities, shall, where a crime is constituted, be investigated for criminal responsibility in accordance with the law.


Where the conduct in the preceding paragraph does not constitute a crime, the public security organ shall impose detention of not less than ten days and not more than fifteen days, confiscate the illegal gains, and impose a fine of not less than one time and not more than ten times the illegal gains; where there are no illegal gains or the illegal gains are less than 10,000 yuan, a fine of not more than 100,000 yuan shall be imposed.


2. The Criminal Law of the People's Republic of China


Article 266: Whoever defrauds public or private property, where the amount is relatively large, shall be sentenced to fixed-term imprisonment of not more than three years, criminal detention, or public surveillance, and shall also, or shall only, be fined; where the amount is huge or there are other serious circumstances, to fixed-term imprisonment of not less than three years and not more than ten years, and a fine; where the amount is particularly huge or there are other particularly serious circumstances, to fixed-term imprisonment of not less than ten years or life imprisonment, and a fine or confiscation of property. Where this Law provides otherwise, those provisions shall prevail.


3. Opinions of the Supreme People's Court, the Supreme People's Procuratorate, and the Ministry of Public Security on Several Issues Concerning the Application of Law in Handling Criminal Cases of Telecommunications Network Fraud and Related Crimes


Under Article 1 of the Interpretation of the Supreme People's Court and the Supreme People's Procuratorate on Several Issues Concerning the Specific Application of Law in Handling Criminal Cases of Fraud, using telecommunications network technology to defraud public or private property worth 3,000 yuan or more, 30,000 yuan or more, or 500,000 yuan or more shall be recognized, respectively, as a "relatively large amount", a "huge amount", and a "particularly huge amount" as stipulated in Article 266 of the Criminal Law.


Where telecommunications network fraud is committed multiple times within two years without being dealt with, and the accumulated fraud amount constitutes a crime, the offender shall be convicted and punished in accordance with the law.
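The tiering and aggregation rules quoted above can be summarized in a short sketch. This is purely illustrative, not a legal tool: the function names are made up, the thresholds are taken from the judicial interpretation cited above, and each threshold is treated as inclusive (the Chinese "以上" conventionally includes the stated figure).

```python
def fraud_amount_tier(amount_yuan: float) -> str:
    """Classify a fraud amount into the Article 266 tiers quoted above.

    Thresholds (3,000 / 30,000 / 500,000 yuan) come from the judicial
    interpretation; treating them as inclusive is an assumption here.
    """
    if amount_yuan >= 500_000:
        return "particularly huge"
    if amount_yuan >= 30_000:
        return "huge"
    if amount_yuan >= 3_000:
        return "relatively large"
    return "below the criminal threshold"


def cumulative_tier(amounts_within_two_years: list[float]) -> str:
    # Per the Opinions, multiple uncharged frauds within two years are
    # aggregated before the tier is determined.
    return fraud_amount_tier(sum(amounts_within_two_years))
```

For example, the 4.3 million yuan in Case 4 would fall in the "particularly huge" tier, while three uncharged frauds of 1,500 yuan each within two years would aggregate to 4,500 yuan, a "relatively large" amount.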


To address the urgent problem of deep-synthesis technology being misused, China has in recent years introduced a series of laws and regulations aimed at source-level and comprehensive governance of AI fraud, for example:

1. Citizens' portrait rights and personal-information rights are explicitly protected by law. The Civil Code treats "using information technology to forge another person's image" as a typical form of infringement of portrait rights and prohibits it.


2. The "Regulations on the Management of Network Audio and Video Information Services" require that "non real audio and video information" should be prominently identified, and false news information should not be produced, published, or disseminated using new technologies and applications based on deep learning, virtual reality, etc.


3. The newly implemented Regulations on the Management of Deep Synthesis of Internet Information Services, China's first departmental rules dedicated to governing deep-synthesis services, clarify the requirements for managing deep-synthesis data and technology and strengthen the primary responsibilities of deep-synthesis service providers and technology supporters.


4. To promote the healthy development and standardized application of generative artificial intelligence, on April 11 of this year the Cyberspace Administration of China drafted the Management Measures for Generative Artificial Intelligence Services (Draft for Comment) and solicited public opinions. This is the first regulatory policy issued in response to the booming generative AI industry, setting out specific requirements for data compliance and the legality of generated content.


In Closing


Every technological advance brings major change to the field of legal norms. In the era of the internet and big data, deep-synthesis technology is spreading rapidly, with related videos, livestreams, software tutorials, and more appearing one after another; the technical threshold keeps falling and the usage scenarios keep expanding. Yet whether the purpose is personal entertainment, commercial profit, or research and development, the legal bottom line must be maintained.

For AI users, "AI face-changing", "AI outfit-changing", and "one-click head-swapping" can certainly offer a novel experience. On the one hand, users should protect their personal information: when "clicking to authorize" in relevant software, carefully review the terms and defend their rights in accordance with the law. On the other hand, they must not use other people's portraits or works arbitrarily or improperly; otherwise they may bear legal responsibility for infringing others' portrait rights, reputation rights, intellectual property rights, and other legitimate rights and interests. Acts such as malicious defamation, pornographic content, false rumors, or misuse of real-name authentication may further constitute crimes such as insult, defamation, fabricating and intentionally spreading false information, producing, copying, publishing, selling, or disseminating obscene materials for profit, disseminating obscene materials, and illegally intruding into computer information systems.


AI service providers, for their part, should fulfill their obligations in accordance with the law and implement their primary responsibility for information security: establishing and improving management systems for user registration, algorithm review, technology-ethics review, information-release review, data security, personal-information protection, anti-telecommunications-network-fraud measures, and emergency response, and giving technology supporters and users the prompts and assistance they need to meet their own information-security obligations.


Technology is a double-edged sword, and technological neutrality does not mean value neutrality: the application of AI must remain subject to legal regulation and ethical constraints. Facing the illegality and abuse that accompany technological development, every participant should comply with laws and regulations, respect social morality, public order, and good customs, and respect the legitimate rights and interests of others. Let us guard our wallets, hold the legal bottom line and the technical red line, and work together to create a safer, healthier, and more orderly online environment.