AI Scam Alert! It’s a Trap!

The Impact of Artificial Intelligence on Everyday Life

As artificial intelligence continues to advance, generative AI is reshaping daily life at unprecedented speed. This development, however, has also brought new scams and schemes that test the public's awareness of fraud risks.

9.9 Yuan to Use ChatGPT?

It’s Actually “Clone AI”

In February 2023, a public account named “ChatGPT Online” caught the attention of the market supervision department in Xuhui District, Shanghai.

This account, whose profile picture closely resembled the official logo of ChatGPT's actual developer, offered users a brief free trial before requiring paid membership registration. For 9.9 yuan, users could have 20 conversations; the more conversations they used, the more they were charged.

Upon investigation, law enforcement officers discovered that the operating company of this public account had no affiliation with the actual ChatGPT development company. The so-called “ChatGPT Online” was not the “ChatGPT” product itself.

Instances of such "counterfeit AI" are not uncommon. In 2023, just before the launch of Baidu's "Wenxin Yiyan" (ERNIE Bot), numerous social media accounts appeared bearing the name "Wenxin Yiyan." Baidu subsequently refuted them, stating that "Wenxin Yiyan" had not yet registered any official social media accounts.

The establishment of an open-source ecosystem for large AI models accelerates AI development but also makes generative AI more susceptible to misuse. According to Shen Fujun, Executive Council Member of the Chinese Society of Administrative Law and Professor at East China University of Political Science and Law, because legislation lags behind the technology, regulatory authorities need to pay closer attention to how new technological developments affect the market. They should strengthen pre-market supervision, make full use of existing legal resources for effective regulation, and prevent new problems from hiding in a "regulatory blind spot."

Beware of “AI Face-Swapping” Scams

Appearances Can Be Deceptive

In April 2023, Mr. Guo, the legal representative of a technology company in Fujian Province, received a WeChat video call from a "friend" who claimed to urgently need a 4.3-million-yuan deposit for a project bid and wanted to route the transaction through Mr. Guo's company account. Because the video call seemed to confirm his friend's identity, Mr. Guo transferred a total of 4.3 million yuan to the other party. Only when he later tried to contact his friend did he realize he had been deceived. Fortunately, with the police's help, he managed to intercept and recover 3.3684 million yuan.

According to Mr. Guo, he trusted the other party because the face and voice in the video call matched his friend's. What made the scam all the more chilling was that the criminals knew the details of the relationship between Mr. Guo and his friend and had successfully stolen his friend's WeChat account to carry it out.

Many industry experts warn that with the emergence of multimodal artificial intelligence, such as text-to-video large models like Sora, people may find that seeing is no longer believing. "Banks and other institutions have long used real-time video as one means of verifying identity. Its reliability will now face significant challenges," said Shen Fujun. "As artificial intelligence technology continues to advance, such illegal activities may evolve into even more forms."

From a consumer perspective, Shen Fujun recommends raising public awareness of personal information protection to prevent privacy leaks: "Whether online or on social apps, avoid exposing too much personal information. When a transaction involves a transfer, ask for identity information from multiple angles and verify repeatedly that the other party is genuine."

Stay Alert to “AI Rumor-Mongering”

Beware of "Traffic" Turning Toxic

In June 2023, a video titled "Witnesses Report a Major Fire and Explosions at a Zhejiang Industrial Park!" circulated on the internet and drew widespread attention from netizens, but relevant authorities later confirmed it was a rumor. Investigators found that the person responsible, seeking to gain followers and profit from the account, had illegally purchased AI video-generation software through unauthorized channels, used it to automatically create videos on trending topics, and uploaded them to various popular video platforms.

By the time the case was uncovered, the account had posted more than 20 false videos involving multiple provinces and cities, including Zhejiang, Hunan, Shanghai, and Sichuan, accumulating over 1.67 million views. The Shaoxing Shangyu Court in Zhejiang has since tried the case and pronounced sentences on both defendants in open court.

The widespread availability of this technology has fueled the boom in the self-media industry. Some internet users, however, exploit it to fabricate videos for traffic, spreading "AI rumors" online. This not only poses a serious challenge to cybersecurity but also severely disrupts social order.

In 2023, the Supreme People’s Court, the Supreme People’s Procuratorate, and the Ministry of Public Security jointly issued guidelines on “Punishing Online Violent Crimes in Accordance with the Law,” stipulating heavier penalties for “publishing illegal information using generative AI technologies such as ‘deep synthesis.'”

Source: Shanghai Yangpu
