A scammer in China used artificial intelligence to pose as a businessman’s trusted friend and convince him to hand over millions of yuan, authorities have said.
The victim, surnamed Guo, received a video call last month from a person who looked and sounded like a close friend.
But the caller was actually a con artist “using smart AI technology to change their face” and voice, according to an article published Monday by a media portal associated with the government in the southern city of Fuzhou.
The scammer was “masquerading as (Guo’s) good friend and perpetrating fraud”, the article said.
Guo was persuaded to transfer 4.3 million yuan ($609,000) after the fraudster claimed that another friend needed the money, which had to come from a company bank account, to pay a guarantee deposit on a public tender.
The con artist asked for Guo’s personal bank account number and then claimed an equivalent sum had been wired to that account, sending him a screenshot of a fraudulent payment record.
Without checking that he had received the money, Guo sent two payments from his company account totalling the amount requested.
“At the time, I verified the face and voice of the person video-calling me, so I let down my guard,” the article quoted Guo as saying.
He realised his mistake only after messaging the friend whose identity had been stolen, who knew nothing of the transaction.
Guo alerted police, who notified a bank in another city not to proceed with the transfers, and he managed to recover 3.4 million yuan, the article said.
It added that efforts to claw back the remaining funds were ongoing, but it did not identify the perpetrators of the scheme.
The potential pitfalls of groundbreaking AI technology have received heightened attention since US-based company OpenAI in November launched ChatGPT, a chatbot that mimics human speech.
China has announced ambitious plans to become a global AI leader by 2030, and a slew of tech firms including Alibaba, JD.com, NetEase and TikTok parent ByteDance have rushed to develop similar products.
ChatGPT is unavailable in China, but the American software is acquiring a base of Chinese users who use virtual private networks to gain access to it for writing essays and cramming for exams.
But it is also being used for more nefarious purposes.
This month police in the northwestern province of Gansu said “coercive measures” had been taken against a man who used ChatGPT to create a fake news article about a deadly bus crash that was spread widely on social media.
A law regulating deepfakes, which came into effect in January, bans the use of the technology to produce, publish or transmit false news.
And a draft law proposed last month by Beijing’s internet regulator would require all new AI products to undergo a “security assessment” before being released to the public.