Deepfake Fraud Case in Hong Kong: Analysis of Risks and Detection Methods

According to Radio Television Hong Kong (RTHK), an employee of a multinational company's Hong Kong branch was scammed into transferring over HK$200 million (over US$25 million) after a video conference call.

The scam began in January 2024 when the victim received an email from someone claiming to be the company's CFO. The victim initially dismissed the email as a phishing attempt.

Later, the fraudster impersonating the CFO organized a video conference, inviting the victim and several other "colleagues" working in other locations to participate.

In the video conference, the victim found that the CFO and the other colleagues looked, sounded, and behaved like the real people he knew, which dispelled his earlier suspicions. He came to believe that the earlier email and everyone in the video conference were genuine.

Following the meeting, the victim, under the instruction of the CFO impersonator, made 15 transactions within a week totaling HK$200 million (over US$25 million). During this time, the fraudster kept in touch with the victim via WhatsApp, email, and one-on-one video calls, and told the victim that he had contacted two other colleagues in the branch using the same method.

After the fact, the victim reported the entire transaction process to the company's management and realized that he had been scammed.

The police found that this was a carefully planned scam. The fraudster searched and downloaded photos and videos of the company's CFO and several other people from social media and video platforms, and then used deepfake technology to synthesize different voices and integrate them into the fake video clips. Except for the victim himself, all the other participants in the video conference were fake. The defrauded funds were sent to five local bank accounts and then quickly dispersed.

The Hong Kong police also said that between July and September 2023, they discovered more than 20 cases of AI-based deepfakes being used for loan applications and bank account registration.

Hong Kong media reported that this is the first deepfake fraud case in the city involving such a large amount of money.

New Fraud: Challenges Brought by Deepfakes

The Rise of Deepfakes: With the spread of artificial intelligence technology, AI tools are also being abused, bringing new threats. Criminals, and even ordinary people, can use AI tools to generate deepfake images, videos, and voices for dissemination and fraud. A recent KPMG report showed that the number of deepfake videos available online has increased by 900% year-on-year.

High-Profile Victims: Elon Musk, two BBC presenters, YouTube personality MrBeast, and pop star Taylor Swift have all had their identities faked in scam videos.

Impact on the Banking Industry: The banking industry is the main target of this type of identity fraud: 92% of companies in the industry see synthetic fraud as a real threat, and 49% have recently encountered such scams. Fraudsters not only forge information, voices, videos, and images, but also combine real and fake identity information to create entirely new synthetic identities for opening bank accounts or making fraudulent purchases.

Cross-Industry Threat: Synthetic fraud is not limited to the financial sector but spans all industries: 46% of organizations globally have experienced synthetic identity fraud in the past year.

Overconfidence in Detection: Even so, 52% of respondents believe they can detect deepfake videos. This reflects consumer overconfidence; in fact, deepfake technology has advanced to the point where forgeries can be undetectable to the naked eye. Without special training, it is very difficult for ordinary people to identify AI-generated fake identities, and as the quality of deepfakes improves, so does the difficulty of detection.

Effective Detection: How to Detect Deepfakes

The Battle Against Deepfakes: The rise of deepfake technology has created new opportunities for fraud and posed significant challenges for detection and identification. The difficulty of distinguishing deepfakes from real content with the naked eye and the limitations of traditional detection tools have made deepfake fraud a major concern in the cybersecurity landscape.

Multi-dimensional Strategies to Decipher Deepfakes: With technological advancements, various methods for detecting deepfakes have emerged, providing powerful tools for preventing fraud. Enterprises and individuals can adopt the following strategies for identification:

Visual Recognition: Capturing Subtle Inconsistencies: While deepfake technology is sophisticated, it can still produce subtle inconsistencies in facial features. For example, micro-expressions, eye movements, and the interaction between hair and facial features may exhibit abnormalities. Additionally, lighting and shadows are often difficult for deepfakes to reproduce, which may result in inconsistent light sources, missing shadows, or misaligned reflections.

Auditory Recognition: Distinguishing Voice Anomalies: Deepfakes can imitate voices, but may not perfectly replicate tone, rhythm, and other subtle details. Careful listening may reveal unnatural voice characteristics. Additionally, voice-analysis software can assist in identifying anomalies such as changes in pitch, timbre, and speaking speed.
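To make the pitch analysis above concrete, here is a minimal sketch of fundamental-frequency estimation by autocorrelation in pure Python. The function name, thresholds, and synthetic tone are illustrative; a real voice-analysis tool works on framed, windowed audio and tracks the pitch contour over time, flagging segments whose pitch behaves unnaturally.

```python
import math

def estimate_pitch(samples, sample_rate, fmin=80, fmax=400):
    """Estimate the fundamental frequency (Hz) of a mono sample buffer
    via autocorrelation over the plausible human-voice range.
    Returns None if no clear periodicity is found."""
    n = len(samples)
    energy = sum(s * s for s in samples)
    if energy == 0:
        return None
    lag_min = int(sample_rate / fmax)   # shortest period to consider
    lag_max = int(sample_rate / fmin)   # longest period to consider
    best_lag, best_corr = None, 0.0
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        # normalized correlation of the signal with itself shifted by `lag`
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag)) / energy
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    if best_lag is None or best_corr < 0.5:  # no convincing periodicity
        return None
    return sample_rate / best_lag

# Synthetic 200 Hz tone sampled at 8 kHz stands in for a voiced frame.
rate = 8000
tone = [math.sin(2 * math.pi * 200 * t / rate) for t in range(2000)]
pitch = estimate_pitch(tone, rate)
```

Comparing such estimates frame by frame against a speaker's known range is one simple way software can surface the pitch and speaking-speed anomalies mentioned above.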

Document Recognition: Analyzing Document Details: For document-based forgeries, automated document-verification systems can analyze inconsistencies in fonts, layout, and other details to determine authenticity.

AI-based Automatic Recognition: Intelligent Countermeasures against Deepfakes: With the continuous evolution of fraud techniques, machine learning has become a critical tool in the fight against deepfakes. By leveraging large data models, AI can rapidly analyze massive amounts of video and audio data to identify anomalies that may be imperceptible to the human eye. Furthermore, machine learning models can learn the characteristic patterns of deepfake generation algorithms, enabling precise identification of forged content. Critically, machine learning models can be continuously trained and refined, ensuring real-time iterative evolution of detection capabilities.
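A hedged sketch of the machine-learning idea: train a tiny logistic-regression classifier on hand-crafted per-clip features. The feature names and toy data below are invented for illustration; production detectors learn from large labeled corpora and far richer representations, but the train-then-score loop is the same.

```python
import math

def train_logreg(X, y, lr=0.5, epochs=200):
    """Train a minimal logistic-regression classifier with
    stochastic gradient descent on the log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1 / (1 + math.exp(-z))       # predicted P(deepfake)
            g = p - yi                       # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Return the model's probability that clip features x are fake."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 / (1 + math.exp(-z))

# Hypothetical features per clip: [blink-rate anomaly, lip-sync error],
# label 1 = deepfake, 0 = genuine. Toy, hand-made data.
X = [[0.9, 0.8], [0.8, 0.7], [0.7, 0.9], [0.1, 0.2], [0.2, 0.1], [0.15, 0.05]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
```

Because such a model is just weights, it can be retrained as new forgery techniques appear, which is the "continuous iteration" property the paragraph above emphasizes.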

Security: How to Ensure Enterprise and Personal Security

In an era where deepfakes are increasingly used for financial fraud, protecting against them is more important than ever.

  1. Increase biometric and liveness verification. Adopt liveness-verification methods that use advanced biometrics, such as facial recognition combined with motion analysis and infrared scanning, which can detect deepfake videos to a certain extent.
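One way to read the motion-analysis part of item 1 in code: a challenge-response liveness check that asks the user for a random head movement and verifies the observed face track actually moved that way, defeating replayed or pre-rendered video. The coordinates and thresholds below are invented for illustration; real systems add depth, texture, and infrared cues.

```python
import random

# Unit direction the face center should move for each challenge.
CHALLENGES = {"left": (-1, 0), "right": (1, 0), "up": (0, -1), "down": (0, 1)}

def issue_challenge(rng=random):
    """Pick a random head-movement challenge; randomness is what
    makes a pre-recorded deepfake clip unlikely to comply."""
    return rng.choice(list(CHALLENGES))

def passes_challenge(track, challenge, min_shift=20):
    """track: list of (x, y) face-center positions per frame.
    The net motion, projected onto the requested direction,
    must exceed `min_shift` pixels."""
    dx = track[-1][0] - track[0][0]
    dy = track[-1][1] - track[0][1]
    ex, ey = CHALLENGES[challenge]
    shift = dx * ex + dy * ey  # projection onto the requested direction
    return shift >= min_shift

# A cooperating live user asked to look "right": centers drift rightward.
track = [(100, 200), (110, 201), (130, 199), (150, 200)]
```

A static photo or a looped clip produces a track with no net motion in the challenged direction, so it fails the check.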

  2. Increase digital signature verification. Digital signatures and blockchain ledgers are unique and can be used to track the source of an action and flag it for review.
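Item 2 can be sketched with standard cryptographic primitives. The example below uses an HMAC from Python's standard library rather than a full public-key signature or blockchain ledger, purely to show the verify-before-trust pattern; the key and action schema are hypothetical.

```python
import hashlib
import hmac
import json

# Hypothetical shared key; in practice this lives in a KMS/HSM,
# or is replaced by per-user public-key signatures.
SECRET = b"shared-secret-key"

def sign_action(action: dict) -> str:
    """Attach an HMAC-SHA256 tag so the action's origin and
    integrity can be verified before it is executed."""
    payload = json.dumps(action, sort_keys=True).encode()  # canonical form
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_action(action: dict, signature: str) -> bool:
    """Constant-time comparison prevents timing side channels."""
    return hmac.compare_digest(sign_action(action), signature)

transfer = {"from": "ACC-001", "to": "ACC-999", "amount": 1000}
sig = sign_action(transfer)
```

Had the Hong Kong transfers required a cryptographic approval signed by the real CFO's key, a convincing video alone could not have authorized them: any tampered or unsigned instruction fails verification and is flagged for review.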

  3. Increase device and operation verification. Compare device information, geographic location, and behavioral patterns against devices that have previously been authenticated or identified, in order to identify and prevent fraudulent operations.

Dingxiang Device Fingerprinting refers to the technology of uniquely identifying and recognizing each device by collecting and analyzing the hardware, software, and behavior data of the device. It can identify virtual machines, proxy servers, emulators, and other maliciously controlled devices, and analyze whether the device has multiple accounts logged in, whether it frequently changes IP addresses, and whether it frequently changes device attributes. It can also help track and identify the activities of fraudsters. By recording and comparing Device Fingerprinting, legitimate users and potential fraudulent behavior can be distinguished.
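A minimal sketch of the fingerprinting idea described above, assuming a handful of illustrative device attributes. Real device fingerprinting (including Dingxiang's) collects far more signals and uses fuzzy matching rather than an exact hash; this only shows the shape of the technique.

```python
import hashlib
import json

def fingerprint(device: dict) -> str:
    """Hash a stable subset of device attributes into one identifier.
    Volatile attributes (battery level, free memory) are excluded so
    the fingerprint survives between sessions."""
    stable = {k: device[k] for k in ("os", "gpu", "screen", "timezone")}
    blob = json.dumps(stable, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]

def looks_suspicious(history):
    """history: list of (fingerprint, ip) pairs per login.
    Flag one device hopping across many IPs, or an account whose
    device attributes keep changing (a fraud-farm pattern)."""
    fps = {fp for fp, _ in history}
    ips = {ip for _, ip in history}
    return (len(ips) >= 3 and len(fps) == 1) or len(fps) >= 3

device = {"os": "Android 14", "gpu": "Adreno 740", "screen": "1440x3200",
          "timezone": "UTC+8", "battery": 0.77}  # battery: too volatile to hash
fp = fingerprint(device)
```

Because only stable attributes are hashed, the same physical device keeps the same fingerprint across sessions, which is what makes cross-login comparison possible.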

  4. Increase account verification frequency. Logins from a new location, device changes, mobile phone number changes, and dormant accounts suddenly becoming active should all trigger additional verification. In addition, continuous identity verification during a session is critical: maintain persistent checks to ensure the user's identity remains consistent throughout use.
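The step-up triggers listed above can be expressed as simple rules. The event and profile fields below are illustrative, not a real product schema; a production system would feed these triggers into a risk engine rather than act on them directly.

```python
def needs_stepup(event, profile):
    """Return the list of triggers that should force extra verification
    for a login or in-session event. Empty list = routine activity."""
    triggers = []
    if event["country"] != profile["usual_country"]:
        triggers.append("new_location")
    if event["device_fp"] not in profile["known_devices"]:
        triggers.append("new_device")
    if event["phone"] != profile["phone"]:
        triggers.append("phone_changed")
    if event["days_since_last_login"] > 180:   # dormant account woke up
        triggers.append("dormant_account")
    return triggers

profile = {"usual_country": "HK", "known_devices": {"fp-a1"},
           "phone": "+852-1111"}
routine = {"country": "HK", "device_fp": "fp-a1", "phone": "+852-1111",
           "days_since_last_login": 2}
risky = {"country": "GB", "device_fp": "fp-zz", "phone": "+852-1111",
         "days_since_last_login": 400}
```

Running the same check repeatedly during a session, not just at login, is what the "continuous identity verification" recommendation amounts to.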

Dingxiang atbCAPTCHA can quickly and accurately distinguish whether an operator is a human or a machine, accurately identify fraudulent behavior, and monitor and intercept abnormal behavior in real time. When users log in or register, they can complete identity verification quickly, without cumbersome operations or verification-code recognition. This not only makes the user experience more convenient and fluent, but also greatly reduces the risks caused by human operational errors. Because Dingxiang atbCAPTCHA is based on AIGC technology, it can prevent brute-force cracking, automated attacks, and phishing, effectively preventing unauthorized access, account theft, and malicious operations.

  5. Increase anti-fraud system verification. Anti-fraud systems that combine manual review with AI technology can help enterprises improve their anti-fraud capabilities.

For example, Dingxiang Dinsight processes its daily risk-control strategies with an average latency under 100 milliseconds. It aggregates anti-fraud and risk-control data from multiple parties, supports graphical configuration, and can be quickly applied to complex strategies and models. Building on accumulated indicators, strategies, and models plus deep learning technology, it can monitor its own performance and iterate its risk controls automatically. It integrates expert strategies through a combination of system, data access, indicator library, strategy system, and expert implementation; supports parallel monitoring and the replacement or upgrade of existing risk-control processes; and can also build dedicated risk-control platforms for new businesses.
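The flavor of such a strategy-driven risk engine can be sketched as weighted rules feeding a decision threshold. The rules, weights, and thresholds below are entirely made up for illustration and say nothing about Dinsight's actual internals; real platforms combine hundreds of indicators with learned models.

```python
# Each rule: (name, predicate on the event, risk weight).
# All weights and thresholds here are invented for illustration.
RULES = [
    ("many_transfers", lambda e: e["transfers_24h"] > 10,   40),
    ("new_payee",      lambda e: e["payee_age_days"] < 1,   30),
    ("large_amount",   lambda e: e["amount"] > 1_000_000,   35),
    ("odd_hours",      lambda e: e["hour"] < 6,             10),
]

def score(event):
    """Evaluate all rules against one transaction event and map the
    total risk weight to a decision: allow / challenge / block."""
    hits = [(name, w) for name, pred, w in RULES if pred(event)]
    total = sum(w for _, w in hits)
    if total >= 70:
        decision = "block"       # hold the transfer for manual review
    elif total >= 40:
        decision = "challenge"   # require step-up verification
    else:
        decision = "allow"
    return decision, total, [name for name, _ in hits]

# A burst of large transfers to a brand-new payee in the small hours --
# roughly the pattern of the Hong Kong case -- trips several rules at once.
event = {"transfers_24h": 15, "payee_age_days": 0,
         "amount": 2_000_000, "hour": 3}
```

Because the rules are data, they can be re-weighted or replaced without redeploying code, which is the "configurable strategies, parallel monitoring, and upgrade" property described above.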

  6. Reduce the sharing of sensitive information on social media. Reduce or avoid sharing sensitive information such as account details, family members, travel, and job positions on social media, to prevent fraudsters from harvesting it, deep-forging images and voices from it, and then impersonating identities.

  7. Strengthen public safety training and education. It is crucial to continuously educate the public about deepfake technology and its associated risks. Running simulated phishing and deepfake attacks, and encouraging people to stay vigilant and report suspicious situations quickly, can significantly improve an organization's ability to detect and respond to deepfake threats.

  8. Keep up with the latest developments in AI and deepfake technology. Technology is constantly evolving and new frauds keep emerging, so follow the latest developments in AI and deepfake technology and adjust your security measures accordingly.

Copyright © 2024 AISECURIUS, Inc. All rights reserved