Commercial Espionage: Using "Deepfake" Technology to Infiltrate Your Company

In July 2022, the U.S. Federal Bureau of Investigation's Internet Crime Complaint Center (IC3) issued a public service announcement warning that scammers are increasingly using "deepfake" technology to impersonate job seekers in remote job interviews, defrauding companies of payroll and stealing their trade secrets.

The COVID-19 pandemic has driven the popularity of remote work: by one 2022 estimate, roughly 35% of workers could work remotely full-time. The rise of remote work has brought many benefits to employers and employees, but it has also created opportunities for fraudsters.

Commercial Espionage: The Full Process of Impersonating a Job Seeker in an Interview

From posting fake job listings, to faking interviews with "deepfake" technology, to finally getting hired under a stolen identity, scammers have built a detailed, step-by-step fraud process.


First, scammers post fake job listings on recruitment websites as bait, luring real job seekers into submitting their resumes. This step is how they collect real job seekers' information.

Next, the scammers pose as job seekers themselves and submit these stolen resumes to real companies' job openings. Their goal is to pass the companies' initial screening and land an interview.

Once the resume passes screening and an interview is scheduled, the scammers use "deepfake" technology to superimpose the real job seeker's face onto their own, so that the virtual interviewee's movements and lip motion match the speaker's audio. Even a cough, a sneeze, or other subtle audible actions can be aligned with the visual feed, producing a convincingly real-looking interviewee who can pass the company's remote interview.

Through such a well-designed fraud process, scammers successfully enter the company as job seekers.

Commercial Espionage: The Harm to Enterprises

The FBI did not specify the scammers' ultimate goal in the announcement. However, the agency noted that once these fake job seekers pass the interview and are hired, they are granted access to sensitive data such as "company personally identifiable information, financial data, company IT databases and/or proprietary information".


Stealing confidential company information.

Once employed, scammers may gain access to critical confidential business information such as new products, proprietary technologies, account credentials, and databases, and they can sell or leak this information at any time, causing huge financial losses. In the financial industry, for example, data is usually heavily protected and hard to reach through traditional scams, yet a security or IT position can access it directly.

Increase the company's recruitment costs.

When a company discovers it has hired a fake job seeker, it must not only dismiss the imposter immediately but also restart the recruitment process to find a suitable replacement. This wastes the salary already paid and adds the time cost of hiring again. The re-recruitment process itself can be expensive, and the longer it takes to find a new candidate, the greater the loss to the company.

Conduct internal sabotage on the company.

For scammers, installing ransomware or shutting down systems from the outside is time-consuming and labor-intensive. But once they get inside the target company by impersonating a job seeker, they can exploit security vulnerabilities in the company's systems at any time to plant malware on the intranet and office systems.

Impersonating others for background checks.

The FBI noted that some victims reported their personal information being stolen and used in pre-employment background checks for other job seekers. This suggests that scammers use stolen personal information to impersonate victims when employers or background-check agencies run their investigations, supplying false or tampered information to deceive investigators and pass the check. Companies may then make poor hiring decisions based on false background-check results, and honest job seekers lose the chance to compete fairly.

Affect the company's brand reputation.

Fraudsters entering a company under a false identity can also seriously damage the company's reputation and the trust placed in it. Customers and partners may question the company's security management capabilities, which can affect business cooperation and market competitiveness.

How HR Can Identify Fake Job Seekers

The Dingxiang Defense Cloud Business Security Intelligence Center has released a special intelligence report, "Research on "DEEPFAKE" Threats and Security Strategies" (free download), which systematically covers what constitutes a "deepfake" threat, the harm deepfakes cause, the process of "AI face-changing" fraud, typical threat patterns, the industry chain behind them, current mainstream identification and detection strategies, how various countries regulate "deepfakes", and the responsibilities each party bears in "deepfake" fraud cases.

In response to the threat of impersonation in remote interviews, the Dingxiang Defense Cloud Business Security Intelligence Center recommends that companies adopt a combination of the following technologies and methods.

1. Identification of "deepfake" fraudulent videos

(1) During a video interview, ask the other party to press on their nose and face and watch how the face changes: a real nose deforms when pressed. You can also ask them to eat or drink and observe the facial changes, or ask for unusual actions or expressions, such as waving a hand in front of the face or making a difficult gesture. Waving a hand disturbs the face-swapping model's input, producing jitter, flicker, or other visible anomalies that expose the fake.

(2) "DEEPFAKE" can copy voices, but it may also contain unnatural tones, rhythms, or subtle distortions that will be particularly noticeable after careful listening. At the same time, voice analysis software can help identify voice abnormalities.

(3) A generative adversarial network (GAN) based on deep learning can train a neural network model called a "discriminator". With sufficient training, the discriminator learns to distinguish real images and videos from fake ones by picking up the differences between genuine footage and generated versions.
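For concreteness, here is a minimal PyTorch sketch of such a discriminator: a small convolutional network trained with binary labels (real = 1, generated = 0). The architecture, 64x64 input size, and training snippet are illustrative assumptions; practical deepfake detectors are larger and trained on curated face datasets.

```python
# A minimal GAN-style "discriminator" that separates real face images
# from generated ones. Sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1),  # 64x64 -> 32x32
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),       # 32x32 -> 16x16
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1),      # 16x16 -> 8x8
            nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 1),  # single logit: real vs. fake
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# One training step: real images labeled 1, generated images labeled 0.
model = Discriminator()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

real = torch.randn(8, 3, 64, 64)  # stand-in for a batch of real faces
fake = torch.randn(8, 3, 64, 64)  # stand-in for generated faces
optimizer.zero_grad()
logits = torch.cat([model(real), model(fake)])
labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```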

2. Prevention of "deepfake" exploitation

(1) By comparing device information, geographic location, and behavioral patterns, abnormal operations can be detected and blocked. Dingxiang device fingerprinting distinguishes legitimate users from potential fraud by recording and comparing device fingerprints. The technology uniquely identifies each device; detects virtual machines, proxy servers, emulators, and other maliciously controlled devices; analyzes whether a device shows abnormal behavior such as logins from multiple accounts, frequent IP address changes, or frequent device-attribute changes; and helps track and identify fraudulent activity.
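The snippet below is a simplified sketch of the kind of per-device rules this paragraph describes: track accounts, IP addresses, and attribute changes per fingerprint, and raise flags when they churn. The field names and thresholds are assumptions for illustration; a commercial device-fingerprint product relies on far richer, proprietary signals.

```python
# Illustrative per-device risk rules keyed on a device fingerprint.
# Thresholds and field names are assumptions, not a real product's logic.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class DeviceHistory:
    accounts: set = field(default_factory=set)
    ips: set = field(default_factory=set)
    attribute_changes: int = 0

histories: dict[str, DeviceHistory] = defaultdict(DeviceHistory)

def record_login(fingerprint: str, account: str, ip: str,
                 attributes_changed: bool) -> list[str]:
    """Update a device's history and return any triggered risk flags."""
    h = histories[fingerprint]
    h.accounts.add(account)
    h.ips.add(ip)
    if attributes_changed:
        h.attribute_changes += 1

    flags = []
    if len(h.accounts) > 3:       # many accounts log in from one device
        flags.append("multi-account device")
    if len(h.ips) > 5:            # IP address churns frequently
        flags.append("frequent IP changes")
    if h.attribute_changes > 2:   # device attributes keep mutating
        flags.append("unstable device attributes")
    return flags
```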

(2) Require re-verification for logins from new locations, device changes, phone number changes, dormant-account activation, and similar events. Continuous identity verification during a session is also crucial: keep checking that the user's identity remains consistent throughout use. Dingxiang's frictionless verification can quickly and accurately distinguish whether the operator is a human or a machine, accurately identify fraudulent behavior, and monitor and intercept abnormal behavior in real time.
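A hedged sketch of the underlying logic: the trigger events listed above always force re-verification, and identity is also re-checked on a timer within a session. The event names and the 15-minute interval are assumptions, not the product's actual policy.

```python
# Risk-based "step-up" verification: certain events always trigger a
# re-check, and identity is re-verified periodically during a session.
# Event names and the interval are illustrative assumptions.
import time

STEP_UP_EVENTS = {
    "login_new_location",
    "device_change",
    "phone_number_change",
    "dormant_account_activation",
}
SESSION_RECHECK_SECONDS = 15 * 60  # re-verify every 15 minutes

def needs_verification(event: str, last_verified_at: float) -> bool:
    if event in STEP_UP_EVENTS:
        return True
    # Continuous verification: periodic re-checks within the session.
    return time.time() - last_verified_at > SESSION_RECHECK_SECONDS
```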

(3) Deploy a face anti-fraud system that combines AI with manual review to block "deepfake" videos and images. Dingxiang's full-link panoramic face security threat perception solution monitors face recognition scenarios and key operations in real time for risks such as camera hijacking, device counterfeiting, and screen sharing, then performs face environment monitoring, liveness detection, image authentication, and intelligent verification. If, after verifying these multiple dimensions, forged video or abnormal face information is found, the abnormal or fraudulent operation can be automatically interrupted.
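The outline below sketches that layered flow as ordered checks that fail fast, interrupting the operation at the first risky signal. Every check here is a hypothetical stub standing in for a real detection model or SDK call.

```python
# Illustrative layered face-verification flow. All signals are
# hypothetical stubs; a real system backs each with its own detector.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    camera_is_virtual: bool       # camera hijacking indicator
    screen_is_shared: bool        # screen-sharing indicator
    liveness_passed: bool         # liveness detection result
    image_looks_synthetic: bool   # image-authentication result

def verify_face_session(s: SessionSignals) -> tuple[bool, str]:
    """Run the checks in order and fail fast on the first risky signal."""
    checks = [
        ("camera hijacking", not s.camera_is_virtual),
        ("screen sharing", not s.screen_is_shared),
        ("liveness detection", s.liveness_passed),
        ("image authentication", not s.image_looks_synthetic),
    ]
    for name, passed in checks:
        if not passed:
            # Automatically interrupt the abnormal operation.
            return False, f"interrupted: failed {name}"
    return True, "verified"

# Example: a session where a virtual camera and a synthetic image appear.
print(verify_face_session(SessionSignals(True, False, True, True)))
```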

On-site interviews are an effective way to prevent impersonation fraud. Arrange face-to-face interviews whenever possible to communicate with job seekers directly and observe their words, behavior, and actual circumstances. When on-site interviews are not possible, ask in-depth, specific questions during the remote interview; such questions are hard for scammers to answer, or take noticeably long to think through, helping companies gauge a candidate's true abilities and background.

2024-03-27
Copyright © 2024 AISECURIUS, Inc. All rights reserved