New Threat: Someone is Using Your Face for Online Livestreaming

Earlier this year, 20-year-old Ukrainian internet personality Olga Loiek discovered a group of AI-generated clones of herself online. "I was really scared. I saw my face being used to promote products in Chinese," she told the media. These presenters, whose faces and voices were generated with AI face-swapping technology, sell products on social media platforms while posing as foreigners living in China who promote specialties from their home countries. Close to a hundred accounts use Olga's likeness and voice across mainstream Chinese social media platforms, with follower counts ranging from a few thousand to hundreds of thousands; one fake account amassed 300,000 followers.

Olga worries that her likeness is being used for fraudulent promotions, turning her face into a money-making tool for profiteers.

Stolen Faces: Turned into Profitable Livestream Hosts for Others

Livestream e-commerce typically involves hosts showcasing a range of products to their followers, who can purchase them through integrated buy buttons. Though it may sound mundane, these shows attract a large audience: last year, over 1 trillion yuan worth of goods were sold through live commerce, and more than 1 million people now work as livestream hosts in China.

The integration of AI clones and face-swapped virtual influencers into Chinese e-commerce and livestreaming marks a significant evolution in brand engagement strategies. These AI digital humans can be tailored to a client's needs and can livestream 24 hours a day, mimicking the effect of human hosts even during off-peak hours. They also let brands precisely control their messaging, ensuring consistent brand representation and operational efficiency. Compared with traditional influencer partnerships, they offer potentially lower costs, less unpredictability, and very low operating expenses. Unless an interaction goes conspicuously wrong, a digital persona may never be exposed as artificial.

Creating a virtual host is remarkably simple: a single photo is enough for cloning, with turnaround times as short as 60 seconds.

However, this innovative sales approach carries complex implications. Worse, it has opened the door to misuse of the technology: faces are cloned without the individuals' consent for commercial gain or outright fraud. Olga Loiek's experience is a notable example. Attackers need only download the photos, videos, and audio a victim has shared on social media; 30 seconds to 1 minute of sample material is enough to create highly realistic voices and images and recreate a presenter for live video.

Dingxiang Defense Cloud Business Security Intelligence Center's Intelligence Digest on "DEEPFAKE" Threat Research and Security Strategies points out that another major driver of AI fraud is the accessibility of AI tools, which makes creating fake videos and images ever easier. In particular, with the rise of Cybercrime-as-a-Service, ordinary people can easily purchase AI fraud services or technologies. Attackers exploit a range of channels, including social media, email, remote meetings, online recruitment, and news feeds, to mount AI fraud attacks against businesses and individuals.

How Can Users Prevent Face Theft?

AI-driven security threats are attracting growing attention and rapidly becoming a new attack vector, posing unprecedented risks to businesses and individual users alike.

The Intelligence Digest on "DEEPFAKE" Threat Research and Security Strategies suggests that preventing and combating AI fraud requires, on one hand, effectively identifying and detecting AI-generated content, and on the other, blocking the exploitation and spread of AI fraud. This demands not only technical countermeasures but also a sustained contest for user trust and greater public security awareness. Enterprises therefore need to strengthen digital identity verification, review account access permissions, and minimize data collection. Training employees to recognize AI threats is equally crucial.

1. Establish an AI-driven security tool system.

Implement anti-fraud systems that combine manual review with AI technology to enhance automation and efficiency in detecting and responding to AI-based cyber attacks.
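As a minimal sketch of how such a hybrid pipeline might be structured, the following Python example routes content by an automated detector score and sends uncertain cases to human reviewers. The `triage` function, its thresholds, and the scoring model are illustrative assumptions, not a description of any specific product.

```python
# Hypothetical hybrid review pipeline: an automated model scores each piece of
# content, and uncertain cases are queued for manual review. Thresholds and
# field names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str      # "pass", "block", or "manual_review"
    score: float    # model confidence that the content is AI-generated

def triage(content_score: float,
           block_threshold: float = 0.90,
           review_threshold: float = 0.60) -> Verdict:
    """Route content based on an AI-generated-content score in [0, 1]."""
    if content_score >= block_threshold:
        return Verdict("block", content_score)          # high confidence: block automatically
    if content_score >= review_threshold:
        return Verdict("manual_review", content_score)  # uncertain: route to a human reviewer
    return Verdict("pass", content_score)               # low risk: allow

# Example: a livestream frame scored 0.72 by a deepfake detector
print(triage(0.72))  # Verdict(label='manual_review', score=0.72)
```

The key design point is the middle band: fully automated blocking handles clear-cut cases, while ambiguous scores fall back to human judgment, which keeps false positives low without sacrificing automation.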

2. Strengthen identity verification and protection.

This includes enabling multi-factor authentication, encrypting data at rest and in transit, and deploying firewalls. Step up verification for risky account activities such as remote logins, device changes, phone number changes, and sudden spikes in account activity, to confirm that the user's identity remains consistent throughout a session. Compare device information, geographic locations, and behavioral patterns to detect and block anomalous activity.
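One way to implement this is risk-based step-up authentication, sketched below in Python. The signal names, weights, and threshold are assumptions chosen for illustration; a production system would tune them from real fraud data.

```python
# Illustrative risk-based step-up verification: risky signals raise a session's
# score, and high-risk sessions must pass extra verification (e.g., MFA).
# Signal names and weights are hypothetical, not a specific product's API.
RISK_WEIGHTS = {
    "new_device": 0.4,        # login from a device fingerprint not seen before
    "remote_login": 0.3,      # location far from the account's usual activity
    "phone_changed": 0.3,     # recent change of the bound phone number
    "activity_spike": 0.2,    # sudden burst of account actions
}

def assess_session(signals: set[str], step_up_threshold: float = 0.5) -> str:
    """Return the verification action for a login session."""
    risk = sum(RISK_WEIGHTS.get(s, 0.0) for s in signals)
    if risk >= step_up_threshold:
        return "require_mfa"   # ask for a second factor before proceeding
    return "allow"

# Example: a login from a new device in an unusual location
print(assess_session({"new_device", "remote_login"}))  # require_mfa
```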

3. Enhance account authorization control.

Restrict access to sensitive systems and accounts based on the principle of least privilege, ensuring each user can reach only the resources necessary for their role. This minimizes the potential impact of account hijacking and prevents unauthorized access to your systems and data.
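A least-privilege check can be as simple as an explicit role-to-permission map with deny-by-default semantics, as in this short sketch. The roles and permission strings are hypothetical examples, not drawn from the source.

```python
# Minimal least-privilege sketch: each role maps to the smallest permission set
# it needs, and every access is validated against that map. Unknown roles and
# unlisted permissions are denied by default. All names here are hypothetical.
ROLE_PERMISSIONS = {
    "livestream_host": {"stream:publish", "catalog:read"},
    "moderator":       {"stream:view", "comments:moderate"},
    "admin":           {"stream:publish", "catalog:write", "users:manage"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant access only if the role explicitly includes the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("livestream_host", "stream:publish")
assert not is_allowed("moderator", "users:manage")  # denied: not in role's set
```

Because the default is denial, a hijacked moderator account cannot be escalated into user management without an explicit, auditable change to the permission map.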

4. Stay informed about the latest technology and threats.

Stay updated on the latest developments in AI technology to adjust security measures accordingly. Continuous research, development, and updates of AI models are crucial to maintaining a leading position in the increasingly complex security landscape.

5. Reduce the sharing of sensitive information on social media.

Minimize or avoid sharing sensitive information on social media, such as account details, family members, travel habits, and job positions, to prevent fraudsters from harvesting it to forge images and voices for identity fraud.

Traditional security tools and measures are no longer effective against AI-based fraud threats. Comprehensive risk prevention and control is required before, during, and after fraud risks are identified. Dingxiang's newly upgraded anti-fraud technology and security products help enterprises build a multi-channel, all-scenario, multi-stage security system to counter the new threats brought by AI: App reinforcement for App security, atbCAPTCHA to block malicious AI-driven registration and login, device fingerprinting to identify AI-forged devices, Dinsight to uncover potential fraud threats and stop complex AI attacks, and a comprehensive facial security threat perception solution to intercept "DEEPFAKE" attacks along the entire chain.

Through a multi-channel, all-scenario, multi-stage security system built on threat perception, security protection, data accumulation, model construction, and strategy sharing, these security services can serve different business scenarios, apply industry-specific strategies, and iterate on each company's own business characteristics, enabling precise, rapid, and effective platform-level risk control.

2024-04-24