Middle School Students Arrested on Charges of Creating Fake Pornographic Photos of Classmates Using AI Technology

On December 6, 2023, two students at Pinecrest Cove Academy in Miami, Florida, were suspended. A few days later, on December 22, they were arrested by the Miami-Dade Police Department on charges of creating nude photos of their classmates with deepfake technology. According to information released by the police, the two boys, aged 13 and 14, were accused of using an unnamed "artificial intelligence application" to generate nude images of other students aged 12 to 13. Under a law passed in Florida in 2022, they were charged with a third-degree felony for disseminating deepfake pornographic images without the victims' consent. Media reports suggest that this may be the first criminal case in the United States involving AI-generated nude images.

The Dingxiang Defense Cloud Business Security Intelligence Center's report "Research on Deepfake Threats and Security Strategies" specifically noted that imposing legal consequences on individuals who create or distribute deepfakes can curb the spread of harmful content and hold those responsible accountable. Criminalizing deepfake fraud has a deterrent effect and discourages the abuse of this technology for fraud or other malicious purposes. It is an effective way to mitigate the harm of deepfake technology, and those who intentionally create or facilitate the spread of harmful deepfakes must face criminal penalties.

EU's AI Act: Strengthening Regulation on "DEEPFAKE"

The governance of artificial intelligence (AI) is crucial to humanity's future and a challenge shared by countries worldwide. Many countries and organizations have introduced initiatives or regulations calling for stronger security oversight of AI, and strong legal measures are needed to address this urgent challenge.

On March 13 local time, the European Parliament voted to approve the Artificial Intelligence Act (AIA), which aims to strictly regulate the use of AI. It is expected to enter into force in early 2025 and apply from 2026, with some provisions taking effect earlier. The draft of the AI Act was first proposed by the European Commission in April 2021. The Act regulates high-impact general-purpose AI models and high-risk AI systems, which must comply with specific transparency obligations and EU copyright law. For high-risk AI systems, the AI Act sets out a series of safety provisions and requirements, mainly covering the following four aspects.

Data governance:

Ensure the legality, fairness, and security of data collection, processing, and use. Data subjects enjoy rights such as data access, correction, deletion, and restriction of processing.

Risk assessment:

Developers must conduct risk assessments to identify and mitigate potential risks posed by AI systems. The risk assessment must cover aspects such as the accuracy, reliability, security, fairness, and privacy protection of the AI system.

Transparency:

Provide information about the functionality, performance, and risks of AI systems to enable users to make informed choices. The information must be presented to users in a clear and understandable manner.

Human-machine interaction:

AI systems must be designed for human-machine interaction so that users can understand and control their behavior. Users must be able to terminate the operation of an AI system at any time.

The AI Act also requires AI providers to establish effective supervision and accountability mechanisms to ensure that AI systems operate safely and reliably, and developers and users must be held responsible for any harm caused by AI systems. Penalties for violations can reach up to 6% of the offender's global annual turnover or 30 million euros, whichever is higher.
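As a rough illustration of how the "whichever is higher" ceiling works, the sketch below computes the maximum fine for a hypothetical company using the 6% and 30 million euro figures cited above; the function name and example turnover are assumptions for illustration only.

```python
def max_fine_eur(global_annual_turnover_eur: float,
                 pct_cap: float = 0.06,
                 flat_cap_eur: float = 30_000_000) -> float:
    """Illustrative 'whichever is higher' penalty ceiling."""
    return max(pct_cap * global_annual_turnover_eur, flat_cap_eur)

# Example: a company with EUR 2 billion in global annual turnover.
# 6% of 2 billion is EUR 120 million, which exceeds the EUR 30 million floor.
print(max_fine_eur(2_000_000_000))  # 120000000.0
```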

Countries: Strengthening the Security Regulation of "DEEPFAKE"

"DEEPFAKE" involves a variety of technologies and algorithms, which can work together to generate highly realistic images or videos. By piecing together the false content of "DEEPFAKE" with the elements of real information, it can be used to forge identities, spread misinformation, create false digital content, and engage in various frauds. This is a threat in the digital age, where attackers are unseen and elusive. They not only create information but also manipulate the reality structure perceived by each participant. Therefore, using laws to regulate the abuse of "DEEPFAKE" is a unified action of all countries. As early as November 2022, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, and the Ministry of Public Security jointly issued the "Regulations on the Management of Deep Synthesis of Internet Information Services," which formulated a series of regulations and requirements for deep synthesis fraud. For deep-synthesized information such as intelligent dialogue, synthetic voices, facial generation, facial replacement, facial manipulation, virtual characters, and virtual scenes, it is required to be significantly labeled to avoid public confusion or misunderstanding. At the same time, the content of deep-synthesized information should be true and accurate, and deep-synthesized information should not be used to create fake news, distort the truth, mislead the public's perception, or damage the legitimate rights and interests of others. On March 1, 2024, the National Cybersecurity Standardization Technical Committee released the "Basic Requirements for the Security of Generated AI Services," which clarified the basic requirements for the security of generated AI services, including language material security, model security, and security measures. In terms of language material content security, service providers need to focus on three aspects: language material content filtering, intellectual property rights, and personal information. In terms of personal information, it emphasizes that before using language materials containing sensitive personal information, the corresponding individual's separate consent or compliance with other circumstances stipulated by laws and administrative regulations should be obtained. On February 8, 2024, the US Federal Communications Commission announced that AI-generated voices in robocalls were illegal, and legislators in various states had already introduced legislation to combat AI- generating false and incorrect information.

Technology: Preventing "DEEPFAKE" fraud

To combat "DEEPFAKE" fraud, in addition to legal regulations, technical identification and defense are also required: on the one hand, it is necessary to identify and detect forged videos, pictures, and information (effectively identifying out false and forged content); on the other hand, it is necessary to identify and detect the channels and platforms used for "DEEPFAKE" fraud (improve the security of digital accounts in multiple ways).

Enhance facial information security

The "EU AI Act" requires providers of AI tools to design these tools in such a way as to allow the detection of synthetic/false content, add digital watermarks, and promptly restrict the use of digital IDs for verifying personal identities. The working principle of AI watermarking is to embed a unique signal into the output of the artificial intelligence model, which can be an image or text, aiming to identify the content as AI-generated content to help others identify it effectively. In addition, digital signatures and blockchain ledgers have uniqueness, which can be used to track the source of behavior and mark it for review. Their immutability means that they can be used to detect any tampering with the original file by using hash functions to ensure the authenticity of digital content. Key functions such as timestamps and traceability can be used to determine the source and time of content creation. Of course, these data are operated in a secure environment called Trusted Executive Environment (TEE).

Identify forged and false information

Based on deep learning, generative adversarial networks (GANs) can be used to train a neural network model called a "discriminator." Through training, the discriminator learns to distinguish real images and videos from forged ones and to spot differences between an original and a fabricated version. In addition, large models can rapidly analyze large volumes of audio data to identify deepfake audio content. For example, the Spanish media giant Prisa Media has launched a tool called VerificAudio, which distinguishes real from forged Spanish-language voices to detect deepfake audio. The tool is currently available to newsrooms in Mexico, Chile, Colombia, and Spain, and Prisa is working to roll it out to journalists worldwide.
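As a rough sketch of the "discriminator" idea, the PyTorch snippet below defines a tiny binary classifier over face crops and runs one training step on dummy tensors. The architecture, input size, and labels are placeholder assumptions, not a production deepfake detector.

```python
import torch
import torch.nn as nn

class DeepfakeDiscriminator(nn.Module):
    """Tiny CNN that scores an image as real (1) or forged (0) -- illustrative only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)  # raw logit; apply sigmoid for a probability

model = DeepfakeDiscriminator()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One training step on dummy data standing in for labeled real/fake face crops.
images = torch.randn(8, 3, 128, 128)          # batch of face crops
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = real, 0 = deepfake
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```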

Ensure the security of face recognition applications

Combining AI technology with manual review, face anti-fraud systems can block the use of deepfake videos and images. Dingxiang's full-link, panoramic face security threat perception solution can effectively detect and identify deepfake videos and images. It monitors risks in face recognition scenarios and key operations in real time (such as camera hijacking, device forgery, and screen sharing), then verifies identity through multi-dimensional signals such as face environment monitoring, liveness detection, image authentication, and intelligent verification, and automatically blocks abnormal or fraudulent operations once forged videos or abnormal face information are detected.

By comparing device information, geographic location, and behavioral operations, abnormal operations stemming from deepfake fraud can be detected and prevented. Dingxiang's device fingerprint technology records and compares device fingerprints to identify virtual machines, proxy servers, emulators, and other maliciously manipulated devices, and analyzes whether behavior deviates from user habits, such as multi-account logins, frequent IP address changes, and frequent changes to device attributes, helping to track and identify fraudsters' activities.
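The device- and behavior-level checks described above can be thought of as a rule-based risk score over session signals. The sketch below is a simplified, hypothetical illustration; the signal names, weights, and threshold are invented for this example and are not Dingxiang's actual product logic.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Hypothetical signals collected during a face-verification session."""
    is_emulator: bool = False
    camera_hijack_detected: bool = False
    screen_sharing_active: bool = False
    accounts_on_device: int = 1
    ip_changes_last_hour: int = 0
    device_attribute_changes_last_day: int = 0

def risk_score(s: SessionSignals) -> int:
    """Toy additive score: higher means more likely deepfake/fraud automation."""
    score = 0
    if s.is_emulator:
        score += 40
    if s.camera_hijack_detected:
        score += 50
    if s.screen_sharing_active:
        score += 20
    if s.accounts_on_device > 3:
        score += 15
    if s.ip_changes_last_hour > 5:
        score += 15
    if s.device_attribute_changes_last_day > 2:
        score += 10
    return score

# Hypothetical session: an emulator with frequent IP changes scores 55.
session = SessionSignals(is_emulator=True, ip_changes_last_hour=8)
print("block session" if risk_score(session) >= 60 else "allow, keep monitoring")
```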

Ensure the security of content distribution channels

Social media platforms usually position themselves as mere conduits for content, but the Australian Competition and Consumer Commission (ACCC) is suing Facebook, arguing that it should be held liable as an accessory to fraud because it failed to remove misleading advertisements promptly after being notified of the problem. At a minimum, platforms should be responsible for promptly deleting deepfake content used for fraud. Many platforms claim to do so; on Facebook, for example, AI-generated content may display an icon clearly indicating that it was generated by artificial intelligence. Social media companies have the greatest leverage to restrict the spread of false content, since they can detect it and remove it from their platforms. However, the policies of major platforms, including Facebook, YouTube, and TikTok, stipulate that they will only delete fraudulent content if it causes "serious harm" to people.

2024-03-20