blog
AI Security: Government AI Regulatory Measures Focus on Six Key Areas

The rapid development of global artificial intelligence technology has a profound impact on economic and social development and the progress of human civilization, and has brought huge opportunities to the world. At the same time, artificial intelligence technology also brings unpredictable risks and complex challenges.

2023110801.jpg

Due to the ambiguity in the technical logic and application process of artificial intelligence (AI), a number of risks may arise, including data, algorithmic risk, ethical risk, technology abuse risk, and cyber attack risk. These risks may not only pose a threat to individual privacy and corporate interests, but also have a negative impact on the fairness and stability of society as a whole.

First, algorithmic risk. Since AI is a "black box" based on big data, deep learning, and computer algorithms, its decision-making logic and basis are often difficult to explain, leading to uncertainty risks. In some applications where high fault tolerance is required, this uninterpretability may even lead to security risks.

Second, data risk. Mining the value of data is the key to improving the ability of artificial intelligence, but the circulation of sensitive information such as personal privacy has the risk of being leaked and abused. Once data forms an island due to security considerations, it will restrict the generation of data element value and the development of the AI industry. In addition, using copyrighted material to train AI models can lead to copyright disputes, while inputting information related to identified or identifiable natural persons can raise issues such as trade secret disclosure.

The social ethical risk cannot be ignored. Big data killing, gender discrimination, racial discrimination, regional discrimination, etc., may lead to social equity problems. Although AI is based on massive amounts of data, this data is often biased, making decisions based on this data likely to exacerbate social inequities. In addition, the design process of the algorithm may be affected by the value orientation of the developer, which makes the fairness of the transaction decision difficult to be effectively guaranteed.

The risk of technology misuse is of concern. The use of AI to produce fake news, fake news, fake accounts, fake voices, fake pictures and other behaviors has an increasingly serious impact on society. These behaviors may harm social and economic security, corporate personal reputation and personal property safety. With the continuous development of deep synthesis technology, the use of this technology for fraud, blackmail, framing, slander and other illegal acts and cases are common.

Finally, there is the risk of serious cyber attacks. Attackers may use security vulnerabilities in AI systems to carry out attacks, such as hijacking, attacking, blocking and interfering with AI learning, prediction, etc. In addition, attackers can also use artificial intelligence techniques to launch attacks, such as machine learning algorithms to infer a user's password or encry

AI Regulation: Global Calls for Strengthened AI Security Regulation

Ai governance has a bearing on the fate of all mankind and is a common task facing all countries in the world. Since the beginning of this year, many countries and organizations around the world have introduced initiatives or norms, and unanimously required to strengthen the safety supervision of artificial intelligence. Artificial intelligence bid farewell to extensive development and ushered in a synchronous stage of security and development.

2023110802.png

The Declaration encourages relevant actors to take appropriate measures, such as safety testing, evaluation, etc., to measure, monitor, and mitigate the potentially harmful capabilities of AI and its possible impacts, and to provide transparency and accountability. Countries are called upon to develop risk-based policies based on risk, including the development of appropriate assessment indicators, safety testing tools, and the development of public sector capacity and scientific research. We also resolve to support the establishment of an internationally inclusive, cutting-edge AI security science research network that includes and complements existing and new multilateral and bilateral cooperation mechanisms to promote the best science for decision-making and the public interest through existing international forums and other relevant initiatives.

On October 30, the Group of Seven (G7) issued an International Code of Conduct for organizations Developing Advanced Artificial Intelligence Systems. The 11-point code of conduct highlights measures to be taken during the development process to ensure reliability, security and assurance. Among them, developers need to identify and mitigate risks, including red team testing, testing, and mitigation measures. At the same time, developers also need to identify and reduce vulnerabilities, incidents, and misuse patterns after deployment, including monitoring vulnerabilities and incidents, and facilitating third parties and users to find and report issues. In addition, the guidelines highlight the importance of developing and deploying reliable content authentication and provenance mechanisms, such as watermarking. These measures will help ensure the safety and reliability of AI systems and increase user trust in them.

Also on October 30, US President Joe Biden officially issued the "Safe, Reliable and Trustworthy Artificial Intelligence" executive Order, which is the White House's first set of regulations on generative artificial intelligence. The executive order requires multiple U.S. government agencies to develop standards, test artificial intelligence products, seek the best methods for content verification such as "watermarking," develop cybersecurity plans, and attract technical talent to protect privacy, promote fairness and civil rights, safeguard the interests of consumers and workers, promote innovation and competition, and advance U.S. leadership. At the same time, the executive order states that it will protect U.S. users from AI fraud and deception by establishing standards for detecting AI-generated content and certifying official content.

On October 18, the CAC issued the "Global Artificial Intelligence Governance Initiative", with specific measures including promoting the establishment of a risk level test and evaluation system, implementing agile governance, classification and classification management, and rapid and effective response. Research and development entities need to improve the explainability and predictability of AI, improve the authenticity and accuracy of data, ensure that AI is always under human control, and create AI technologies that can be audited, supervised, traceable, and trusted. At the same time, actively develop the development and application of relevant technologies for artificial intelligence governance, support the use of artificial intelligence technology to prevent risks and improve governance capabilities. In addition, the initiative also emphasizes the gradual establishment and improvement of laws and regulations to protect personal privacy and data security in the research and development and application of artificial intelligence, and opposes illegal collection, theft, tampering and disclosure of personal information.

On July 13, the National Cyberspace Administration jointly issued the "Interim Measures for the Management of Generative Artificial Intelligence Services" with relevant state departments. Requiring generative artificial intelligence services with public opinion attributes or social mobilization capabilities, security assessments shall be carried out in accordance with the relevant provisions of the State, and in accordance with the Provisions on the Management of Internet Information Service algorithm Recommendation, algorithm filing and change, cancellation filing procedures.

In June this year, the European Parliament passed the draft authorization of the European Union Artificial Intelligence Act, which, if formally approved, will be the world's first regulation on AI. The bill classifies AI systems into four categories based on the level of risk, from minimal to unacceptable. Among them, "technology robustness and security" requires AI systems to minimize accidental harm during development and use, and to have robust capabilities to respond to unexpected problems in order to prevent malicious third parties from illegally using the system or changing its use or performance. In addition, the bill prohibits the creation or expansion of facial recognition databases by untargeted extraction of facial images from the Internet or CCTV footage, and prohibits the use of AI systems in this way to place on the market, put into service or use. For generative AI systems based on these models, the bill requires compliance with transparency requirements, that is, it must be disclosed that the content is generated by the AI system, and ensure that illegal content is prevented from being generated. In addition, when using copyrighted training data, a detailed summary of that data must be made public.

In addition, in late October, scholars such as Turing Award winners and the "Big Three of Artificial Intelligence" broke out a fierce debate on the control of artificial intelligence. A group of 24 Chinese and foreign artificial intelligence scientists have signed a statement calling for stricter controls on artificial intelligence technology. Establish an international regulatory body, subject advanced AI systems to mandatory registration and audit, introduce instant "shutdown" procedures, and require developers to spend 30% of their research budgets on AI safety.

Regulation: AI Security Regulation Suggestions Focus on Six Key Areas

Although there are differences in the priorities of regulation, and the artificial intelligence community and industry debate is fierce, governments have basically reached a consensus on strengthening artificial intelligence regulation. At present, the supervision of artificial intelligence mainly focuses on six aspects: security testing and evaluation, content authentication and watermark marking, prevention of information abuse, forced closure procedures, independent regulators, and risk identification and security.

2023110803.png

Safety testing and evaluation: Require safety testing and evaluation of AI systems to measure, monitor, and mitigate potentially harmful capabilities, and provide transparency and accountability. Developers are required to share security test results and other critical information with the government to ensure that the system is secure and reliable before release.

Content Authentication and watermarking: Establish standards for detecting AI-generated content and authenticating official content to protect users from AI fraud and deception. Emphasis is placed on the development and deployment of reliable content authentication and provenance mechanisms such as watermarking.

Ensure the safety of the face: the face is an important private data of artificial intelligence, and it is also an important application output, and it is particularly important to prevent abuse. It is prohibited to build or expand facial recognition databases by untargeted extraction of facial images from the Internet or CCTV footage. For generative AI systems, it is required to comply with transparency requirements, disclose how content is generated and prevent the generation of illegal content.

Risk identification and security: AI systems are required to be robust and secure, to minimize accidental harm, and to respond to problems and malicious use. Developers need to identify and reduce vulnerabilities, incidents, and misuse patterns after deployment, including monitoring vulnerabilities and incidents, and facilitating users and third parties to find and report issues.

Set up a forced shutdown: Introduce an instant "one-click shutdown" feature to prevent unexpected situations or malicious use of AI programs.

Establish independent tripartite institutions: promote the establishment of an international inclusive cutting-edge AI security scientific research network, establish an international regulatory body, and require advanced AI systems to undergo mandatory registration and audit.

Dingxiang: Four Abilities to Ensure AI Security

Based on the status quo of artificial intelligence risks and the security supervision needs of various countries, Dingxiang provides four security capabilities of artificial intelligence security detection, security intelligence, security defense and face security.

Ai system security testing: Conduct comprehensive security testing of AI applications, products and apps, identify and detect potential security vulnerabilities, and provide timely repair recommendations. This detection mechanism can prevent attackers from using security vulnerabilities to carry out malicious attacks.

Artificial intelligence threat intelligence: Dingxiang cloud service security intelligence provides artificial intelligence attack intelligence from many angles, combining technology and expert experience to comprehensively predict the threat mode of attackers. Help organizations respond and deploy accurately to protect their AI systems from potential threats. 20231104.jpg

Whole process security defense: Dingxiang defense cloud is used to harden artificial intelligence applications, apps, and devices and protect them with code confusion to improve their security. At the same time, the confusing encryption of artificial intelligence data communication transmission can prevent problems such as eavesdropping, tampering and fraudulent use in the process of information transmission. In addition, with the Dinsigh risk control decision engine of Dingxiang, the equipment environment can be comprehensively detected, all kinds of risks and abnormal operations can be found in real time, and the overall safety can be improved. In addition, the Xintell modeling platform of Dingxiang can provide strategic support for artificial intelligence security, timely mining potential risks and unknown threats.

Face application security protection: Dingxiang service security awareness defense platform Based on threat probe, stream computing, machine learning and other advanced technologies, set equipment risk analysis, operation attack identification, abnormal behavior detection, early warning, protection and disposal as one of the active security defense platform, can be found in real time camera hijacking, equipment forgery and other malicious behaviors, effectively prevent and control all kinds of face application risks. It features threat visualization, threat traceability, device association analysis, multi-account management, cross-platform support, active defense, open data access, defense customization, and whole-process prevention and control.

Through the four-fold security capability of Dingxiang, enterprises can better protect their AI systems from security risks and threats of attacks, improve the security of AI applications and meet the security regulatory needs of various countries.

2023-11-10
Copyright © 2024 AISECURIUS, Inc. All rights reserved
Hi! We are glad to have you here! Before you start visiting our Site, please note that for the best user experience, we use Cookies. By continuing to browse our Site, you consent to the collection, use, and storage of cookies on your device for us and our partners. You can revoke your consent any time in your device browsing settings. Click “Cookies Policy” to check how you can control them through your device.