Discussion on Compliance: With the EU AI Act Now in Effect, Are Chinese Companies Prepared for International Expansion?
On August 1, 2024, the Artificial Intelligence Act, a milestone in the European Union’s digital legislation, officially entered into force. It is the world’s first comprehensive AI law built on risk-based governance. Through the “Brussels Effect,” the EU aims to set global standards for responsible AI governance and hopes the Act will serve as a model for legislation in other countries.
By constructing an ex-ante “product compliance” regime, an in-process “full-lifecycle compliance” regime, and an ex-post “heavy penalty” regime, the EU seeks to balance development (innovation) against safety (rights): promoting the development, market placement, service provision, and use of AI while protecting people from AI-related harm, safeguarding health, safety, and fundamental rights at a high level, and supporting technological innovation.
For Chinese companies looking to enter the EU market, the best choice is undoubtedly to fulfill compliance obligations in advance: meet the technical requirements of the EU Artificial Intelligence Act, mitigate AI risks, avoid hefty penalties, and fully capture the benefits of the AI era.
We recommend the following strategies. First, define and control the boundaries of AI models and systems within the organization. Second, develop a detailed internal AI governance plan, implement continuous risk prevention, and institutionalize it. Third, focus on the cybersecurity, personal information protection, and data security of AI models and systems, following the zero-trust principle. Fourth, balance and decentralize AI risk, avoiding centralized system designs so that any single failure has limited impact. Fifth, manage data compliance and governance across the entire lifecycle of AI systems. Sixth, strengthen AI literacy education within the organization, ensuring in particular that users are trained. Seventh, pursue responsible and trustworthy AI through continuous ethical review, such as bias measurement.
In my recently published book, “Compliance Manual for the EU Artificial Intelligence Act,” I propose a nine-step framework to guide Chinese companies in complying with the Act.
Compliance Step 1: Confirm whether AI systems or models are involved.
First, the Act defines what constitutes an AI system, emphasizing five characteristics: machine-based operation, goal orientation, inference capability, autonomy, and adaptability. These distinguish AI systems from traditional software or rule-based systems whose rules are defined solely by natural persons. Second, the Act distinguishes AI systems from AI models: a model is an essential component of a system, but it does not constitute a system by itself; additional components, such as a user interface, are needed to turn a model into a system.
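For teams screening an internal software inventory, these five characteristics can be turned into a first-pass checklist. The Python sketch below is a simplified illustration, not the Act’s legal test; the field names and the all-characteristics-must-hold rule are our own assumptions.

```python
from dataclasses import dataclass, fields

@dataclass
class SystemProfile:
    """Illustrative checklist for the five definitional characteristics."""
    machine_based: bool   # runs on a machine rather than as a purely human process
    goal_oriented: bool   # operates toward explicit or implicit objectives
    infers: bool          # infers outputs (predictions, content, decisions) from inputs
    autonomous: bool      # acts with some independence from human control
    adaptive: bool        # may adapt its behavior after deployment

def may_be_ai_system(profile: SystemProfile) -> bool:
    # Simplification: flag for legal review only when every characteristic holds;
    # the Act's actual definition is more nuanced.
    return all(getattr(profile, f.name) for f in fields(profile))

# A fixed rule engine whose rules are written entirely by natural persons
# fails the inference test and falls outside the definition.
rule_engine = SystemProfile(machine_based=True, goal_oriented=True,
                            infers=False, autonomous=False, adaptive=False)
print(may_be_ai_system(rule_engine))  # False
```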
Compliance Step 2: Confirm the legal status of the entities involved.
The second step is to determine the legal status of the entities involved in each concrete scenario, including providers, deployers, importers, distributors, product manufacturers, EU authorized representatives, and affected persons, with particular care to distinguish the legal responsibilities of AI system providers from those of deployers.
Compliance Step 3: Confirm whether it falls within the scope of the Artificial Intelligence Act.
First, determine whether any of the seven types of entities mentioned above is involved. Second, assess whether the case concerns high-risk AI systems under Article 6(1) that relate to products covered by the EU harmonization legislation listed in Annex I, Section B. Finally, evaluate whether any exemption applies, such as the national security, law enforcement and judicial cooperation, research and development, non-professional personal use, worker protection, or free and open-source clauses.
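A rough illustration of this screening appears below. All labels are our own shorthand for the categories named above, not terms defined in the Act, and the one-exemption-ends-the-analysis rule is a deliberate simplification.

```python
# Hypothetical first-pass scope screen with shorthand labels.
ROLES = {"provider", "deployer", "importer", "distributor",
         "product_manufacturer", "authorized_representative", "affected_person"}
EXEMPTIONS = {"national_security", "law_enforcement_judicial_cooperation",
              "research_and_development", "personal_non_professional",
              "worker_protection", "free_and_open_source"}

def prima_facie_in_scope(role: str, claimed_exemptions: set[str]) -> bool:
    """Return True when an activity appears to fall under the Act."""
    if role not in ROLES:
        return False
    # Deliberate simplification: any recognized exemption ends the analysis.
    # In reality several exemptions carry carve-backs (open-source systems,
    # for instance, remain covered when they qualify as high-risk).
    return not (claimed_exemptions & EXEMPTIONS)

print(prima_facie_in_scope("provider", set()))                         # True
print(prima_facie_in_scope("provider", {"research_and_development"}))  # False
```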
Compliance Step 4: Confirm whether it involves prohibited AI behaviors.
First, clarify the specific scope of prohibited AI practices: using subliminal, manipulative, or deceptive techniques to distort behavior; exploiting vulnerabilities related to age, disability, or socioeconomic status to distort behavior; social scoring that leads to detrimental or unfavorable treatment; predicting criminal risk based solely on profiling or personality traits; building facial recognition databases through untargeted scraping of facial images; inferring individuals’ emotions in workplaces or educational institutions; categorizing individuals via biometric systems to infer sensitive data; and using real-time remote biometric identification systems for law enforcement purposes in publicly accessible spaces. Second, define the narrow “exemption” conditions under which real-time remote biometric identification may be used for law enforcement in public places: searching for specific victims of crime; preventing threats to life or personal safety or a terrorist attack; and locating or identifying suspects or perpetrators of offenses punishable by a custodial sentence of at least four years. Even then, the system may be used only to confirm the identity of the specifically targeted individual, and only where strictly necessary in time, geography, and the persons covered.
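This exemption logic lends itself to a compact decision gate. The sketch below is a toy model under our own assumptions; the actual regime additionally requires prior judicial or administrative authorization and national implementing law, which this function does not capture.

```python
from enum import Enum, auto

class RbiPurpose(Enum):
    TARGETED_VICTIM_SEARCH = auto()        # searching for specific crime victims
    IMMINENT_THREAT_OR_TERRORISM = auto()  # threats to life or terrorist attacks
    LOCATE_SERIOUS_CRIME_SUSPECT = auto()  # offense punishable by >= 4 years
    OTHER = auto()

def rbi_use_permissible(purpose: RbiPurpose, limited_in_time: bool,
                        limited_in_geography: bool, limited_to_persons: bool) -> bool:
    """Toy gate for real-time remote biometric identification in public
    places for law enforcement; prior authorization and national
    implementing law, also required in practice, are omitted here."""
    if purpose is RbiPurpose.OTHER:
        return False
    # Use must be strictly necessary in time, geography, and persons covered.
    return limited_in_time and limited_in_geography and limited_to_persons

print(rbi_use_permissible(RbiPurpose.TARGETED_VICTIM_SEARCH, True, True, True))  # True
print(rbi_use_permissible(RbiPurpose.OTHER, True, True, True))                   # False
```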
Compliance Step 5: Confirm whether it involves high-risk AI systems.
First, define the classification rules and the specific scope of high-risk AI systems. The positive list covers AI systems that are safety components of products (or are themselves products) subject to the EU harmonization legislation in Annex I and to third-party conformity assessment, as well as the Annex III categories: biometric systems permitted by law, critical infrastructure, education and vocational training, employment and worker management, access to essential services and benefits, legally authorized law enforcement, migration, asylum, and border control management, and the administration of justice and democratic processes. The negative list covers systems that only perform narrow procedural tasks, improve the result of previously completed human activity, detect decision-making patterns or deviations from them, or carry out preparatory work, provided they do not profile natural persons. Second, define the requirements for high-risk AI systems: a risk management system, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness, and cybersecurity. Third, define the obligations of providers of high-risk AI systems: beyond meeting the above requirements, they must maintain a quality management system, keep documentation, generate logs automatically, complete conformity assessments, issue declarations of conformity, affix the CE mark, register the system, and take corrective actions where necessary. Finally, in specific cases, fundamental rights impact assessments (required of certain deployers) and additional conformity assessments also apply.
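The interplay of the positive list, the negative list, and the profiling override can be made concrete in code. The sketch below is a simplified decision function using our own labels; in particular, it compresses the Annex I conformity-assessment condition into a single boolean.

```python
# Our own shorthand for the Annex III areas and the negative-list carve-outs.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education_vocational_training",
    "employment_worker_management", "essential_services_and_benefits",
    "law_enforcement", "migration_asylum_border", "justice_and_democracy",
}
NARROW_USES = {
    "narrow_procedural_task", "improve_prior_human_output",
    "detect_decision_patterns_or_deviations", "preparatory_work",
}

def is_high_risk(annex_i_safety_component: bool, annex_iii_area: str | None,
                 narrow_use: str | None, profiles_natural_persons: bool) -> bool:
    """Simplified classifier: Annex I safety components are high-risk;
    Annex III systems are high-risk unless a narrow-use carve-out applies,
    and profiling of natural persons always defeats the carve-out."""
    if annex_i_safety_component:
        return True
    if annex_iii_area in ANNEX_III_AREAS:
        return profiles_natural_persons or narrow_use not in NARROW_USES
    return False

# A CV-screening tool for hiring is high-risk; a purely preparatory helper
# in the same area, with no profiling, is not.
print(is_high_risk(False, "employment_worker_management", None, False))                # True
print(is_high_risk(False, "employment_worker_management", "preparatory_work", False))  # False
```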
Compliance Step 6: Confirm whether it involves specific AI systems.
This step concerns the transparency obligations attached to specific AI systems: those of providers of generative AI systems, of deployers of emotion recognition or biometric classification systems, and of deployers of AI systems that generate deepfake content.
Compliance Step 7: Confirm whether it involves general-purpose AI models.
First, define the obligations of providers of general-purpose AI models: preparing technical documentation, compiling information and documentation for downstream providers, establishing a policy to respect copyright, and publishing a sufficiently detailed summary of the content used for model training. Second, define the additional obligations of providers of general-purpose AI models that pose systemic risks: model evaluation, systemic risk assessment and mitigation, tracking, documenting, and reporting serious incidents, and ensuring adequate cybersecurity.
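One concrete trigger is worth noting here: under the Act, systemic risk is presumed when a model’s cumulative training compute exceeds 10^25 floating-point operations, and the Commission may also designate models directly. The sketch below maps that trigger onto the two duty sets; the duty labels are our own shorthand.

```python
# Systemic risk is presumed above 10**25 FLOPs of cumulative training
# compute; the Commission may also designate models directly.
SYSTEMIC_RISK_FLOPS = 10 ** 25

BASELINE_DUTIES = ["technical_documentation", "information_for_downstream_providers",
                   "copyright_policy", "training_content_summary"]
SYSTEMIC_EXTRAS = ["model_evaluation", "systemic_risk_assessment",
                   "incident_tracking_and_reporting", "cybersecurity"]

def gpai_duties(training_flops: float, designated: bool = False) -> list[str]:
    """Return the (simplified) duty list for a general-purpose AI model provider."""
    if designated or training_flops > SYSTEMIC_RISK_FLOPS:
        return BASELINE_DUTIES + SYSTEMIC_EXTRAS
    return BASELINE_DUTIES

print(len(gpai_duties(5e24)))  # 4 -> baseline duties only
print(len(gpai_duties(2e25)))  # 8 -> systemic-risk duties added
```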
Compliance Step 8: Confirm the supervisory authorities and penalties involved.
First, determine the regulatory framework. At EU level, the bodies include the AI Office, the European Artificial Intelligence Board, an advisory forum, and a scientific panel of independent experts; at member-state level, they are primarily the market surveillance authorities and notifying authorities. Second, determine the specific penalties, each capped at the higher of a fixed amount and a share of total worldwide annual turnover for the preceding financial year: for prohibited AI practices, up to €35 million or 7% of turnover; for violations concerning high-risk and specific AI systems, up to €15 million or 3%; for supplying incorrect, incomplete, or misleading information to authorities, up to €7.5 million or 1%; and for providers of general-purpose AI models, up to €15 million or 3%.
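Because each tier takes the higher of a fixed cap and a turnover percentage, the effective ceiling scales with company size. A minimal sketch of that arithmetic, assuming the standard (non-SME) rule:

```python
def fine_ceiling(fixed_cap_eur: float, turnover_share: float,
                 worldwide_turnover_eur: float) -> float:
    """Ceiling for one penalty tier: the higher of the fixed cap and the
    turnover share (standard rule; for SMEs the lower of the two applies)."""
    return max(fixed_cap_eur, turnover_share * worldwide_turnover_eur)

# Prohibited-practice tier for a company with EUR 2 billion worldwide turnover:
print(fine_ceiling(35_000_000, 0.07, 2_000_000_000))  # 140000000.0
```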
Compliance Step 9: Confirm whether the AI regulatory sandbox applies.
AI development and innovation need a fault-tolerant, flexible mechanism, which the AI regulatory sandbox provides. The specific rules cover objectives, exemption from administrative fines, implementing legislation, the processing of personal data, the conditions for real-world testing, and accommodations for small and medium-sized enterprises.
(The author is the Executive Director of the Data Law Research Center at Shanghai Jiao Tong University.)