Current Business Matters | The European Union is Taking On AI

This month's piece focuses on the European Union AI Act, on which a provisional agreement was reached on December 8, 2023. The agreement is a historic milestone on the way to the world's first comprehensive legislation on artificial intelligence and AI governance. Below, we look at the Act's system of AI risk classification, its governance structures, and the consequences of non-compliance.

Results of the final Artificial Intelligence Act trilogue negotiations 

On December 8, 2023, the European Union institutions (the Commission, the Council, and the Parliament) reached a provisional agreement on the groundbreaking Artificial Intelligence (AI) Act.

This marks a historic step towards enacting the world's first comprehensive legislation on artificial intelligence. Before the Act becomes EU law, its technical details must be finalized, after which the final text must be formally adopted by both the Parliament and the Council and published in the Official Journal. Adoption is therefore possible by mid-2024. The AI Act provides for a 24-month transition period; however, the prohibitions will apply after six months, and the rules for general-purpose AI will take effect after 12 months.

The primary goal of the law is to ensure that AI systems entering the European market and operating within the EU are safe and respect fundamental rights and the values of the European Union. The agreement addresses critical aspects, including a precise definition of artificial intelligence, transparency requirements, the establishment of effective governance structures, and bans on certain high-risk AI applications. To clearly delineate AI systems from simpler software, the compromise aligns the definition with the framework proposed by the OECD. The final AI Act trilogue negotiation took 36 hours in total and resolved 21 outstanding issues.

The AI Act categorizes AI applications into four risk classes (unacceptable risk, high risk, limited risk, minimal risk) that determine the scope of the applicable legal obligations: the greater an application's potential to cause harm, the stricter its regulation. The Act covers applications that generate content, deliver predictions and recommendations, or influence user decision-making. Crucially, its scope goes beyond commercial applications to include the use of AI in the public sector, particularly in areas such as law enforcement. The agreement prohibits cognitive behavioral manipulation, the untargeted scraping of facial images from the internet or CCTV footage, social scoring, and biometric categorization systems aimed at inferring political, religious, or philosophical beliefs, sexual orientation, or race.
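To make the tiered logic concrete, here is a minimal Python sketch that models the four risk classes and their broad consequences. The example applications and the one-line obligation summaries are illustrative assumptions for this sketch, not quotations from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk classes of the provisional agreement."""
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # permitted, but under strict obligations
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical examples for illustration; the real classification follows
# the Act's detailed legal criteria, not a simple lookup table.
EXAMPLE_APPLICATIONS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "CV screening for hiring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Rough, simplified summary of what each tier implies."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited on the EU market",
        RiskTier.HIGH: "conformity assessment, documentation, human oversight",
        RiskTier.LIMITED: "transparency requirements",
        RiskTier.MINIMAL: "no specific obligations",
    }[tier]

for app, tier in EXAMPLE_APPLICATIONS.items():
    print(f"{app}: {tier.value} risk -> {obligations(tier)}")
```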

Regarding governance, national competent market surveillance authorities will oversee the enforcement of the new rules at the national level, while a new European AI Office within the European Commission will coordinate enforcement at the European level. Non-compliant companies face financial penalties: fines for offenses involving prohibited AI applications can reach €35 million or 7% of global annual turnover, whichever is higher; violations of other obligations stipulated in the rules can incur fines of €15 million or 3%; and supplying incorrect information in the context of AI compliance can incur fines of €7.5 million or 1.5%.
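As a quick arithmetic illustration of this penalty structure, the sketch below computes the applicable cap as the higher of the fixed amount and the turnover percentage. The €2 billion turnover figure is a made-up example:

```python
def fine_cap_eur(fixed_cap_eur: float, turnover_share: float,
                 global_turnover_eur: float) -> float:
    """Upper bound of a fine: the higher of a fixed amount and a
    share of global annual turnover."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

turnover = 2_000_000_000  # hypothetical €2 billion global annual turnover

print(fine_cap_eur(35_000_000, 0.07, turnover))   # prohibited AI: 140,000,000
print(fine_cap_eur(15_000_000, 0.03, turnover))   # other violations: 60,000,000
print(fine_cap_eur(7_500_000, 0.015, turnover))   # incorrect information: 30,000,000
```

For a company of this size, the turnover-based percentage dominates in every tier; for a small firm, the fixed amount would set the cap instead.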

A key theme of the negotiations was whether and how foundation models ought to be regulated. A foundation model is an AI model trained on broad, diverse datasets, which enables its application across a wide spectrum of use cases. Although the original proposal contained no such provisions, the provisional compromise now covers general-purpose AI (GPAI) systems and foundation models. The legislators agreed on a tiered approach that distinguishes between GPAI models carrying systemic risks and all other GPAI models: a model is deemed to pose a systemic risk if the computational resources used to train it exceed a specified threshold. All models must meet information-sharing and basic documentation requirements, including compliance with EU copyright law and disclosure of the content used in training. Models identified as posing systemic risks must additionally undergo adversarial testing, report serious incidents to the authorities, implement robust cybersecurity measures, and meet environmental standards.
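The tiered approach can be summarized in a short sketch. The training-compute threshold of 10^25 floating-point operations is an assumption based on public reporting about the provisional agreement and is not stated in the text above; the duty lists are simplified paraphrases:

```python
# Assumed threshold (reportedly 1e25 FLOPs in the provisional agreement);
# treat this figure as an assumption, not a quotation from the Act.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

BASELINE_DUTIES = [
    "technical documentation",
    "information sharing",
    "EU copyright compliance",
    "disclosure of training content",
]
SYSTEMIC_RISK_DUTIES = BASELINE_DUTIES + [
    "adversarial testing",
    "reporting of serious incidents",
    "robust cybersecurity measures",
    "environmental standards",
]

def gpai_duties(training_flops: float) -> list[str]:
    """Return the simplified obligation set for a GPAI model."""
    if training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD:
        return SYSTEMIC_RISK_DUTIES  # presumed to pose systemic risk
    return BASELINE_DUTIES

print(gpai_duties(5e24))  # baseline obligations only
print(gpai_duties(3e25))  # systemic-risk obligations apply
```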

AmCham Germany's Position   

AmCham Germany strongly advocates the elimination of obstacles for companies, especially those operating across borders. As highlighted in a previous Current Business Matters text, there is a clear need for an internationally accepted framework for AI regulation, and the transatlantic partners need to collaborate on widely accepted norms and standards. AI technology and its applications are evolving at an unprecedented pace, which necessitates a governance structure that can keep up with these advancements. Regulating AI effectively, building an internationally accepted framework, and implementing the AI Act successfully will therefore remain challenges in the years ahead. The experience with the GDPR has shown how difficult and time-consuming the implementation phase can be.

For more detailed information, please contact:

Heather Liermann

Head of Department

Membership Engagement & Development