Notably, the surge of public interest in AI - and the resulting pressure on policymakers - has only taken root with the boom of generative AI, i.e., artificial intelligence that creates various types of media, over the span of barely a year. This article explores the current global landscape of AI governance, focusing on the transatlantic partnership, the Trade and Technology Council (TTC) and the G7(+) Hiroshima Process.
The United States
On October 30, 2023, the Biden-Harris Administration issued the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The order instructs federal agencies overseeing various sectors to establish standards and regulations for AI use. It includes guidance on responsible AI deployment in areas like healthcare and criminal justice, with a focus on protecting civil rights. The directive also outlines guidelines for AI developers and introduces new reporting and testing requirements for companies handling large AI models. Emphasis is placed on encouraging the responsible creation and use of safer AI systems. This follows the voluntary safety commitments from leading AI companies in the U.S. (July 2023), the National Institute of Standards and Technology's AI Risk Management Framework (January 2023), and the White House Office of Science and Technology Policy's non-binding Blueprint for an AI Bill of Rights (October 2022).
The European Union
AI policy within the EU centers on the AI Act, the first comprehensive, far-reaching law on AI. At the heart of this legislative initiative lies a risk-based classification system designed to assess the potential risk an AI technology may pose to an individual's health, safety, or fundamental rights. The framework comprises four risk categories: unacceptable, high, limited, and minimal. Obligations differ across the four categories, ranging from little to no rules up to complete prohibition.
The EU's legislative process on the AI Act began in April 2021. The bill is currently in the final phase of that process, as the key EU institutions convene in trilogues to finalize its central provisions. The major challenge in the inter-institutional negotiations is the regulation of foundation models, i.e., large pre-trained artificial intelligence models that serve as the basis for a variety of natural language processing and machine learning tasks. Here, the Spanish Council presidency supports a tiered approach with prescriptive regulation, whereas a French-German-Italian coalition advocates mandatory self-regulation through a code of conduct without an initial sanction regime. On Sunday, November 19, the Commission attempted to revive the tiered approach with a new compromise proposal. With just a few months remaining until the European Parliament elections in 2024, reaching a compromise is critical, and the next trilogue on December 6, 2023 is a pivotal deadline.
Trade and Technology Council
Within the Trade and Technology Council (TTC), established in 2021, the EU and the U.S. jointly work on AI-related initiatives. In May 2023, they announced plans to develop a voluntary code of conduct. Its objective, once finalized, is to harmonize standards across jurisdictions by creating non-binding international guidelines for companies developing AI systems, pre-empting legislation in individual countries. This follows the joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management from December 2022. The roadmap will contribute to establishing a shared repository of metrics for evaluating the trustworthiness of AI and methods for managing associated risks. Furthermore, the TTC has advanced shared terminologies and taxonomies by defining 65 key AI terms.
G7 Hiroshima Process
The Hiroshima Process International Code of Conduct for Advanced AI Systems, agreed upon by the seven industrialized member countries on October 30, 2023, aims to ensure the safe and trustworthy deployment of advanced AI systems globally. Intended for various organizations, including those in academia, civil society, the private sector, and the public sector, its eleven principles build upon the OECD AI Principles. The document provides a framework for AI to maximize benefits and address associated risks and challenges. The code encourages companies to adopt suitable measures for recognizing, assessing, and mitigating risks throughout the AI lifecycle. It also emphasizes addressing incidents and patterns of misuse that may arise after AI products have been introduced to the market.
AmCham Germany's Position
Collaborative efforts within the transatlantic partnership are essential to establish a widely accepted framework for international AI regulation. A primary objective should be to eliminate obstacles for companies operating across borders by avoiding discrepancies in rules and technical standards and preventing excessive regulation. Given the significant global interest in the advancements and applications of AI, there exists a tangible opportunity for international cooperation, driven by the widespread preference for open technical systems and markets.
AmCham Germany strongly advocates for alignment between the U.S. and the EU. Despite variations in their domestic approaches to AI, both prioritize a risk-based strategy with a clear emphasis on security. It is crucial to align the overall priorities on the international stage to foster a cohesive and effective approach to global AI regulation.