Current efforts, gaps and building blocks for an international AI governance framework.
We need international coordination to ensure the benefits of AI are distributed across the globe. Poor coordination is likely to concentrate benefits and power in the hands of a few.
Risks and harms from AI are transboundary: they are not contained to a single jurisdiction and will affect all of humanity as they continue to materialize.
To reduce the complexity of international AI governance, it is useful to divide it into components or 'building blocks' that can be analyzed in more depth.
Based on the findings of recent comparative analyses of analogous international governance regimes (most notably, Villalobos, Maas and Winter, forthcoming; Maas and Villalobos, 2023; and Cass-Beggs, Clare and others, 2024), we consider the following building blocks for international AI governance:
Red lines, safety standards, technical cooperation, emergency response, benefit sharing, compliance mechanisms and institutional arrangements.
Ensuring that we all benefit from AI while its risks are sufficiently mitigated will require one or more international institutions that implement and enforce international rules on AI. Dozens of institutional designs for AI governance have been recommended by experts, States and international organizations. With the potential exception of the forthcoming UN Independent International Scientific Panel on AI and the Global Dialogue on AI Governance, these proposed institutions have yet to be created.
The most effective international agreements contain robust verification and enforcement mechanisms. States, whether adversarial or not, may be reluctant to enter an international agreement on AI without reliable, secure and privacy-preserving methods to produce timely information on compliance. Hardware-enabled mechanisms have emerged as promising policy solutions for verification and enforcement in international AI governance.
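To make the idea of hardware-enabled verification concrete, the sketch below illustrates one hypothetical flow: an AI accelerator periodically emits a signed usage report, and a verifier checks the report's authenticity before comparing declared compute use against an agreed threshold. This is an assumption-laden illustration, not a description of any existing mechanism or standard: the field names, the compute budget, and the use of a shared HMAC key (standing in for the hardware-rooted asymmetric key a real attestation scheme would rely on) are all invented for clarity.

```python
import hashlib
import hmac
import json

# Hypothetical illustration of a hardware-enabled compliance check.
# The shared key stands in for a key held in a secure hardware element;
# the threshold is an invented, treaty-style compute budget.
DEVICE_KEY = b"device-root-key"
AGREED_COMPUTE_BUDGET_FLOP = 1e25


def sign_usage_report(report: dict) -> bytes:
    """Device side: sign a canonical encoding of the usage report."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()


def verify_usage_report(report: dict, signature: bytes) -> bool:
    """Verifier side: check authenticity, then check compliance with the budget."""
    payload = json.dumps(report, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # report was tampered with or not produced by the device
    return report["total_flop"] <= AGREED_COMPUTE_BUDGET_FLOP


if __name__ == "__main__":
    report = {"device_id": "accelerator-001", "period": "2025-Q1", "total_flop": 8e24}
    signature = sign_usage_report(report)
    print("compliant:", verify_usage_report(report, signature))
```

The design intent such mechanisms aim at is that the verifier learns only whether an agreed limit was respected, not proprietary details of the workload, which is why privacy-preserving reporting is emphasized alongside reliability and security.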
While AI poses serious risks, it also promises to be one of the most positively transformative technologies in the history of humanity. However, it is almost certain that AI benefits will not be distributed evenly or be widely accessible by default. Conscious of this reality, and wary of being left behind, many countries that are not at the frontier of AI development are likely to condition their participation in an international AI governance regime on the distribution of its benefits.
Advanced AI systems pose a plausible threat to international security given that they can facilitate cyberattacks on critical national infrastructure or aid malicious actors in the creation of weapons of mass destruction. These and other scenarios could lead to global emergencies, with transboundary consequences that require a coordinated international response.
As an emerging and fast-moving technology, AI presents significant challenges for the identification and evaluation of risks, as well as for technological sovereignty. International technical cooperation among States and other actors may be required to build capacity, reach scientific consensus on the risks and benefits of AI, conduct joint safety and security research, develop technical mechanisms for verification and enforcement, and apply AI for good, among other areas.
Many leading AI experts and governments agree that certain practices relating to AI should be internationally prohibited to reduce the likelihood of global-scale harm. Top scientists from the United States and China agree that red lines for AI development should include autonomous replication or improvement, power seeking, assistance with weapon development, cyberattacks and deception.
Many international efforts on AI governance focus on the development of principles and best practices to mitigate risks from AI. These can serve as the basis for international technical standards that establish requirements for advanced AI developers and enhance coherence, harmonization and regulatory effectiveness across jurisdictions.