Bridging the Artificial Intelligence Governance Gap: The United States' and China's Divergent Approaches to Governing General-Purpose Artificial Intelligence
- URL: http://arxiv.org/abs/2506.03497v1
- Date: Wed, 04 Jun 2025 02:24:27 GMT
- Title: Bridging the Artificial Intelligence Governance Gap: The United States' and China's Divergent Approaches to Governing General-Purpose Artificial Intelligence
- Authors: Oliver Guest, Kevin Wei
- Abstract summary: The U.S. and China are among the world's top players in the development of advanced artificial intelligence (AI) systems. A look at U.S. and Chinese policy landscapes reveals differences in how the two countries approach the governance of AI systems.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The United States and China are among the world's top players in the development of advanced artificial intelligence (AI) systems, and both are keen to lead in global AI governance and development. A look at U.S. and Chinese policy landscapes reveals differences in how the two countries approach the governance of general-purpose artificial intelligence (GPAI) systems. Three areas of divergence are notable for policymakers: the focus of domestic AI regulation, key principles of domestic AI regulation, and approaches to implementing international AI governance. As AI development continues, the global conversation has increasingly warned of safety and security challenges posed by GPAI systems. Cooperation between the United States and China may be needed to address these risks, and understanding the implications of these differences can help address the broader challenges to international cooperation between the two countries on AI safety and security.
Related papers
- The Singapore Consensus on Global AI Safety Research Priorities [128.58674892183657]
"2025 Singapore Conference on AI (SCAI): International Scientific Exchange on AI Safety" aimed to support research in this space. The report builds on the International AI Safety Report chaired by Yoshua Bengio and backed by 33 governments. It organises AI safety research domains into three types: challenges with creating trustworthy AI systems (Development), challenges with evaluating their risks (Assessment), and challenges with monitoring and intervening after deployment (Control).
arXiv Detail & Related papers (2025-06-25T17:59:50Z) - Promising Topics for U.S.-China Dialogues on AI Risks and Governance [0.0]
Despite strategic competition, there exist concrete opportunities for bilateral U.S.-China cooperation in the development of responsible AI. We analyze more than 40 primary AI policy and corporate governance documents from both nations. Our analysis contributes to understanding how different international governance frameworks might be harmonized to promote global responsible AI development.
arXiv Detail & Related papers (2025-05-12T11:56:19Z) - AI Governance to Avoid Extinction: The Strategic Landscape and Actionable Research Questions [2.07180164747172]
Humanity appears to be on course to soon develop AI systems that substantially outperform human experts. We believe the default trajectory has a high likelihood of catastrophe, including human extinction. Risks come from failure to control powerful AI systems, misuse of AI by malicious rogue actors, war between great powers, and authoritarian lock-in.
arXiv Detail & Related papers (2025-05-07T17:35:36Z) - AI threats to national security can be countered through an incident regime [55.2480439325792]
We propose a legally mandated post-deployment AI incident regime that aims to counter potential national security threats from AI systems. Our proposed AI incident regime is split into three phases. The first phase revolves around a novel operationalization of what counts as an 'AI incident'. The second and third phases spell out that AI providers should notify a government agency about incidents, and that the government agency should be involved in amending AI providers' security and safety procedures.
arXiv Detail & Related papers (2025-03-25T17:51:50Z) - Position: Mind the Gap-the Growing Disconnect Between Established Vulnerability Disclosure and AI Security [56.219994752894294]
We argue that adapting existing processes for AI security reporting is doomed to fail because of fundamental mismatches with the distinctive characteristics of AI systems. Based on our proposal to address these shortcomings, we discuss an approach to AI security reporting and how the new AI paradigm, AI agents, will further reinforce the need for specialized AI security incident reporting.
arXiv Detail & Related papers (2024-12-19T13:50:26Z) - AI, Global Governance, and Digital Sovereignty [1.3976439685325095]
We argue that AI systems will embed in global governance to create dueling dynamics of public/private cooperation and contestation.
We conclude by sketching future directions for IR research on AI and global governance.
arXiv Detail & Related papers (2024-10-23T00:05:33Z) - US-China perspectives on extreme AI risks and global governance [0.0]
We sought to better understand how experts in each country describe safety and security threats from advanced artificial intelligence.
We focused our analysis on advanced forms of artificial intelligence, such as artificial general intelligence (AGI).
Experts in both countries expressed concern about risks from AGI, risks from intelligence explosions, and risks from AI systems that escape human control.
arXiv Detail & Related papers (2024-06-23T17:31:27Z) - Taking control: Policies to address extinction risks from AI [0.0]
We argue that voluntary commitments from AI companies would be an inappropriate and insufficient response.
We describe three policy proposals that would meaningfully address the threats from advanced AI.
arXiv Detail & Related papers (2023-10-31T15:53:14Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - International Institutions for Advanced AI [47.449762587672986]
International institutions may have an important role to play in ensuring advanced AI systems benefit humanity.
This paper identifies a set of governance functions that could be performed at an international level to address these challenges.
It groups these functions into four institutional models that exhibit internal synergies and have precedents in existing organizations.
arXiv Detail & Related papers (2023-07-10T16:55:55Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.