Towards an Atomic Agency for Quantum-AI
- URL: http://arxiv.org/abs/2505.11515v1
- Date: Tue, 06 May 2025 14:17:43 GMT
- Title: Towards an Atomic Agency for Quantum-AI
- Authors: Mauritz Kop
- Abstract summary: This essay analyzes emerging AI & quantum technology (incl. quantum-AI hybrids) regulation, export controls, and standards in the US, EU, & China. It posits risks from a US 'Washington effect' (premature regulation under uncertainty) and a Chinese 'Beijing effect' (exporting autocratic norms via standards/Digital Silk Road). It explores pathways toward a harmonized Quantum Acquis Planetaire, anchored in universal values ('what connects us') via foundational standards and agile legal guardrails.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This essay analyzes emerging AI & quantum technology (incl. quantum-AI hybrids) regulation, export controls, and standards in the US, EU, & China, comparing legislative efforts anno 2025 to balance benefits/risks via their distinct innovation systems. While finding convergence on needing responsible governance despite differing philosophies, it posits risks from a US 'Washington effect' (premature regulation under uncertainty) and a Chinese 'Beijing effect' (exporting autocratic norms via standards/Digital Silk Road), exacerbated by export controls and decoupling. Faced with planetary challenges, it explores pathways toward a harmonized Quantum Acquis Planetaire, anchored in universal values ('what connects us') via foundational standards and agile legal guardrails. Smart regulation must incentivize responsible behavior (e.g., RQT by design) and ensure equitable benefit/risk distribution, requiring cooperative stewardship and strategic Sino-American recoupling. This could be coupled with collaborative research platforms for quantum and AI (which are increasingly interdependent) akin to CERN or ITER - emulating successful international resource pooling to foster coordinated responsible innovation. Realizing goals like fault tolerant quantum-centric supercomputing, algorithmic development and use case discovery requires such collective global expertise, challenging protectionist measures that stifle collaboration and supply chains. The Quantum Acquis Planetaire, envisioned as a global body of Quantum Law, could be codified via a UN Quantum Treaty inspired by precedents like the 2024 UN AI Resolution and 1968 Nuclear Non-Proliferation Treaty (NPT), designed to align quantum advancements with global imperatives such as the UN SDGs. To enforce it, manage arms race risks, and ensure non-proliferation, an 'Atomic Agency for Quantum-AI' (modeled on IAEA safeguards) warrants serious examination.
Related papers
- The Nexus of Quantum Technology, Intellectual Property, and National Security: An LSI Test for Securing the Quantum Industrial Commons [0.0]
Quantum technologies have moved from laboratory curiosities to strategic infrastructure. China's quantum program is centrally mobilized under military-civil fusion. The U.S. and its allies should pursue security-sufficient openness, operationalized through a least-trade-restrictive, security-sufficient, innovation-preserving (LSI) test.
arXiv Detail & Related papers (2026-02-11T04:21:56Z)
- Towards a European Quantum Act: A Two-Pillar Framework for Regulation and Innovation [0.0]
Quantum technologies promise transformative advancements but pose significant dual-use risks. Realizing their potential while mitigating risks requires a robust, anticipatory, and harmonized EU regulatory framework. We propose the EU Quantum Act should be a two-pillar instrument, combining New Legislative Framework-style regulation with an ambitious Chips Act-style industrial and security policy.
arXiv Detail & Related papers (2025-09-13T16:25:25Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- The Singapore Consensus on Global AI Safety Research Priorities [128.58674892183657]
The "2025 Singapore Conference on AI (SCAI): International Scientific Exchange on AI Safety" aimed to support research in this space. The report builds on the International AI Safety Report chaired by Yoshua Bengio and backed by 33 governments. It organises AI safety research domains into three types: challenges with creating trustworthy AI systems (Development), challenges with evaluating their risks (Assessment), and challenges with monitoring and intervening after deployment (Control).
arXiv Detail & Related papers (2025-06-25T17:59:50Z)
- Enhancing Trust Through Standards: A Comparative Risk-Impact Framework for Aligning ISO AI Standards with Global Ethical and Regulatory Contexts [0.0]
ISO standards aim to foster responsible development by embedding fairness, transparency, and risk management into AI systems. Their effectiveness varies across diverse regulatory landscapes, from the EU's risk-based AI Act to China's stability-focused measures. This paper introduces a novel Comparative Risk-Impact Assessment Framework to evaluate how well ISO standards address ethical risks.
arXiv Detail & Related papers (2025-04-22T00:44:20Z)
- Do LLMs trust AI regulation? Emerging behaviour of game-theoretic LLM agents [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, and users, modelling their strategic choices under different regulatory scenarios. Our research identifies emerging behaviours of strategic AI agents, which tend to adopt more "pessimistic" stances than pure game-theoretic agents.
arXiv Detail & Related papers (2025-04-11T15:41:21Z)
- Cyber Threats in Financial Transactions -- Addressing the Dual Challenge of AI and Quantum Computing [0.0]
The financial sector faces escalating cyber threats amplified by artificial intelligence (AI) and the advent of quantum computing. The report analyzes these threats, relevant frameworks, and possible countermeasures like quantum cryptography. The financial industry must adopt a proactive and adaptive approach to cybersecurity.
arXiv Detail & Related papers (2025-03-19T20:16:27Z)
- Envisioning Responsible Quantum Software Engineering and Quantum Artificial Intelligence [7.827152992676682]
The convergence of Quantum Computing (QC), Quantum Software Engineering (QSE), and Artificial Intelligence (AI) presents transformative opportunities across various domains. Existing methodologies inadequately address the ethical, security, and governance challenges arising from this technological shift. We call on the software engineering community to actively shape a future where responsible QSE and QAI are foundational for ethical, accountable, and socially beneficial technological progress.
arXiv Detail & Related papers (2024-10-31T14:26:26Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act). It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Securing the Future of GenAI: Policy and Technology [50.586585729683776]
Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety.
A workshop co-organized by Google, the University of Wisconsin-Madison, and Stanford University aimed to bridge this gap between GenAI policy and technology. This paper summarizes the workshop discussions, which addressed questions such as: How can regulation be designed without hindering technological progress?
arXiv Detail & Related papers (2024-05-21T20:30:01Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Quantum Computing Standards & Accounting Information Systems [0.0]
This paper critically analyzes quantum standards and their transformative effects on the efficiency, expediency, and security of commerce.
The study provides a guide to understanding and navigating the interplay between quantum technology and standard-setting organizations.
arXiv Detail & Related papers (2023-11-15T20:32:27Z)
- The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z)
- AI Regulation in Europe: From the AI Act to Future Regulatory Challenges [3.0821115746307663]
The paper examines the AI Act as a pioneering legislative effort to address the multifaceted challenges posed by AI. It argues for a hybrid regulatory strategy that combines elements from both philosophies.
It advocates for immediate action to create protocols for regulated access to high-performance, potentially open-source AI systems.
arXiv Detail & Related papers (2023-10-06T07:52:56Z)
- Bridging the Global Divide in AI Regulation: A Proposal for a Contextual, Coherent, and Commensurable Framework [0.9622882291833615]
This paper proposes an alternative contextual, coherent, and commensurable (3C) framework for regulating artificial intelligence (AI).
To ensure contextuality, the framework bifurcates the AI life cycle into two phases: learning and deployment for specific tasks, instead of defining foundation or general-purpose models.
To ensure commensurability, the framework promotes the adoption of international standards for measuring and mitigating risks.
arXiv Detail & Related papers (2023-03-20T15:23:40Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.