Towards an AI Observatory for the Nuclear Sector: A tool for anticipatory governance
- URL: http://arxiv.org/abs/2504.12358v1
- Date: Wed, 16 Apr 2025 03:43:15 GMT
- Title: Towards an AI Observatory for the Nuclear Sector: A tool for anticipatory governance
- Authors: Aditi Verma, Elizabeth Williams
- Abstract summary: We call for the creation of an anticipatory system of governance for AI in the nuclear sector. The paper explores the contours of the nuclear AI observatory and an anticipatory system of governance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI models are rapidly becoming embedded in all aspects of nuclear energy research and work, but the safety, security, and safeguards consequences of this embedding are not well understood. In this paper, we call for the creation of an anticipatory system of governance for AI in the nuclear sector, as well as the creation of a global AI observatory as a means of operationalizing anticipatory governance. The paper explores the contours of the nuclear AI observatory and an anticipatory system of governance by drawing on work in science and technology studies, public policy, and foresight studies.
Related papers
- The Nuclear Analogy in AI Governance Research
The analogy between Artificial Intelligence (AI) and nuclear weapons is prominent in academic and policy discourse on AI governance. This chapter reviews 43 scholarly works which explicitly draw on the nuclear domain to derive lessons for AI governance.
arXiv Detail & Related papers (2025-10-24T07:09:50Z) - Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z) - A New Perspective On AI Safety Through Control Theory Methodologies
AI promises to achieve a new level of autonomy but is hampered by a lack of safety assurance. This article outlines a new perspective on AI safety based on an interdisciplinary interpretation of the underlying data-generation process. The new perspective, also referred to as data control, aims to stimulate AI engineering to take advantage of existing safety analysis and assurance.
arXiv Detail & Related papers (2025-06-30T10:26:59Z) - Report on NSF Workshop on Science of Safe AI
New advances in machine learning are leading to new opportunities to develop technology-based solutions to societal problems. To fulfill the promise of AI, we must address how to develop AI-based systems that are accurate and performant but also safe and trustworthy. This report is the result of the discussions in the working groups that addressed different aspects of safety at the workshop.
arXiv Detail & Related papers (2025-06-24T18:55:29Z) - Enabling External Scrutiny of AI Systems with Privacy-Enhancing Technologies
This article describes how technical infrastructure developed by the nonprofit OpenMined enables external scrutiny of AI systems without compromising sensitive information. In practice, external researchers have struggled to gain access to AI systems because of AI companies' legitimate concerns about security, privacy, and intellectual property. PETs have reached a new level of maturity: end-to-end technical infrastructure developed by OpenMined combines several PETs into various setups that enable privacy-preserving audits of AI systems.
arXiv Detail & Related papers (2025-02-05T15:31:11Z) - Securing the AI Frontier: Urgent Ethical and Regulatory Imperatives for AI-Driven Cybersecurity
This paper critically examines the evolving ethical and regulatory challenges posed by the integration of artificial intelligence in cybersecurity. We trace the historical development of AI regulation, highlighting major milestones from theoretical discussions in the 1940s to the implementation of recent global frameworks such as the European Union AI Act. Ethical concerns such as bias, transparency, accountability, privacy, and human oversight are explored in depth, along with their implications for AI-driven cybersecurity systems.
arXiv Detail & Related papers (2025-01-15T18:17:37Z) - Strategic AI Governance: Insights from Leading Nations
Artificial Intelligence (AI) has the potential to revolutionize various sectors, yet its adoption is often hindered by concerns about data privacy, security, and the understanding of AI capabilities.
This paper synthesizes AI governance approaches, strategic themes, and enablers and challenges for AI adoption by reviewing national AI strategies from leading nations.
arXiv Detail & Related papers (2024-09-16T06:00:42Z) - Open Problems in Technical AI Governance
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI. This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z) - Responsible AI for Earth Observation
We systematically define the intersection of AI and EO, with a central focus on responsible AI practices.
We identify several critical components guiding this exploration from both academia and industry perspectives.
The paper explores potential opportunities and emerging trends, providing valuable insights for future research endeavors.
arXiv Detail & Related papers (2024-05-31T14:47:27Z) - Networking Systems for Video Anomaly Detection: A Tutorial and Survey
Video Anomaly Detection (VAD) is a fundamental research task within the Artificial Intelligence (AI) community.
With the advancements in deep learning and edge computing, VAD has made significant progress.
This article offers an exhaustive tutorial for novices in Networking Systems for Video Anomaly Detection (NSVAD).
arXiv Detail & Related papers (2024-05-16T02:00:44Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - Predictable Artificial Intelligence
This paper introduces the ideas and challenges of Predictable AI. It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems. We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z) - A method for ethical AI in Defence: A case study on developing trustworthy autonomous systems
We describe a case study of building a trusted autonomous system - Athena AI - within an industry-led, government-funded project with diverse collaborators and stakeholders.
We draw out lessons on the value and impact of embedding responsible research and innovation-aligned, ethics-by-design approaches.
arXiv Detail & Related papers (2022-06-21T23:17:17Z) - Trustworthy AI Inference Systems: An Industry Research View
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)