Digital Sovereignty Control Framework for Military AI-based Cyber Security
- URL: http://arxiv.org/abs/2509.13072v1
- Date: Tue, 16 Sep 2025 13:29:26 GMT
- Title: Digital Sovereignty Control Framework for Military AI-based Cyber Security
- Authors: Clara Maathuis, Kasper Cools
- Abstract summary: This article aims to define and assess digital sovereign control of data and AI-based models for military cyber security. Grounded on the concepts of digital sovereignty and data sovereignty, the framework aims to protect sensitive defence assets. At the same time, the framework addresses interoperability challenges among allied forces, strategic and legal considerations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In today's evolving threat landscape, ensuring digital sovereignty has become mandatory for military organizations, especially given their increased development of and investment in AI-driven cyber security solutions. To this end, this article proposes a multi-angled framework for defining and assessing digital sovereign control of data and AI-based models for military cyber security. The framework focuses on aspects such as context, autonomy, stakeholder involvement, and risk mitigation in this domain. Grounded in the concepts of digital sovereignty and data sovereignty, the framework aims to protect sensitive defence assets against threats such as unauthorized access, ransomware, and supply-chain attacks. This approach reflects the multifaceted nature of digital sovereignty by preserving operational autonomy, assuring security and safety, securing privacy, and fostering ethical compliance of both military systems and decision-makers. At the same time, the framework addresses interoperability challenges among allied forces, strategic and legal considerations, and the integration of emerging technologies through a multidisciplinary approach that enhances resilience and preserves control over (critical) digital assets. This is achieved by adopting design-oriented research in which a systematic literature review is merged with critical thinking and analysis of field incidents to ensure the effectiveness and realism of the proposed framework.
Related papers
- AI Regulation in Telecommunications: A Cross-Jurisdictional Legal Study [0.6117371161379207]
This paper conducts a comparative legal study of policy instruments across ten countries. It examines how telecom, cybersecurity, data protection, and AI laws approach AI-related risks in infrastructure.
arXiv Detail & Related papers (2025-11-27T08:30:12Z)
- Neuro-Symbolic AI for Cybersecurity: State of the Art, Challenges, and Opportunities [13.175694396580184]
Neuro-Symbolic (NeSy) AI has emerged with the potential to revolutionize cybersecurity AI. We systematically characterize this field by analyzing 127 publications spanning 2019-July 2025. We show that causal reasoning integration is the most transformative advancement, enabling proactive defense beyond correlation-based approaches.
arXiv Detail & Related papers (2025-09-08T17:33:59Z)
- Embodied AI: Emerging Risks and Opportunities for Policy Action [46.48780452120922]
Embodied AI (EAI) systems can exist in, learn from, reason about, and act in the physical world. EAI systems pose significant risks, including physical harm from malicious use, mass surveillance, as well as economic and societal disruption.
arXiv Detail & Related papers (2025-08-28T17:59:07Z)
- Never Compromise to Vulnerabilities: A Comprehensive Survey on AI Governance [211.5823259429128]
We propose a comprehensive framework integrating technical and societal dimensions, structured around three interconnected pillars: Intrinsic Security, Derivative Security, and Social Ethics. We identify three core challenges: (1) the generalization gap, where defenses fail against evolving threats; (2) inadequate evaluation protocols that overlook real-world risks; and (3) fragmented regulations leading to inconsistent oversight. Our framework offers actionable guidance for researchers, engineers, and policymakers to develop AI systems that are not only robust and secure but also ethically aligned and publicly trustworthy.
arXiv Detail & Related papers (2025-08-12T09:42:56Z)
- AI threats to national security can be countered through an incident regime [55.2480439325792]
We propose a legally mandated post-deployment AI incident regime that aims to counter potential national security threats from AI systems. Our proposed AI incident regime is split into three phases. The first phase revolves around a novel operationalization of what counts as an 'AI incident'. The second and third phases spell out that AI providers should notify a government agency about incidents, and that the government agency should be involved in amending AI providers' security and safety procedures.
arXiv Detail & Related papers (2025-03-25T17:51:50Z)
- Securing External Deeper-than-black-box GPAI Evaluations [49.1574468325115]
This paper examines the critical challenges and potential solutions for conducting secure and effective external evaluations of general-purpose AI (GPAI) models. With the exponential growth in size, capability, reach, and accompanying risk, ensuring accountability, safety, and public trust requires frameworks that go beyond traditional black-box methods.
arXiv Detail & Related papers (2025-03-10T16:13:45Z)
- Decoding the Black Box: Integrating Moral Imagination with Technical AI Governance [0.0]
We develop a comprehensive framework designed to regulate AI technologies deployed in high-stakes domains such as defense, finance, healthcare, and education. Our approach combines rigorous technical analysis, quantitative risk assessment, and normative evaluation to expose systemic vulnerabilities.
arXiv Detail & Related papers (2025-03-09T03:11:32Z)
- Transforming Cyber Defense: Harnessing Agentic and Frontier AI for Proactive, Ethical Threat Intelligence [0.0]
This manuscript explores how the convergence of agentic AI and Frontier AI is transforming cybersecurity. We examine the roles of real-time monitoring, automated incident response, and perpetual learning in forging a resilient, dynamic defense ecosystem. Our vision is to harmonize technological innovation with unwavering ethical oversight, ensuring that future AI-driven security solutions uphold core human values of fairness, transparency, and accountability while effectively countering emerging cyber threats.
arXiv Detail & Related papers (2025-02-28T20:23:35Z)
- Safety is Essential for Responsible Open-Ended Systems [47.172735322186]
Open-Endedness is the ability of AI systems to continuously and autonomously generate novel and diverse artifacts or solutions. This position paper argues that the inherently dynamic and self-propagating nature of Open-Ended AI introduces significant, underexplored risks.
arXiv Detail & Related papers (2025-02-06T21:32:07Z)
- Cyber Shadows: Neutralizing Security Threats with AI and Targeted Policy Measures [0.0]
Cyber threats pose risks at individual, organizational, and societal levels. This paper proposes a comprehensive cybersecurity strategy that integrates AI-driven solutions with targeted policy interventions.
arXiv Detail & Related papers (2025-01-03T09:26:50Z)
- AI Risk Management Should Incorporate Both Safety and Security [185.68738503122114]
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security. We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.