Collateral Damage Assessment Model for AI System Target Engagement in Military Operations
- URL: http://arxiv.org/abs/2510.20337v1
- Date: Thu, 23 Oct 2025 08:36:04 GMT
- Title: Collateral Damage Assessment Model for AI System Target Engagement in Military Operations
- Authors: Clara Maathuis, Kasper Cools
- Abstract summary: The model integrates temporal, spatial, and force dimensions within a unified Knowledge Representation and Reasoning architecture. The model is demonstrated and evaluated through instantiation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In an era where AI (Artificial Intelligence) systems play an increasing role on the battlefield, ensuring responsible targeting demands rigorous assessment of potential collateral effects. In this context, a novel collateral damage assessment model for target engagement of AI systems in military operations is introduced. The model integrates temporal, spatial, and force dimensions within a unified Knowledge Representation and Reasoning (KRR) architecture, following a design science methodological approach. Its layered structure captures the categories and architectural components of the AI systems to be engaged, together with the corresponding engaging vectors and contextual aspects. At the same time, spreading, severity, likelihood, and evaluation metrics are considered in order to provide a clear representation enhanced by transparent reasoning mechanisms. Further, the model is demonstrated and evaluated through instantiation, which serves as a basis for further dedicated efforts aimed at building responsible and trustworthy intelligent systems for assessing the effects produced by engaging AI systems in military operations.
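The abstract names spreading, severity, and likelihood as assessment metrics but does not specify how they are aggregated. A minimal illustrative sketch of such an assessment record is given below; all class and field names, the [0, 1] scaling, and the aggregation formula are assumptions for illustration, not the paper's actual model.

```python
from dataclasses import dataclass


@dataclass
class EngagementAssessment:
    """Hypothetical record for one candidate target engagement.

    Fields stand in for the paper's spreading, severity, and
    likelihood metrics; each is assumed to be scaled to [0, 1].
    """
    spreading: float   # how far effects propagate beyond the engaged system
    severity: float    # magnitude of harm to affected components
    likelihood: float  # probability that collateral effects occur

    def collateral_score(self) -> float:
        """Illustrative aggregate: expected severity, weighted up as
        effects spread further beyond the intended target."""
        for v in (self.spreading, self.severity, self.likelihood):
            if not 0.0 <= v <= 1.0:
                raise ValueError("all metrics must lie in [0, 1]")
        return self.likelihood * self.severity * (1.0 + self.spreading) / 2.0


# A contained, low-probability engagement scores lower than a
# widespread, likely, and severe one.
low_risk = EngagementAssessment(spreading=0.1, severity=0.3, likelihood=0.2)
high_risk = EngagementAssessment(spreading=0.9, severity=0.8, likelihood=0.7)
assert low_risk.collateral_score() < high_risk.collateral_score()
```

Any real instantiation of the model would replace this scalar formula with the paper's layered KRR reasoning over the temporal, spatial, and force dimensions.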
Related papers
- Agentic AI for Cybersecurity: A Meta-Cognitive Architecture for Governable Autonomy [0.0]
This paper argues that cybersecurity orchestration should be reconceptualized as an agentic, multi-agent cognitive system. We introduce a conceptual framework in which heterogeneous AI agents responsible for detection, hypothesis formation, contextual interpretation, explanation, and governance are coordinated through an explicit meta-cognitive judgement function. Our contribution is to make this cognitive structure architecturally explicit and governable by embedding meta-cognitive judgement as a first-class system function.
arXiv Detail & Related papers (2026-02-12T12:52:49Z) - From Accuracy to Impact: The Impact-Driven AI Framework (IDAIF) for Aligning Engineering Architecture with Theory of Change [0.40611352512781873]
Impact-Driven AI Framework (IDAIF) is a novel architectural methodology that integrates Theory of Change (ToC) principles with modern artificial intelligence system design. IDAIF addresses the gap by establishing a systematic mapping between ToC's five-stage model and corresponding AI architectural layers. Three case studies demonstrate IDAIF application across healthcare, cybersecurity, and software engineering domains.
arXiv Detail & Related papers (2025-12-09T10:21:02Z) - Fundamentals of Building Autonomous LLM Agents [64.39018305018904]
This paper reviews the architecture and implementation methods of agents powered by large language models (LLMs). The research aims to explore patterns to develop "agentic" LLMs that can automate complex tasks and bridge the performance gap with human capabilities.
arXiv Detail & Related papers (2025-10-10T10:32:39Z) - Creative Adversarial Testing (CAT): A Novel Framework for Evaluating Goal-Oriented Agentic AI Systems [0.0]
Creative Adversarial Testing (CAT) is a novel approach designed to capture and analyze the complex relationship between Agentic AI tasks and the system's intended objectives. We validate the CAT framework through extensive simulation using synthetic interaction data modeled after Alexa+ audio services. Our results demonstrate that the CAT framework provides unprecedented insights into goal-task alignment, enabling more effective optimization and development of Agentic AI systems.
arXiv Detail & Related papers (2025-09-26T23:52:20Z) - A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems [53.37728204835912]
Most existing AI systems rely on manually crafted configurations that remain static after deployment. Recent research has explored agent evolution techniques that aim to automatically enhance agent systems based on interaction data and environmental feedback. This survey aims to provide researchers and practitioners with a systematic understanding of self-evolving AI agents.
arXiv Detail & Related papers (2025-08-10T16:07:32Z) - Rethinking Technological Readiness in the Era of AI Uncertainty [0.0]
We argue that current technology readiness assessments fail to capture critical AI-specific factors. We propose a new AI Readiness Framework to evaluate the maturity and trustworthiness of AI components in military systems.
arXiv Detail & Related papers (2025-04-15T14:09:50Z) - Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics [68.36528819227641]
This paper systematically evaluates the robustness of Vision-Language-Action (VLA) models. We introduce two untargeted attack objectives that leverage spatial foundations to destabilize robotic actions, and a targeted attack objective that manipulates the robotic trajectory. We design an adversarial patch generation approach that places a small, colorful patch within the camera's view, effectively executing the attack in both digital and physical environments.
arXiv Detail & Related papers (2024-11-18T01:52:20Z) - EARBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [53.717918131568936]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction. Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results. However, the deployment of these agents in physical environments presents significant safety challenges. This study introduces EARBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z) - Levels of AGI for Operationalizing Progress on the Path to AGI [53.28828093836034]
We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors. This framework introduces levels of AGI performance, generality, and autonomy, providing a common language to compare models, assess risks, and measure progress along the path to AGI.
arXiv Detail & Related papers (2023-11-04T17:44:58Z) - Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of necessary information to assess their portability to operate in a system different than the one for which the component was originally purposed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.