Structured AI Decision-Making in Disaster Management
- URL: http://arxiv.org/abs/2509.01576v1
- Date: Mon, 01 Sep 2025 16:04:21 GMT
- Title: Structured AI Decision-Making in Disaster Management
- Authors: Julian Gerald Dcruz, Argyrios Zolotas, Niall Ross Greenwood, Miguel Arana-Catania
- Abstract summary: This paper contributes to addressing the challenge of decision-making by proposing a structured decision-making framework. The proposed framework is implemented in autonomous decision-making, specifically within disaster management. The results demonstrate that the structured decision-making framework achieves 60.94% greater stability in consistently accurate decisions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With artificial intelligence (AI) being applied to bring autonomy to decision-making in safety-critical domains such as those typified by aerospace and emergency-response services, there has been a call to address the ethical implications of structuring those decisions, so they remain reliable and justifiable when human lives are at stake. This paper contributes to addressing the challenge of decision-making by proposing a structured decision-making framework as a foundational step towards responsible AI. The proposed structured decision-making framework is implemented in autonomous decision-making, specifically within disaster management. By introducing concepts of Enabler agents, Levels and Scenarios, the proposed framework's performance is evaluated against systems relying solely on judgement-based insights, as well as human operators who have disaster experience: victims, volunteers, and stakeholders. The results demonstrate that the structured decision-making framework achieves 60.94% greater stability in consistently accurate decisions across multiple Scenarios, compared to judgement-based systems. Moreover, the study shows that the proposed framework outperforms human operators with a 38.93% higher accuracy across various Scenarios. These findings demonstrate the promise of the structured decision-making framework for building more reliable autonomous AI applications in safety-critical contexts.
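The abstract's headline result is "stability in consistently accurate decisions across multiple Scenarios". As a rough illustration of what such a comparison could look like, the sketch below scores stability as one minus the spread of per-Scenario accuracy. All numbers and the `stability` definition are invented for this sketch; the paper's actual metric and data are not reproduced here.

```python
# Toy comparison of decision-accuracy stability across Scenarios.
# The accuracy figures are hypothetical, not the paper's results.
from statistics import mean, stdev

def stability(per_scenario_accuracy):
    """Higher is more stable: 1 minus the sample std. dev. of accuracy across Scenarios."""
    return 1.0 - stdev(per_scenario_accuracy)

structured = [0.90, 0.88, 0.91, 0.89]   # hypothetical structured-framework accuracies
judgement  = [0.95, 0.60, 0.85, 0.55]   # hypothetical judgement-based accuracies

print(f"structured: mean={mean(structured):.2f}, stability={stability(structured):.3f}")
print(f"judgement:  mean={mean(judgement):.2f}, stability={stability(judgement):.3f}")
```

Under this toy definition, a system whose accuracy barely varies between Scenarios scores near 1.0 even if a rival occasionally beats it on a single Scenario, which is the intuition behind preferring consistency in safety-critical settings.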
Related papers
- Generative AI in Managerial Decision-Making: Redefining Boundaries through Ambiguity Resolution and Sycophancy Analysis [0.45880283710344055]
This study compares various models on ambiguity detection, evaluating how a systematic resolution process enhances response quality.
Using a novel four-dimensional business ambiguity taxonomy, we conducted a human-in-the-loop experiment across strategic, tactical, and operational scenarios.
Results reveal distinct performance capabilities. While models excel in detecting internal contradictions and contextual ambiguities, they struggle with structural linguistic nuances.
arXiv Detail & Related papers (2026-03-04T12:10:56Z)
- Agentic AI for Commercial Insurance Underwriting with Adversarial Self-Critique [0.0]
This study presents a decision-negative, human-in-the-loop agentic system that incorporates an adversarial self-critique mechanism.
Within this system, a critic agent challenges the primary agent's conclusions prior to submitting recommendations to human reviewers.
The research develops a formal taxonomy of failure modes to characterize potential errors by decision-negative agents.
arXiv Detail & Related papers (2026-01-21T05:51:27Z)
- ClearFairy: Capturing Creative Workflows through Decision Structuring, In-Situ Questioning, and Rationale Inference [59.65947911667229]
We present the CLEAR framework, which structures reasoning into cognitive decision steps: linked units of actions, artifacts, and self-explanations.
We introduce ClearFairy, a think-aloud AI assistant for UI design that detects weak explanations, asks lightweight clarifying questions, and infers missing rationales to ease the knowledge-sharing burden.
arXiv Detail & Related papers (2025-09-18T02:11:34Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- ABI Approach: Automatic Bias Identification in Decision-Making Under Risk based in an Ontology of Behavioral Economics [46.57327530703435]
Risk-seeking preferences for losses, driven by biases such as loss aversion, pose challenges and can result in severe negative consequences.
This research introduces the ABI approach, a novel solution designed to support organizational decision-makers by automatically identifying and explaining risk-seeking preferences.
arXiv Detail & Related papers (2024-05-22T23:53:46Z)
- Bridging the gap: Towards an Expanded Toolkit for AI-driven Decision-Making in the Public Sector [6.693502127460251]
AI-driven decision-making systems are becoming instrumental in the public sector, with applications spanning areas like criminal justice, social welfare, financial fraud detection, and public health.
These systems face the challenge of aligning machine learning (ML) models with the complex realities of public sector decision-making.
We examine five key challenges where misalignment can occur: distribution shifts, label bias, and the influence of past decision-making on the data side, as well as competing objectives and human-in-the-loop interaction on the model-output side.
arXiv Detail & Related papers (2023-10-29T17:44:48Z)
- On strategies for risk management and decision making under uncertainty shared across multiple fields [55.2480439325792]
The paper finds more than 110 examples of such strategies; this approach to risk is termed RDOT: the Risk-reducing Design and Operations Toolkit.
RDOT strategies fall into six broad categories: structural, reactive, formal, adversarial, multi-stage, and positive.
Overall, RDOT represents an overlooked class of versatile responses to uncertainty.
arXiv Detail & Related papers (2023-09-06T16:14:32Z)
- Rational Decision-Making Agent with Internalized Utility Judgment [88.01612847081677]
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications.
This paper proposes RadAgent, which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning.
Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks.
arXiv Detail & Related papers (2023-08-24T03:11:45Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- A Decision Model for Decentralized Autonomous Organization Platform Selection: Three Industry Case Studies [0.0]
Decentralized autonomous organizations, a new form of online governance, are collections of smart contracts deployed on a blockchain platform.
This study presents a decision model as a Multi-Criteria Decision-Making problem for the decentralized autonomous organization platform selection problem.
arXiv Detail & Related papers (2021-07-07T10:05:56Z)
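Taking the platform-selection entry above as a cue, a multi-criteria decision-making (MCDM) problem is often sketched as a weighted sum over criteria. Everything in this sketch (criteria names, weights, platform names, scores) is a made-up illustration of the general technique, not the paper's actual decision model.

```python
# Weighted-sum MCDM sketch for a hypothetical platform-selection problem.
# Criteria weights are assumed to sum to 1; scores are normalized to [0, 1].
CRITERIA_WEIGHTS = {"security": 0.4, "tooling": 0.3, "fees": 0.3}

# Hypothetical per-platform scores for each criterion.
PLATFORMS = {
    "PlatformA": {"security": 0.9, "tooling": 0.6, "fees": 0.5},
    "PlatformB": {"security": 0.7, "tooling": 0.8, "fees": 0.9},
}

def weighted_score(scores, weights=CRITERIA_WEIGHTS):
    """Weighted sum of criterion scores for one alternative."""
    return sum(weights[c] * scores[c] for c in weights)

# Pick the alternative with the highest aggregate score.
best = max(PLATFORMS, key=lambda p: weighted_score(PLATFORMS[p]))
print(best)
```

Real decision models of this kind typically add criteria elicitation, weight sensitivity analysis, and normalization steps; the weighted sum is only the simplest aggregation choice.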
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.