Human-AI Use Patterns for Decision-Making in Disaster Scenarios: A Systematic Review
- URL: http://arxiv.org/abs/2509.12034v1
- Date: Mon, 15 Sep 2025 15:18:49 GMT
- Title: Human-AI Use Patterns for Decision-Making in Disaster Scenarios: A Systematic Review
- Authors: Emmanuel Adjei Domfeh, Christopher L. Dancy
- Abstract summary: We identify four major categories: Human-AI Decision Support Systems, Task and Resource Coordination, Trust and Transparency, and Simulation and Training. Within these, we analyze sub-patterns such as cognitive-augmented intelligence, multi-agent coordination, explainable AI, and virtual training environments. Our review highlights how AI systems may enhance situational awareness, improve response efficiency, and support complex decision-making, while also surfacing critical limitations in scalability, interpretability, and system interoperability.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In high-stakes disaster scenarios, timely and informed decision-making is critical yet often challenged by uncertainty, dynamic environments, and limited resources. This paper presents a systematic review of Human-AI collaboration patterns that support decision-making across all disaster management phases. Drawing from 51 peer-reviewed studies, we identify four major categories: Human-AI Decision Support Systems, Task and Resource Coordination, Trust and Transparency, and Simulation and Training. Within these, we analyze sub-patterns such as cognitive-augmented intelligence, multi-agent coordination, explainable AI, and virtual training environments. Our review highlights how AI systems may enhance situational awareness, improve response efficiency, and support complex decision-making, while also surfacing critical limitations in scalability, interpretability, and system interoperability. We conclude by outlining key challenges and future research directions, emphasizing the need for adaptive, trustworthy, and context-aware Human-AI systems to improve disaster resilience and equitable recovery outcomes.
Related papers
- AI Deception: Risks, Dynamics, and Controls [153.71048309527225]
This project provides a comprehensive and up-to-date overview of the AI deception field. We identify a formal definition of AI deception, grounded in signaling theory from studies of animal deception. We organize the landscape of AI deception research as a deception cycle, consisting of two key components: deception emergence and deception treatment.
arXiv Detail & Related papers (2025-11-27T16:56:04Z)
- Disaster Management in the Era of Agentic AI Systems: A Vision for Collective Human-Machine Intelligence for Augmented Resilience [13.155469498169866]
Disaster Copilot is a vision for a multi-agent artificial intelligence system designed to overcome these systemic challenges. The proposed architecture utilizes a central orchestrator to coordinate diverse sub-agents, each specializing in critical domains such as predictive risk analytics, situational awareness, and impact assessment.
arXiv Detail & Related papers (2025-10-16T02:02:10Z)
- An Outlook on the Opportunities and Challenges of Multi-Agent AI Systems [32.48561526824382]
A multi-agent AI system (MAS) is composed of multiple autonomous agents that interact, exchange information, and make decisions based on internal generative models. This paper outlines a formal framework for analyzing MAS, focusing on two core aspects: effectiveness and safety.
arXiv Detail & Related papers (2025-05-23T22:05:19Z)
- Towards Foundation-model-based Multiagent System to Accelerate AI for Social Impact [37.72844862625008]
Existing AI4SI research is often labor-intensive and resource-demanding. We propose a novel meta-level multi-agent system designed to accelerate the development of such base-level systems.
arXiv Detail & Related papers (2024-12-10T19:29:34Z)
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- AI-Driven Human-Autonomy Teaming in Tactical Operations: Proposed Framework, Challenges, and Future Directions [10.16399860867284]
Artificial Intelligence (AI) techniques are transforming tactical operations by augmenting human decision-making capabilities.
This paper explores AI-driven Human-Autonomy Teaming (HAT) as a transformative approach.
We propose a comprehensive framework that addresses the key components of AI-driven HAT.
arXiv Detail & Related papers (2024-10-28T15:05:16Z)
- How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z)
- To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems [11.690126756498223]
A vision of optimal human-AI collaboration requires 'appropriate reliance' of humans on AI systems.
In practice, the performance disparity of machine learning models on out-of-distribution data makes dataset-specific performance feedback unreliable.
arXiv Detail & Related papers (2024-09-22T09:43:27Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Optimising Human-AI Collaboration by Learning Convincing Explanations [62.81395661556852]
We propose a method for a collaborative system that remains safe by having a human making decisions.
Ardent enables efficient and effective decision-making by adapting to individual preferences for explanations.
arXiv Detail & Related papers (2023-11-13T16:00:16Z)
- Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI. It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems. We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- Multi-AI Complex Systems in Humanitarian Response [0.0]
We describe how multi-AI systems can arise, even in relatively simple real-world humanitarian response scenarios, and lead to potentially emergent, erratic, and erroneous behavior.
This paper is designed to be a first exposition on this topic in the field of humanitarian response, raising awareness, exploring the possible landscape of this domain, and providing a starting point for future work within the wider community.
arXiv Detail & Related papers (2022-08-24T03:01:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.