From Black Box to Insight: Explainable AI for Extreme Event Preparedness
- URL: http://arxiv.org/abs/2511.13712v1
- Date: Mon, 17 Nov 2025 18:57:15 GMT
- Title: From Black Box to Insight: Explainable AI for Extreme Event Preparedness
- Authors: Kiana Vu, İsmet Selçuk Özer, Phung Lai, Zheng Wu, Thilanka Munasinghe, Jennifer Wei
- Abstract summary: As climate change accelerates the frequency and severity of extreme events such as wildfires, the need for accurate, explainable, and actionable forecasting becomes increasingly urgent. While artificial intelligence (AI) models have shown promise in predicting such events, their adoption in real-world decision-making remains limited due to their black-box nature. This paper investigates the role of explainable AI (XAI) in bridging the gap between predictive accuracy and actionable insight for extreme event forecasting.
- Score: 6.583192288853322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As climate change accelerates the frequency and severity of extreme events such as wildfires, the need for accurate, explainable, and actionable forecasting becomes increasingly urgent. While artificial intelligence (AI) models have shown promise in predicting such events, their adoption in real-world decision-making remains limited due to their black-box nature, which limits trust, explainability, and operational readiness. This paper investigates the role of explainable AI (XAI) in bridging the gap between predictive accuracy and actionable insight for extreme event forecasting. Using wildfire prediction as a case study, we evaluate various AI models and employ SHapley Additive exPlanations (SHAP) to uncover key features, decision pathways, and potential biases in model behavior. Our analysis demonstrates how XAI not only clarifies model reasoning but also supports critical decision-making by domain experts and response teams. In addition, we provide supporting visualizations that enhance the interpretability of XAI outputs by contextualizing feature importance and temporal patterns in seasonality and geospatial characteristics. This approach enhances the usability of AI explanations for practitioners and policymakers. Our findings highlight the need for AI systems that are not only accurate but also interpretable, accessible, and trustworthy, essential for effective use in disaster preparedness, risk mitigation, and climate resilience planning.
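As a concrete illustration of the SHAP methodology named in the abstract, the sketch below computes exact Shapley values for a hypothetical toy fire-risk model. The model, feature names, weights, and baseline values are all invented for illustration; this is not the paper's model, and it deliberately avoids the `shap` library by enumerating coalitions directly, which is only feasible for a handful of features:

```python
from itertools import combinations
from math import factorial

# Hypothetical toy "fire risk" model over three features (illustrative only):
# risk grows with temperature and wind, drops with humidity, and the
# temp*wind term adds an interaction effect that SHAP must apportion.
def fire_risk(temp, wind, humidity):
    return 0.02 * temp + 0.03 * wind + 0.001 * temp * wind - 0.04 * humidity

def shapley_values(model, baseline, instance):
    """Exact Shapley values by enumerating all feature coalitions.

    `baseline` and `instance` map feature name -> value. Features absent
    from a coalition are held at their baseline value. Cost is O(2^n),
    so this is only a pedagogical sketch, not a production explainer."""
    names = list(instance)
    n = len(names)

    def evaluate(coalition):
        args = {k: (instance[k] if k in coalition else baseline[k]) for k in names}
        return model(**args)

    phi = {}
    for name in names:
        others = [m for m in names if m != name]
        total = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Standard Shapley weight |S|! (n-|S|-1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (evaluate(set(subset) | {name}) - evaluate(set(subset)))
        phi[name] = total
    return phi

baseline = {"temp": 20.0, "wind": 5.0, "humidity": 60.0}   # mild conditions
instance = {"temp": 38.0, "wind": 25.0, "humidity": 15.0}  # fire-weather day
phi = shapley_values(fire_risk, baseline, instance)

# Efficiency property: attributions sum to f(instance) - f(baseline)
assert abs(sum(phi.values()) - (fire_risk(**instance) - fire_risk(**baseline))) < 1e-9
```

Note how the interaction term is split between `temp` and `wind` by averaging over coalition orders; this additive attribution is what makes SHAP outputs directly plottable against seasonality and geospatial context, as the paper's visualizations do.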
Related papers
- EWE: An Agentic Framework for Extreme Weather Analysis [61.092871317626496]
Extreme Weather Expert (EWE) is the first intelligent agent framework dedicated to this task. EWE emulates expert visualizations through knowledge-guided planning, closed-loop reasoning, and a domain-tailored meteorological toolkit. To catalyze progress, we introduce the first benchmark for this emerging field, comprising a curated dataset of 103 high-impact events.
arXiv Detail & Related papers (2025-11-26T14:37:25Z) - Exploring Energy Landscapes for Minimal Counterfactual Explanations: Applications in Cybersecurity and Beyond [3.6963146054309597]
Counterfactual explanations have emerged as a prominent method in Explainable Artificial Intelligence (XAI). We present a novel framework that integrates perturbation theory and statistical mechanics to generate minimal counterfactual explanations. Our approach systematically identifies the smallest modifications required to change a model's prediction while maintaining plausibility.
arXiv Detail & Related papers (2025-03-23T19:48:37Z) - AI for Extreme Event Modeling and Understanding: Methodologies and Challenges [7.636789744934743]
This paper reviews how AI is being used to analyze extreme events such as floods, droughts, wildfires, and heatwaves.
We discuss the hurdles of dealing with limited data, integrating information in real-time, deploying models, and making them understandable.
We emphasize the need for collaboration across different fields to create AI solutions that are practical, understandable, and trustworthy.
arXiv Detail & Related papers (2024-06-28T17:45:25Z) - Towards an end-to-end artificial intelligence driven global weather forecasting system [57.5191940978886]
We present an AI-based data assimilation model, i.e., Adas, for global weather variables.
We demonstrate that Adas can assimilate global observations to produce high-quality analyses, enabling the system to operate stably over the long term.
We are the first to apply these methods to real-world scenarios, which is more challenging and has considerable practical application potential.
arXiv Detail & Related papers (2023-12-18T09:05:28Z) - Predictable Artificial Intelligence [77.1127726638209]
This paper introduces the ideas and challenges of Predictable AI. It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems. We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z) - Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z) - What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - XAI-KG: knowledge graph to support XAI and decision-making in manufacturing [0.5872014229110215]
We propose an ontology and knowledge graph to support collecting feedback regarding forecasts, forecast explanations, recommended decision-making options, and user actions.
This way, we provide means to improve forecasting models, explanations, and recommendations of decision-making options.
arXiv Detail & Related papers (2021-05-05T08:42:07Z)
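Two of the related entries center on counterfactual explanations: the smallest feature changes that flip a model's prediction. As a minimal, hypothetical sketch of that idea (a brute-force grid search over an invented toy fire-alert rule, not the CEILS latent-space method or the energy-landscape framework cited above):

```python
from itertools import product

# Hypothetical toy alert rule (invented for illustration): raise a fire
# alert (1) when a weighted temperature/humidity score clears a threshold.
def fire_alert(temp, humidity):
    return 1 if 0.6 * temp - 0.4 * humidity >= 10.0 else 0

def minimal_counterfactual(instance, steps, grid=range(-10, 11)):
    """Brute-force search for the smallest L1 change (counted in step
    units) that flips the prediction. Exhaustive enumeration is only
    feasible for a couple of features; real methods search smarter."""
    original = fire_alert(**instance)
    best, best_cost = None, float("inf")
    for dt, dh in product(grid, repeat=2):
        cand = {"temp": instance["temp"] + dt * steps["temp"],
                "humidity": instance["humidity"] + dh * steps["humidity"]}
        cost = abs(dt) + abs(dh)  # L1 distance in step units
        if fire_alert(**cand) != original and cost < best_cost:
            best, best_cost = cand, cost
    return best, best_cost

instance = {"temp": 40.0, "humidity": 30.0}  # score 12.0 -> alert raised
cf, cost = minimal_counterfactual(instance, steps={"temp": 1.0, "humidity": 2.0})
# cf is the nearest condition (in step units) under which no alert fires
```

The papers above go further by constraining the search to *plausible* changes (feasible actions, latent-space interventions), which this toy grid search does not attempt.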
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.