Towards Probabilistic Causal Discovery, Inference & Explanations for
Autonomous Drones in Mine Surveying Tasks
- URL: http://arxiv.org/abs/2308.10047v2
- Date: Sun, 1 Oct 2023 14:42:17 GMT
- Title: Towards Probabilistic Causal Discovery, Inference & Explanations for
Autonomous Drones in Mine Surveying Tasks
- Authors: Ricardo Cannizzaro, Rhys Howard, Paulina Lewinska, Lars Kunze
- Abstract summary: Causal modelling can aid autonomous agents in making decisions and explaining outcomes.
Here we identify challenges relating to causality in the context of a drone system operating in a salt mine.
We propose a probabilistic causal framework consisting of: causally-informed POMDP planning, online SCM adaptation, and post-hoc counterfactual explanations.
- Score: 5.569226615350014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal modelling offers great potential to provide autonomous agents the
ability to understand the data-generation process that governs their
interactions with the world. Such models capture formal knowledge as well as
probabilistic representations of noise and uncertainty typically encountered by
autonomous robots in real-world environments. Thus, causality can aid
autonomous agents in making decisions and explaining outcomes, but deploying
causality in such a manner introduces new challenges. Here we identify
challenges relating to causality in the context of a drone system operating in
a salt mine. Such environments are challenging for autonomous agents because of
the presence of confounders, non-stationarity, and a difficulty in building
complete causal models ahead of time. To address these issues, we propose a
probabilistic causal framework consisting of: causally-informed POMDP planning,
online SCM adaptation, and post-hoc counterfactual explanations. Further, we
outline planned experimentation to evaluate the framework integrated with a
drone system in simulated mine environments and on a real-world mine dataset.
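To make the three proposed components concrete, here is a minimal illustrative sketch, not the authors' implementation: it assumes a toy linear-Gaussian SCM with invented variables (wind, dust, visibility, localisation error) and invented coefficients, reduces the POMDP planner to a one-step interventional expectation, and computes a counterfactual with the standard abduction-action-prediction recipe.

```python
# Minimal illustrative sketch of the framework's three components, under
# invented assumptions: a toy linear-Gaussian SCM over wind -> dust ->
# visibility -> localisation error. Not the authors' implementation.
import random

random.seed(0)
VARS = ("wind", "dust", "vis", "loc")

def scm_sample(do=None, noise=None):
    """Sample the toy SCM. `do` forces variables (hard intervention);
    `noise` fixes exogenous terms, as needed for counterfactuals."""
    do = do or {}
    n = noise if noise is not None else {v: random.gauss(0.0, 0.1) for v in VARS}
    wind = do.get("wind", 0.5 + n["wind"])
    dust = do.get("dust", 0.8 * wind + n["dust"])
    vis = do.get("vis", 1.0 - 0.9 * dust + n["vis"])
    loc = do.get("loc", 0.3 + 0.7 * (1.0 - vis) + n["loc"])
    return {"wind": wind, "dust": dust, "vis": vis, "loc": loc}, n

# 1) Causally-informed action selection (a one-step stand-in for POMDP
#    planning): rank actions by the interventional mean E[loc | do(action)].
def expected_loc(action, samples=2000):
    return sum(scm_sample(do=action)[0]["loc"] for _ in range(samples)) / samples

actions = {"fly_low": {"dust": 0.9}, "fly_high": {"dust": 0.2}}
best = min(actions, key=lambda a: expected_loc(actions[a]))
print("planned action:", best)

# 2) Online SCM adaptation (placeholder): a deployed system would
#    re-estimate mechanism parameters or structure from streaming data.

# 3) Post-hoc counterfactual explanation via abduction-action-prediction:
#    reuse the factual episode's exogenous noise under an alternative action.
factual, n = scm_sample(do=actions["fly_low"])
cf, _ = scm_sample(do=actions["fly_high"], noise=n)
print("factual localisation error: %.3f" % factual["loc"])
print("counterfactual under fly_high: %.3f" % cf["loc"])
```

The do-operator here is just a dict of forced values; in a real deployment the SCM's mechanisms would be learned from data and the planner would reason over belief states rather than a single interventional expectation.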
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Exploring the Causality of End-to-End Autonomous Driving [57.631400236930375]
We propose a comprehensive approach to explore and analyze the causality of end-to-end autonomous driving.
Our work is the first to unveil the mystery of end-to-end autonomous driving and turn the black box into a white one.
arXiv Detail & Related papers (2024-07-09T04:56:11Z)
- Hallucination Detection in Foundation Models for Decision-Making: A Flexible Definition and Review of the State of the Art [7.072820266877787]
We discuss the current use cases of foundation models for decision-making tasks.
We argue there is a need to step back and simultaneously design systems that can quantify the certainty of a model's decision.
arXiv Detail & Related papers (2024-03-25T08:11:02Z)
- Gaussian Mixture Models for Affordance Learning using Bayesian Networks [50.18477618198277]
Affordances are fundamental descriptors of relationships between actions, objects and effects.
This paper approaches the problem of an embodied agent exploring the world and learning these affordances autonomously from its sensory experiences.
arXiv Detail & Related papers (2024-02-08T22:05:45Z)
- The Essential Role of Causality in Foundation World Models for Embodied AI [102.75402420915965]
Embodied AI agents will require the ability to perform new tasks in many different real-world environments.
Current foundation models fail to accurately model physical interactions and are therefore insufficient for Embodied AI.
The study of causality lends itself to the construction of veridical world models.
arXiv Detail & Related papers (2024-02-06T17:15:33Z)
- Towards a Causal Probabilistic Framework for Prediction, Action-Selection & Explanations for Robot Block-Stacking Tasks [4.244706520140677]
Causal models provide a principled framework to encode formal knowledge of the causal relationships that govern the robot's interaction with its environment.
We propose a novel causal probabilistic framework to embed a physics simulation capability into a structural causal model to permit robots to perceive and assess the current state of a block-stacking task.
arXiv Detail & Related papers (2023-08-11T15:58:15Z)
- Modeling Transformative AI Risks (MTAIR) Project -- Summary Report [0.0]
This report builds on an earlier diagram by Cottier and Shah which laid out some of the crucial disagreements ("cruxes") visually, with some explanation.
The model starts with a discussion of reasoning via analogies and general prior beliefs about artificial intelligence.
It lays out a model of different paths and enabling technologies for high-level machine intelligence, and a model of how advances in the capabilities of these systems might proceed.
The model also looks specifically at the question of learned optimization, and whether machine learning systems will create mesa-optimizers.
arXiv Detail & Related papers (2022-06-19T09:11:23Z)
- CausalCity: Complex Simulations with Agency for Causal Discovery and Reasoning [68.74447489372037]
We present a high-fidelity simulation environment that is designed for developing algorithms for causal discovery and counterfactual reasoning.
A core component of our work is to introduce "agency", such that it is simple to define and create complex scenarios.
We perform experiments with three state-of-the-art methods to create baselines and highlight the affordances of this environment.
arXiv Detail & Related papers (2021-06-25T00:21:41Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
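To illustrate the idea in the CEILS entry above, here is a toy, assumption-laden sketch: a linear encoder/decoder stands in for the learned latent representation, and the search finds the smallest intervention on one actionable latent coordinate that flips a toy classifier. None of the names or numbers come from the paper.

```python
# Toy sketch of counterfactual explanations as latent-space interventions.
# The linear encoder/decoder, classifier, and data are invented stand-ins,
# not the CEILS authors' method or code.
import numpy as np

W_dec = np.array([[1.0, 0.0],    # latent -> observed features;
                  [0.6, 1.0]])   # feature 2 also depends on latent 1
W_enc = np.linalg.inv(W_dec)     # observed features -> latent

def approve(x):                  # toy downstream decision rule
    return x.sum() > 1.5

x = np.array([0.4, 0.5])         # factual instance (currently denied)
z = W_enc @ x                    # abduction: recover the latent code

# Search for the smallest intervention on the actionable coordinate z[0]
# that flips the decision. Because downstream features are regenerated
# through the decoder, the explanation respects the (toy) dependencies.
for delta in np.linspace(0.0, 2.0, 201):
    z_cf = z.copy()
    z_cf[0] += delta             # intervention in latent space
    x_cf = W_dec @ z_cf          # decode back to feature space
    if approve(x_cf):
        print("smallest intervention on z[0]: +%.2f" % delta)
        print("counterfactual features:", np.round(x_cf, 2))
        break
```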
- Adaptive Autonomy in Human-on-the-Loop Vision-Based Robotics Systems [16.609594839630883]
Computer vision approaches are widely used by autonomous robotic systems to guide their decision making.
High accuracy is critical, particularly for Human-on-the-loop (HoTL) systems where humans play only a supervisory role.
We propose a solution based upon adaptive autonomy levels, whereby the system detects loss of reliability of these models.
arXiv Detail & Related papers (2021-03-28T05:43:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.