Causal versus Marginal Shapley Values for Robotic Lever Manipulation Controlled using Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2111.02936v1
- Date: Thu, 4 Nov 2021 15:16:21 GMT
- Title: Causal versus Marginal Shapley Values for Robotic Lever Manipulation Controlled using Deep Reinforcement Learning
- Authors: Sindre Benjamin Remman, Inga Strümke and Anastasios M. Lekkas
- Abstract summary: We investigate the effect of including domain knowledge about a robotic system's causal relations when generating explanations.
We show that enabling an explanation method to account for indirect effects and incorporating some domain knowledge can lead to explanations that better agree with human intuition.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the effect of including domain knowledge about a robotic
system's causal relations when generating explanations. To this end, we compare
two methods from explainable artificial intelligence, the popular KernelSHAP
and the recent causal SHAP, on a deep neural network trained using deep
reinforcement learning on the task of controlling a lever using a robotic
manipulator. A primary disadvantage of KernelSHAP is that its explanations
represent only the features' direct effects on a model's output, not
considering the indirect effects a feature can have on the output by affecting
other features. Causal SHAP uses a partial causal ordering to alter
KernelSHAP's sampling procedure to incorporate these indirect effects. This
partial causal ordering defines the causal relations between the features, and
we specify this using domain knowledge about the lever control task. We show
that enabling an explanation method to account for indirect effects and
incorporating some domain knowledge can lead to explanations that better agree
with human intuition. This is especially favorable for a real-world robotics
task, where there is considerable causality at play, and in addition, the
required domain knowledge is often handily available.
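The difference between direct-only and causally aware attributions can be illustrated with a minimal sketch. Everything below is a hypothetical toy example (not the paper's lever-control setup): two features where `x1` causally drives `x2` (here `x2 = 2 * x1`), a model `f = x1 + x2`, and exact Shapley values computed by enumerating feature orderings. The marginal value function reverts absent features to a baseline independently, as KernelSHAP's sampling effectively does; the causal one propagates an intervention on `x1` downstream to `x2`, in the spirit of causal SHAP's use of a partial causal ordering.

```python
from itertools import permutations

def shapley(value_fn, features):
    """Exact Shapley values: average each feature's marginal
    contribution over all orderings of the features."""
    phi = {f: 0.0 for f in features}
    perms = list(permutations(features))
    for order in perms:
        coalition = set()
        for f in order:
            before = value_fn(coalition)
            coalition.add(f)
            phi[f] += value_fn(coalition) - before
    return {f: v / len(perms) for f, v in phi.items()}

# Toy causal chain (hypothetical): x1 -> x2 with x2 = 2 * x1.
# Instance to explain: x1 = 1, x2 = 2; baseline: x1 = 0, x2 = 0.
def f(x1, x2):
    return x1 + x2

def marginal_value(coalition):
    # KernelSHAP-style: absent features take their baseline
    # values independently; no causal propagation.
    x1 = 1.0 if "x1" in coalition else 0.0
    x2 = 2.0 if "x2" in coalition else 0.0
    return f(x1, x2)

def causal_value(coalition):
    # Causal-SHAP-style: if x2 is not fixed by the coalition,
    # it follows its causal parent x1 (x2 = 2 * x1).
    x1 = 1.0 if "x1" in coalition else 0.0
    x2 = 2.0 if "x2" in coalition else 2.0 * x1
    return f(x1, x2)

marginal = shapley(marginal_value, ["x1", "x2"])
causal = shapley(causal_value, ["x1", "x2"])
print(marginal)  # x1 gets credit only for its direct effect
print(causal)    # x1 also gets credit for its effect via x2
```

Both attributions sum to the same model output, but the causal version shifts credit toward `x1`, the upstream cause, which is the sense in which causally informed explanations can better match human intuition about a system with known cause-effect structure.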
Related papers
- Learning Low-Level Causal Relations using a Simulated Robotic Arm [1.474723404975345]
Causal learning allows humans to predict the effect of their actions on the known environment.
We study causal relations by learning the forward and inverse models based on data generated by a simulated robotic arm.
arXiv Detail & Related papers (2024-10-10T09:28:30Z)
- Optimal Causal Representations and the Causal Information Bottleneck [0.19799527196428243]
The Information Bottleneck (IB) method is a widely used approach in representation learning.
Traditional methods like IB are purely statistical and ignore underlying causal structures, making them ill-suited for causal tasks.
We propose the Causal Information Bottleneck (CIB), a causal extension of the IB, which compresses a set of chosen variables while maintaining causal control over a target variable.
arXiv Detail & Related papers (2024-10-01T09:21:29Z)
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
arXiv Detail & Related papers (2024-02-04T05:50:38Z)
- DOMINO: Visual Causal Reasoning with Time-Dependent Phenomena [59.291745595756346]
We propose a set of visual analytics methods that allow humans to participate in the discovery of causal relations associated with windows of time delay.
Specifically, we leverage a well-established method, logic-based causality, to enable analysts to test the significance of potential causes.
Since an effect can be a cause of other effects, we allow users to aggregate different temporal cause-effect relations found with our method into a visual flow diagram.
arXiv Detail & Related papers (2023-03-12T03:40:21Z)
- Causal Structure Learning with Recommendation System [46.90516308311924]
We first formulate the underlying causal mechanism as a causal structural model and describe a general causal structure learning framework grounded in the real-world working mechanism of recommendation systems.
We then derive the learning objective from our framework and propose an augmented Lagrangian solver for efficient optimization.
arXiv Detail & Related papers (2022-10-19T02:31:47Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
- Everything Has a Cause: Leveraging Causal Inference in Legal Text Analysis [62.44432226563088]
Causal inference is the process of capturing cause-effect relationship among variables.
We propose a novel Graph-based Causal Inference framework, which builds causal graphs from fact descriptions without much human involvement.
We observe that the causal knowledge contained in GCI can be effectively injected into powerful neural networks for better performance and interpretability.
arXiv Detail & Related papers (2021-04-19T16:13:10Z)
- A Survey on Extraction of Causal Relations from Natural Language Text [9.317718453037667]
Cause-effect relations appear frequently in text, and curating cause-effect relations from text helps in building causal networks for predictive tasks.
Existing causality extraction techniques include knowledge-based, statistical machine learning (ML)-based, and deep learning-based approaches.
arXiv Detail & Related papers (2021-01-16T10:49:39Z)
- Weakly Supervised Disentangled Generative Causal Representation Learning [21.392372783459013]
We show that previous methods with independent priors fail to disentangle causally related factors even under supervision.
We propose a new disentangled learning method that enables causal controllable generation and causal representation learning.
arXiv Detail & Related papers (2020-10-06T11:38:41Z)
- Self-Attention Attribution: Interpreting Information Interactions Inside Transformer [89.21584915290319]
We propose a self-attention attribution method to interpret the information interactions inside Transformer.
We show that the attribution results can be used as adversarial patterns to implement non-targeted attacks towards BERT.
arXiv Detail & Related papers (2020-04-23T14:58:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.