Towards a Grounded Theory of Causation for Embodied AI
- URL: http://arxiv.org/abs/2206.13973v1
- Date: Tue, 28 Jun 2022 12:56:43 GMT
- Title: Towards a Grounded Theory of Causation for Embodied AI
- Authors: Taco Cohen
- Abstract summary: Existing frameworks give no indication as to which behaviour policies or physical transformations of state space shall count as interventions.
The framework sketched in this paper describes actions as transformations of state space, for instance induced by an agent running a policy.
This makes it possible to describe in a uniform way both transformations of the micro-state space and abstract models thereof.
- Score: 12.259552039796027
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There exist well-developed frameworks for causal modelling, but these require
rather a lot of human domain expertise to define causal variables and perform
interventions. In order to enable autonomous agents to learn abstract causal
models through interactive experience, the existing theoretical foundations
need to be extended and clarified. Existing frameworks give no guidance
regarding variable choice / representation, and more importantly, give no
indication as to which behaviour policies or physical transformations of state
space shall count as interventions. The framework sketched in this paper
describes actions as transformations of state space, for instance induced by an
agent running a policy. This makes it possible to describe in a uniform way
both transformations of the micro-state space and abstract models thereof, and
say when the latter is veridical / grounded / natural. We then introduce
(causal) variables, define a mechanism as an invariant predictor, and say when
an action can be viewed as a "surgical intervention", thus bringing the
objective of causal representation & intervention skill learning into clearer
focus.
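To make this picture concrete, below is a minimal, hedged sketch of the idea that actions are transformations of a micro-state space and that an abstract model of an action is grounded (veridical) when it commutes with the abstraction map. It is an illustration only, not the paper's formalism; the toy state space, the two actions, and the helper names (push_right, set_first_to_end, abstraction, is_grounded) are assumptions made for this example.

```python
# A minimal, illustrative sketch -- NOT the paper's formalism. It only mirrors
# the abstract's informal picture: (i) actions are transformations of a
# micro-state space, (ii) abstract models act on coarse-grained variables, and
# (iii) an abstract model of an action is "grounded" when it commutes with the
# abstraction map. All names below are hypothetical and chosen for this example.

from itertools import product

# Micro-state space: positions of two objects on a 1-D grid with 5 cells.
STATES = list(product(range(5), range(5)))

def push_right(s):
    """Micro-level action: nudge the first object one cell to the right."""
    a, b = s
    return (min(a + 1, 4), b)

def set_first_to_end(s):
    """Micro-level action: place the first object at the last cell."""
    _, b = s
    return (4, b)

def abstraction(s):
    """Abstract (causal) variables: is each object in the right half of the grid?"""
    a, b = s
    return (a >= 3, b >= 3)

def abstract_push(z):
    """Candidate abstract model of push_right: claims the first variable always becomes True."""
    return (True, z[1])

def abstract_set(z):
    """Candidate abstract model of set_first_to_end."""
    return (True, z[1])

def is_grounded(micro_action, abstract_action):
    """An abstract model is grounded iff it commutes with the abstraction on every micro-state."""
    return all(
        abstraction(micro_action(s)) == abstract_action(abstraction(s))
        for s in STATES
    )

print(is_grounded(push_right, abstract_push))        # False: e.g. a = 0 stays in the left half
print(is_grounded(set_first_to_end, abstract_set))   # True: commutes on every micro-state

# In this toy setting, set_first_to_end also behaves like a surgical intervention on the
# first abstract variable: it fixes it to True while leaving the second variable untouched.
print(all(abstraction(set_first_to_end(s))[1] == abstraction(s)[1] for s in STATES))  # True
```

The toy example shows the intended contrast: the nudge action admits no deterministic abstract model at this coarse-graining, whereas the "set to the end" action does, and it additionally leaves the other abstract variable invariant, which is the flavour of a surgical intervention described in the abstract.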
Related papers
- Sequential Representation Learning via Static-Dynamic Conditional Disentanglement [58.19137637859017]
This paper explores self-supervised disentangled representation learning within sequential data, focusing on separating time-independent and time-varying factors in videos.
We propose a new model that breaks the usual independence assumption between those factors by explicitly accounting for the causal relationship between the static/dynamic variables.
Experiments show that the proposed approach outperforms previous complex state-of-the-art techniques in scenarios where the dynamics of a scene are influenced by its content.
arXiv Detail & Related papers (2024-08-10T17:04:39Z)
- Hierarchical Invariance for Robust and Interpretable Vision Tasks at Larger Scales [54.78115855552886]
We show how to construct over-complete invariants with a Convolutional Neural Network (CNN)-like hierarchical architecture.
With this over-completeness, discriminative features for the task can be adaptively formed in a Neural Architecture Search (NAS)-like manner.
For robust and interpretable vision tasks at larger scales, hierarchical invariant representations can be considered an effective alternative to traditional CNNs and invariants.
arXiv Detail & Related papers (2024-02-23T16:50:07Z)
- Inverse Decision Modeling: Learning Interpretable Representations of Behavior [72.80902932543474]
We develop an expressive, unifying perspective on inverse decision modeling.
We use this perspective to formalize the inverse problem (as a descriptive model).
We illustrate how this structure enables learning (interpretable) representations of (bounded) rationality.
arXiv Detail & Related papers (2023-10-28T05:05:01Z)
- Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z)
- On the Interventional Kullback-Leibler Divergence [11.57430292133273]
We introduce the Interventional Kullback-Leibler divergence to quantify both structural and distributional differences between causal models.
We propose a sufficient condition on the intervention targets to identify subsets of observed variables on which the models provably agree or disagree (a small illustrative sketch of the underlying idea follows this list).
arXiv Detail & Related papers (2023-02-10T17:03:29Z)
- Emergent Causality and the Foundation of Consciousness [0.0]
We argue that in the absence of a $do$ operator, an intervention can be represented by a variable.
In a narrow sense this describes what it is to be aware, and is a mechanistic explanation of aspects of consciousness.
arXiv Detail & Related papers (2023-02-07T01:41:23Z)
- Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize the goal of recovering latent variables and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
- Feature-Based Interpretable Reinforcement Learning based on State-Transition Models [3.883460584034766]
Growing concerns regarding the operational use of AI models in the real world have caused a surge of interest in explaining AI models' decisions to humans.
We propose a method for offering local explanations on risk in reinforcement learning.
arXiv Detail & Related papers (2021-05-14T23:43:11Z)
- Models we Can Trust: Toward a Systematic Discipline of (Agent-Based) Model Interpretation and Validation [0.0]
We advocate the development of a discipline of interacting with and extracting information from models.
We outline some directions for the development of such a discipline.
arXiv Detail & Related papers (2021-02-23T10:52:22Z)
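As referenced in the Interventional Kullback-Leibler Divergence entry above, the sketch below is a generic, hedged illustration of the underlying idea of comparing two causal models via the KL divergence of their distributions under matched interventions. It is not the definition proposed in that paper; the toy models, parameters, intervention set, and helper names (kl, interventional_dist) are assumptions made for this example.

```python
# A toy comparison of two causal models over binary variables (X, Y) by the KL
# divergence of P(Y | do(X = x)), averaged over a small set of interventions.
# This is a generic illustration of the idea, not the cited paper's definition.

from math import log

def kl(p, q):
    """KL divergence between two discrete distributions given as dicts over the same support."""
    return sum(p[x] * log(p[x] / q[x]) for x in p if p[x] > 0)

def interventional_dist(copy_prob, do_x):
    """P(Y | do(X = do_x)) for a toy model in which Y copies X with probability copy_prob."""
    return {do_x: copy_prob, 1 - do_x: 1 - copy_prob}

# Model A: Y copies X with probability 0.9; model B: with probability 0.6.
interventions = [0, 1]
divergence = sum(
    kl(interventional_dist(0.9, x), interventional_dist(0.6, x))
    for x in interventions
) / len(interventions)

print(round(divergence, 4))  # average disagreement between the two models under do(X = x)
```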
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.