Observing Interventions: A logic for thinking about experiments
- URL: http://arxiv.org/abs/2111.12978v1
- Date: Thu, 25 Nov 2021 09:26:45 GMT
- Title: Observing Interventions: A logic for thinking about experiments
- Authors: Fausto Barbero, Katrin Schulz, Fernando R. Velázquez-Quesada, Kaibo Xie
- Abstract summary: This paper takes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper takes a first step towards a logic of learning from experiments.
For this, we investigate formal frameworks for modeling the interaction of
causal and (qualitative) epistemic reasoning. Crucial for our approach is the
idea that the notion of an intervention can be used as a formal expression of a
(real or hypothetical) experiment. As a first step, we extend well-known
causal models with a simple Hintikka-style representation of the epistemic
state of an agent. In the resulting setting, one can talk not only about the
knowledge of an agent about the values of variables and how interventions
affect them, but also about knowledge update. The resulting logic can model
reasoning about thought experiments. However, it is unable to account for
learning from experiments, as is clearly brought out by the fact that it
validates the no-learning principle for interventions. Therefore, in a second
step, we implement a more complex notion of knowledge that allows an agent to
observe (measure) certain variables when an experiment is carried out. This
extended system does allow for learning from experiments. For all the proposed
logical systems, we provide a sound and complete axiomatization.
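To make the contrast between the two systems concrete, here is a minimal sketch in Python (a toy model with assumed structural equations, not the authors' formal semantics): an epistemic state is a set of candidate causal models, a bare intervention eliminates none of them, and measuring a variable after the intervention does.

```python
# A toy illustration, not the paper's formal semantics: each "world" is a
# candidate causal model the agent considers possible. The structural
# equations used here are assumed for the example.

# World 1: X causes Y (Y := X). World 2: Y is constant, independent of X.
WORLDS = [
    {"name": "Y := X", "eq": lambda x: x, "X": 0},
    {"name": "Y := 1", "eq": lambda x: 1, "X": 0},
]

def y_after(world, do_x=None):
    """Value of Y in a world, optionally under the intervention do(X = do_x)."""
    x = world["X"] if do_x is None else do_x
    return world["eq"](x)

def update(worlds, do_x, observed_y):
    """Knowledge update: keep only the worlds consistent with measuring
    Y = observed_y after performing do(X = do_x)."""
    return [w for w in worlds if y_after(w, do_x) == observed_y]

# Thought experiment (first system): the agent can reason about what would
# hold under do(X = 0) in each world, but the intervention alone eliminates
# no world -- the no-learning principle.
print(len(WORLDS))  # still 2 candidate worlds

# Real experiment (second system): perform do(X = 0) and *measure* Y.
# Observing Y = 0 refutes the "Y := 1" world, so the agent learns.
print([w["name"] for w in update(WORLDS, do_x=0, observed_y=0)])  # ['Y := X']
```

The sketch locates the learning in the measurement step: the intervention by itself only shifts every candidate world, which is exactly why the first system validates the no-learning principle.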
Related papers
- Crystal: Introspective Reasoners Reinforced with Self-Feedback [118.53428015478957]
We propose a novel method to develop an introspective commonsense reasoner, Crystal.
To tackle commonsense problems, it first introspects for knowledge statements related to the given question, and subsequently makes an informed prediction that is grounded in the previously introspected knowledge.
Experiments show that Crystal significantly outperforms both the standard supervised finetuning and chain-of-thought distilled methods, and enhances the transparency of the commonsense reasoning process.
arXiv Detail & Related papers (2023-10-07T21:23:58Z)
- Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z)
- A Semantic Approach to Decidability in Epistemic Planning (Extended Version) [72.77805489645604]
We use a novel semantic approach to achieve decidability.
Specifically, we augment the logic of knowledge S5$_n$ with an interaction axiom called (knowledge) commutativity.
We prove that our framework admits a finitary non-fixpoint characterization of common knowledge, which is of independent interest.
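The summary leaves the axiom implicit; judging only by its name, (knowledge) commutativity plausibly has a shape like the following (our assumption, not a quotation from the paper):

```latex
% Assumed form of the (knowledge) commutativity interaction axiom,
% for distinct agents a and b:
\[
  K_a K_b \varphi \rightarrow K_b K_a \varphi
\]
```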
arXiv Detail & Related papers (2023-07-28T11:26:26Z)
- Understanding the Unforeseen via the Intentional Stance [0.0]
We present an architecture and system for understanding novel behaviors of an observed agent.
The two main features of our approach are the adoption of Dennett's intentional stance and analogical reasoning.
arXiv Detail & Related papers (2022-11-01T14:14:14Z)
- Towards Unifying Perceptual Reasoning and Logical Reasoning [0.6853165736531939]
A recent study of logic presents a view of logical reasoning as Bayesian inference.
We show that the model unifies the two essential processes common in perceptual and logical systems.
arXiv Detail & Related papers (2022-06-27T10:32:47Z)
- A Quantitative Symbolic Approach to Individual Human Reasoning [0.0]
We take findings from the literature and show how these, formalized as cognitive principles within a logical framework, can establish a quantitative notion of reasoning.
We employ techniques from non-monotonic reasoning and computer science, namely a solving paradigm called answer set programming (ASP).
Finally, we can fruitfully use plausibility reasoning in ASP to test the effects of an existing experiment and explain different majority responses.
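For readers unfamiliar with the paradigm, here is a minimal ASP sketch run through the clingo Python API; the toy program (a classic non-monotonic default) is our assumption for illustration, not the paper's encoding:

```python
# A toy ASP program: birds normally fly, penguins are an exception.
# Run with the clingo Python API (pip install clingo).
import clingo

PROGRAM = """
bird(tweety). bird(polly). penguin(polly).
% Non-monotonic default: a bird flies unless it is known to be abnormal.
flies(X) :- bird(X), not abnormal(X).
abnormal(X) :- penguin(X).
"""

ctl = clingo.Control()
ctl.add("base", [], PROGRAM)       # load the program
ctl.ground([("base", [])])         # ground the rules
ctl.solve(on_model=lambda m: print("Answer set:", m))
# -> the single answer set contains flies(tweety) but not flies(polly).
```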
arXiv Detail & Related papers (2022-05-10T16:43:47Z)
- Nested Counterfactual Identification from Arbitrary Surrogate Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
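As a concrete instance (in standard counterfactual notation; this identity is a well-known special case of unnesting, assumed here as an illustration rather than taken from the paper), the nested counterfactual underlying the natural direct effect unnests into a sum over the mediator's values:

```latex
% Unnesting the nested counterfactual Y_{x, M_{x'}} (illustrative instance):
\[
  P\big(Y_{x,\,M_{x'}} = y\big) \;=\; \sum_{m} P\big(Y_{x,m} = y,\; M_{x'} = m\big)
\]
```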
arXiv Detail & Related papers (2021-07-07T12:51:04Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
- To do or not to do: finding causal relations in smart homes [2.064612766965483]
This paper introduces a new way to learn causal models from a mixture of experiments on the environment and observational data.
The core of our method is the use of selected interventions; in particular, our learning takes into account the variables on which it is impossible to intervene.
We use our method on a smart home simulation, a use case where knowing causal relations paves the way towards explainable systems.
arXiv Detail & Related papers (2021-05-20T22:36:04Z)
- Causal Curiosity: RL Agents Discovering Self-supervised Experiments for Causal Representation Learning [24.163616087447874]
We introduce causal curiosity, a novel intrinsic reward.
We show that it allows our agents to learn optimal sequences of actions.
We also show that the knowledge of causal factor representations aids zero-shot learning for more complex tasks.
arXiv Detail & Related papers (2020-10-07T02:07:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.