Towards Understanding How Machines Can Learn Causal Overhypotheses
- URL: http://arxiv.org/abs/2206.08353v1
- Date: Thu, 16 Jun 2022 17:54:16 GMT
- Title: Towards Understanding How Machines Can Learn Causal Overhypotheses
- Authors: Eliza Kosoy, David M. Chan, Adrian Liu, Jasmine Collins, Bryanna
Kaufmann, Sandy Han Huang, Jessica B. Hamrick, John Canny, Nan Rosemary Ke,
Alison Gopnik
- Abstract summary: Children are adept at many kinds of causal inference and learning.
One of the key challenges for current machine learning algorithms is modeling and understanding causal overhypotheses.
We present a new benchmark -- a flexible environment which allows for the evaluation of existing techniques.
- Score: 4.540122114051773
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work in machine learning and cognitive science has suggested that
understanding causal information is essential to the development of
intelligence. The extensive literature in cognitive science using the "blicket
detector" environment shows that children are adept at many kinds of causal
inference and learning. We propose to adapt that environment for machine
learning agents. One of the key challenges for current machine learning
algorithms is modeling and understanding causal overhypotheses: transferable
abstract hypotheses about sets of causal relationships. In contrast, even young
children spontaneously learn and use causal overhypotheses. In this work, we
present a new benchmark -- a flexible environment which allows for the
evaluation of existing techniques under variable causal overhypotheses -- and
demonstrate that many existing state-of-the-art methods have trouble
generalizing in this environment. The code and resources for this benchmark are
available at https://github.com/CannyLab/casual_overhypotheses.
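The blicket-detector setting behind the benchmark can be sketched in a few lines. The following is a hypothetical toy simulation (the names `detector` and `consistent_hypotheses` are illustrative and are not the benchmark's actual API, which lives in the linked repository): it shows how two causal overhypotheses, a disjunctive rule ("any blicket lights the detector") and a conjunctive rule ("all blickets must be placed together"), induce different hypothesis spaces over the same observations.

```python
# Toy blicket-detector simulation: a hedged sketch of causal overhypotheses,
# not the benchmark's real environment.
from itertools import combinations

def detector(placed_blocks, blickets, rule):
    """Return True if the detector lights up.

    rule: 'disjunctive' -> any placed blicket activates the detector;
          'conjunctive' -> all blickets must be placed together.
    """
    present = blickets & placed_blocks
    if rule == "disjunctive":
        return len(present) > 0
    if rule == "conjunctive":
        return blickets <= placed_blocks
    raise ValueError(rule)

def consistent_hypotheses(blocks, trials):
    """Enumerate (blicket-set, rule) pairs consistent with observed trials.

    trials: list of (placed_blocks, detector_lit) observations.
    An agent holding the right overhypothesis (e.g. "detectors in this
    room are conjunctive") can prune this space with fewer interventions.
    """
    hypotheses = []
    for r in range(1, len(blocks) + 1):
        for subset in combinations(sorted(blocks), r):
            for rule in ("disjunctive", "conjunctive"):
                if all(detector(placed, set(subset), rule) == lit
                       for placed, lit in trials):
                    hypotheses.append((subset, rule))
    return hypotheses

blocks = {"A", "B", "C"}
trials = [({"A"}, False), ({"A", "B"}, True)]
print(consistent_hypotheses(blocks, trials))
```

After these two interventions, four (blicket-set, rule) hypotheses remain consistent; an agent that has already learned the correct overhypothesis about the rule family can halve that space before placing a single block, which is the kind of transfer the benchmark is designed to probe.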
Related papers
- A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z)
- Neural Causal Abstractions [63.21695740637627]
We develop a new family of causal abstractions by clustering variables and their domains.
We show that such abstractions are learnable in practical settings through Neural Causal Models.
Our experiments support the theory and illustrate how to scale causal inferences to high-dimensional settings involving image data.
arXiv Detail & Related papers (2024-01-05T02:00:27Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- The role of prior information and computational power in Machine Learning [0.0]
We discuss how prior information and computational power can be employed to solve a learning problem.
We argue that employing high computational power offers the advantage of higher performance.
arXiv Detail & Related papers (2022-10-31T20:39:53Z)
- Systematic Evaluation of Causal Discovery in Visual Model Based Reinforcement Learning [76.00395335702572]
A central goal for AI and causality is the joint discovery of abstract representations and causal structure.
Existing environments for studying causal induction are poorly suited for this objective because they have complicated task-specific causal graphs.
In this work, our goal is to facilitate research in learning representations of high-level variables as well as causal structures among them.
arXiv Detail & Related papers (2021-07-02T05:44:56Z)
- To do or not to do: finding causal relations in smart homes [2.064612766965483]
This paper introduces a new way to learn causal models from a mixture of experiments on the environment and observational data.
The core of our method is the use of selected interventions; in particular, our learning takes into account variables on which it is impossible to intervene.
We apply our method to a smart home simulation, a use case where knowing causal relations paves the way towards explainable systems.
arXiv Detail & Related papers (2021-05-20T22:36:04Z)
- ACRE: Abstract Causal REasoning Beyond Covariation [90.99059920286484]
We introduce the Abstract Causal REasoning dataset for systematic evaluation of current vision systems in causal induction.
Motivated by the stream of research on causal discovery in Blicket experiments, we query a visual reasoning system with the following four types of questions in either an independent scenario or an interventional scenario.
We observe that pure neural models tend towards an associative strategy, performing at chance level, whereas neuro-symbolic combinations struggle with backward-blocking reasoning.
arXiv Detail & Related papers (2021-03-26T02:42:38Z)
- KANDINSKYPatterns -- An experimental exploration environment for Pattern Analysis and Machine Intelligence [0.0]
We present KANDINSKYPatterns, named after the Russian artist Wassily Kandinsky, who made theoretical contributions to compositivity, i.e. the idea that all perceptions consist of geometrically elementary individual components.
KANDINSKYPatterns have computationally controllable properties, providing ground truth; at the same time, they are easily distinguishable by human observers, i.e., the controlled patterns can be described by both humans and algorithms.
arXiv Detail & Related papers (2021-02-28T14:09:59Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Off-the-shelf deep learning is not enough: parsimony, Bayes and causality [0.8602553195689513]
We discuss opportunities and roadblocks to implementation of deep learning within materials science.
We argue that deep learning and AI are now well positioned to revolutionize fields where causal links are known.
arXiv Detail & Related papers (2020-05-04T15:16:30Z)
- Shortcut Learning in Deep Neural Networks [29.088631285225237]
We seek to distill how many of deep learning's problems can be seen as different symptoms of the same underlying problem: shortcut learning.
Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions, such as real-world scenarios.
We develop recommendations for model interpretation and benchmarking, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications.
arXiv Detail & Related papers (2020-04-16T17:18:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.