Applying Deutsch's concept of good explanations to artificial
intelligence and neuroscience -- an initial exploration
- URL: http://arxiv.org/abs/2012.09318v2
- Date: Thu, 24 Dec 2020 23:06:46 GMT
- Title: Applying Deutsch's concept of good explanations to artificial
intelligence and neuroscience -- an initial exploration
- Authors: Daniel C. Elton
- Abstract summary: We investigate Deutsch's hard-to-vary principle and how it relates to more formalized principles in deep learning.
We examine what role hard-to-vary explanations play in intelligence by looking at the human brain.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence has made great strides since the deep learning
revolution, but AI systems still struggle to extrapolate outside of their
training data and adapt to new situations. For inspiration we look to the
domain of science, where scientists have been able to develop theories which
show remarkable ability to extrapolate and sometimes predict the existence of
phenomena which have never been observed before. According to David Deutsch,
this type of extrapolation, which he calls "reach", is due to scientific
theories being hard to vary. In this work we investigate Deutsch's hard-to-vary
principle and how it relates to more formalized principles in deep learning
such as the bias-variance trade-off and Occam's razor. We distinguish internal
variability, which is how much a model/theory can be varied internally while
still yielding the same predictions, from external variability, which is how
much a model must be varied to accurately predict new, out-of-distribution data. We
discuss how to measure internal variability using the size of the Rashomon set
and how to measure external variability using Kolmogorov complexity. We explore
what role hard-to-vary explanations play in intelligence by looking at the
human brain, where we distinguish two learning systems. The first system
operates similarly to deep learning and likely underlies most of perception and
motor control while the second is a more creative system capable of generating
hard-to-vary explanations of the world. We argue that figuring out how to
replicate this second system is a key challenge which needs to be solved in
order to realize
artificial general intelligence. We make contact with the framework of
Popperian epistemology which rejects induction and asserts that knowledge
generation is an evolutionary process which proceeds through conjecture and
refutation.
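To make the abstract's two measures more concrete, here is a minimal, self-contained Python sketch. It is not from the paper: the linear model, the random-sampling scheme, the tolerance epsilon, and the zlib compression proxy are all assumptions made purely for illustration. It treats internal variability as the fraction of randomly sampled models whose training loss falls within a tolerance of the best sampled loss (a crude stand-in for the size of the Rashomon set), and it approximates external variability with the compressed description length of a model's parameters, since Kolmogorov complexity itself is uncomputable.

# Illustrative sketch (not the paper's procedure): rough proxies for internal
# variability (Rashomon set size) and external variability (description length).
import zlib
import numpy as np


def squared_loss(w, X, y):
    # Mean squared error of a linear model w on data (X, y).
    return float(np.mean((X @ w - y) ** 2))


def rashomon_fraction(X, y, n_samples=10_000, epsilon=0.05, scale=1.0, seed=0):
    # Fraction of randomly sampled linear models whose loss lies within
    # `epsilon` of the best sampled loss -- a crude proxy for the size of the
    # Rashomon set, and hence for internal variability.
    rng = np.random.default_rng(seed)
    weights = rng.normal(0.0, scale, size=(n_samples, X.shape[1]))
    losses = np.array([squared_loss(w, X, y) for w in weights])
    best = losses.min()
    return float(np.mean(losses <= best + epsilon))


def description_length(w, precision=3):
    # Compressed length (bytes) of a rounded parameter vector: a rough,
    # compression-based stand-in for Kolmogorov complexity.
    text = ",".join(f"{v:.{precision}f}" for v in np.asarray(w).ravel())
    return len(zlib.compress(text.encode("utf-8")))


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    true_w = np.array([1.0, -2.0, 0.0, 0.5, 0.0])
    y = X @ true_w + 0.1 * rng.normal(size=200)
    print("Rashomon fraction (internal variability proxy):", rashomon_fraction(X, y))
    print("Description length (complexity proxy):", description_length(true_w), "bytes")

In this toy setup, a looser tolerance epsilon yields a larger Rashomon fraction (an "easier to vary" model class), while a longer, less compressible parameter description signals a more complex model; both choices are placeholders for the more careful measures the abstract alludes to.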
Related papers
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian
Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, one such architecture that realizes the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Grid-SD2E: A General Grid-Feedback in a System for Cognitive Learning [0.5221459608786241]
This study is inspired in part by grid cells and aims to create a more general and robust grid module.
We construct an interactive and self-reinforcing cognitive system together with Bayesian reasoning.
The smallest computing unit is extracted, which is analogous to a single neuron in the brain.
arXiv Detail & Related papers (2023-04-04T14:54:12Z)
- Observing Interventions: A logic for thinking about experiments [62.997667081978825]
This paper makes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
arXiv Detail & Related papers (2021-11-25T09:26:45Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list of such principles, focusing on those that mostly concern higher-level and sequential conscious processing.
The aim in clarifying these particular principles is that they could potentially help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Scientific intuition inspired by machine learning generated hypotheses [2.294014185517203]
We shift the focus to the insights and knowledge obtained by the machine learning models themselves.
We apply gradient-boosted decision trees to extract human-interpretable insights from large data sets in chemistry and physics (a loose illustrative sketch follows after this list).
The ability to go beyond numerics opens the door to use machine learning to accelerate the discovery of conceptual understanding.
arXiv Detail & Related papers (2020-10-27T12:12:12Z)
- Understanding understanding: a renormalization group inspired model of (artificial) intelligence [0.0]
This paper is about the meaning of understanding in scientific and in artificial intelligent systems.
We give a mathematical definition of understanding in which, contrary to common wisdom, we define the probability space on the input set.
We show how scientific understanding fits into this framework and demonstrate the difference between a scientific task and pattern recognition.
arXiv Detail & Related papers (2020-10-26T11:11:46Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on domains such as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
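As a loose illustration of the gradient-boosted-trees idea mentioned in the "Scientific intuition inspired by machine learning generated hypotheses" entry above, the following sketch fits a gradient-boosted tree model with scikit-learn and reads off its feature importances as a crude form of human-interpretable insight. This is not that paper's actual pipeline: the synthetic data, descriptor names, and hyperparameters are placeholders chosen only for illustration.

# Minimal sketch, assuming scikit-learn is available; the synthetic data and
# descriptor names are placeholders, not the cited paper's chemistry/physics sets.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                      # four hypothetical descriptors
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.normal(size=500)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X, y)

# Feature importances offer a first, crude layer of interpretability:
# which descriptors the boosted trees actually rely on.
for name, importance in zip(["d0", "d1", "d2", "d3"], model.feature_importances_):
    print(f"{name}: {importance:.3f}")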