Induction, Popper, and machine learning
- URL: http://arxiv.org/abs/2110.00840v1
- Date: Sat, 2 Oct 2021 16:52:28 GMT
- Title: Induction, Popper, and machine learning
- Authors: Bruce Nielson, Daniel C. Elton
- Abstract summary: We show that most AI algorithms in use today can be understood as using an evolutionary trial and error process searching over a solution space.
We argue that a universal Darwinian framework provides a better foundation for understanding AI systems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Francis Bacon popularized the idea that science is based on a process of
induction by which repeated observations are, in some unspecified way,
generalized to theories based on the assumption that the future resembles the
past. This idea was criticized by Hume and others as untenable, leading to the
famous problem of induction. Karl Popper resolved this problem by
demonstrating that induction is not the basis for science and that the
development of scientific knowledge instead rests on
the same principles as biological evolution. Today, machine learning is also
taught as being rooted in induction from big data. Solomonoff induction
implemented in an idealized Bayesian agent (Hutter's AIXI) is widely discussed
and touted as a framework for understanding AI algorithms, even though
real-world attempts to implement something like AIXI immediately encounter
fatal problems. In this paper, we contrast frameworks based on induction with
Donald T. Campbell's universal Darwinism. We show that most AI algorithms in
use today can be understood as using an evolutionary trial and error process
searching over a solution space. We argue that a universal
Darwinian framework provides a better foundation for understanding AI systems.
Moreover, at a more meta level the process of development of all AI algorithms
can be understood under the framework of universal Darwinism.
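The abstract's central claim, that most AI algorithms can be read as an evolutionary trial-and-error search over a solution space, can be illustrated with a minimal sketch. This is a generic (1+λ)-style evolutionary loop on a toy bit-string problem, an illustrative assumption rather than an algorithm taken from the paper: generate variations of the current best candidate, eliminate the failures, and keep what survives.

```python
import random

def evolve(fitness, genome_len=10, population=20, generations=100, seed=0):
    """Minimal evolutionary trial-and-error search over bit-string solutions.

    Repeatedly generates variations (mutations) of the best candidate found
    so far and retains only those that survive selection -- the
    conjecture-and-refutation loop that universal Darwinism identifies in
    both science and machine learning.
    """
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(genome_len)]
    for _ in range(generations):
        # Variation: propose mutated copies of the current best candidate.
        trials = []
        for _ in range(population):
            child = best[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one random bit
            trials.append(child)
        # Selection: a trial replaces the incumbent only if it scores
        # strictly better (error elimination).
        for child in trials:
            if fitness(child) > fitness(best):
                best = child
    return best

# Toy objective (a hypothetical stand-in for any loss function):
# maximize the number of ones in the genome.
solution = evolve(fitness=sum)
print(solution)
```

The same skeleton describes gradient-free hyperparameter search, genetic programming, and, at a stretch, SGD itself: candidate solutions are varied and selected against an objective rather than "induced" from data.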
Related papers
- Problems in AI, their roots in philosophy, and implications for science and society [0.0]
More attention should be paid to the philosophical aspects of AI technology and its use.
It is argued that this deficit is generally combined with philosophical misconceptions about the growth of knowledge.
arXiv Detail & Related papers (2024-07-22T14:38:54Z)
- A Triumvirate of AI Driven Theoretical Discovery [0.0]
We argue that while the theorist is in no way in danger of being replaced by AI in the near future, the hybrid of human expertise and AI algorithms will become an integral part of theoretical discovery.
arXiv Detail & Related papers (2024-05-30T11:57:00Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems [120.297940190903]
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Applying Deutsch's concept of good explanations to artificial intelligence and neuroscience -- an initial exploration [0.0]
We investigate Deutsch's hard-to-vary principle and how it relates to more formalized principles in deep learning.
We examine what role hard-to-vary explanations play in intelligence by looking at the human brain.
arXiv Detail & Related papers (2020-12-16T23:23:22Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Explaining AI as an Exploratory Process: The Peircean Abduction Model [0.2676349883103404]
Abductive inference has been defined in many ways.
The challenge of implementing abductive reasoning and the challenge of automating the explanation process are closely linked.
This analysis provides a theoretical framework for understanding what the XAI researchers are already doing.
arXiv Detail & Related papers (2020-09-30T17:10:37Z)
- Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.