Multiple Realizability and the Rise of Deep Learning
- URL: http://arxiv.org/abs/2405.13231v1
- Date: Tue, 21 May 2024 22:36:49 GMT
- Title: Multiple Realizability and the Rise of Deep Learning
- Authors: Sam Whitman McGrath, Jacob Russin
- Abstract summary: The paper explores the implications of deep learning models for the multiple realizability thesis.
It suggests that deep neural networks may play a crucial role in formulating and evaluating hypotheses about cognition.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The multiple realizability thesis holds that psychological states may be implemented in a diversity of physical systems. The deep learning revolution seems to be bringing this possibility to life, offering the most plausible examples of man-made realizations of sophisticated cognitive functions to date. This paper explores the implications of deep learning models for the multiple realizability thesis. Among other things, it challenges the widely held view that multiple realizability entails that the study of the mind can and must be pursued independently of the study of its implementation in the brain or in artificial analogues. Although its central contribution is philosophical, the paper has substantial methodological upshots for contemporary cognitive science, suggesting that deep neural networks may play a crucial role in formulating and evaluating hypotheses about cognition, even if they are interpreted as implementation-level models. In the age of deep learning, multiple realizability possesses a renewed significance.
Related papers
- Theories of synaptic memory consolidation and intelligent plasticity for continual learning
  Synaptic plasticity mechanisms must maintain and evolve an internal state, and plasticity algorithms must leverage this internal state to intelligently regulate plasticity at individual synapses.
  arXiv Detail & Related papers (2024-05-27T08:13:39Z)
- Philosophy of Cognitive Science in the Age of Deep Learning
  Deep learning has enabled major advances across most areas of artificial intelligence research. This perspective paper surveys key areas where its contributions to cognitive science can be especially fruitful.
  arXiv Detail & Related papers (2024-05-07T06:39:47Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment
  We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology. We organize the ever-growing set of brain-inspired learning schemes into six general families and consider them in the context of backpropagation of errors. The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
  arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Visual cognition in multimodal large language models
  Recent advancements have rekindled interest in the potential to emulate human-like cognitive abilities. This paper evaluates the current state of vision-based large language models in the domains of intuitive physics, causal reasoning, and intuitive psychology.
  arXiv Detail & Related papers (2023-11-27T18:58:34Z)
- Brain-Inspired Computational Intelligence via Predictive Coding
  Predictive coding (PC) has shown promising performance in machine intelligence tasks. PC can model information processing in different brain areas and can be used in cognitive control and robotics.
  arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Behavior quantification as the missing link between fields: Tools for digital psychiatry and their role in the future of neurobiology
  Current technologies are an exciting opportunity to improve behavioral characterization. New capabilities for continuous collection of passive sensor streams, such as phone GPS or smartwatch accelerometers, open avenues of novel questioning. There is huge potential in what can theoretically be captured by current technologies, but this in itself presents a large computational challenge.
  arXiv Detail & Related papers (2023-05-24T17:45:10Z)
- Machine Psychology
  We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology. We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table. It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
  arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Rejecting Cognitivism: Computational Phenomenology for Deep Learning
  We propose a non-representationalist framework for deep learning relying on a novel method: computational phenomenology. We reject the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities.
  arXiv Detail & Related papers (2023-02-16T20:05:06Z)
- Multimodal foundation models are better simulators of the human brain
  We present a newly designed multimodal foundation model pre-trained on 15 million image-text pairs. We find that both visual and lingual encoders trained multimodally are more brain-like than their unimodal counterparts.
  arXiv Detail & Related papers (2022-08-17T12:36:26Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study
  Lack of interpretability, robustness, and out-of-distribution generalization are becoming challenges for existing visual models. Inspired by the strong inference ability of human-level agents, recent years have witnessed great effort in developing causal reasoning paradigms. This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussions, and bring to the forefront the urgency of developing novel causal reasoning methods.
  arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks
  Psychological curiosity plays a significant role in human intelligence, enhancing learning through exploration and information acquisition. In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning. Curiosity-driven learning (CDL) has become increasingly popular, where agents are self-motivated to learn novel knowledge.
  arXiv Detail & Related papers (2022-01-20T17:07:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.