Philosophy of Cognitive Science in the Age of Deep Learning
- URL: http://arxiv.org/abs/2405.04048v1
- Date: Tue, 7 May 2024 06:39:47 GMT
- Title: Philosophy of Cognitive Science in the Age of Deep Learning
- Authors: Raphaël Millière
- Abstract summary: Deep learning has enabled major advances across most areas of artificial intelligence research.
This perspective paper surveys key areas where philosophers' contributions can be especially fruitful.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has enabled major advances across most areas of artificial intelligence research. This remarkable progress extends beyond mere engineering achievements and holds significant relevance for the philosophy of cognitive science. Deep neural networks have made significant strides in overcoming the limitations of older connectionist models that once occupied the centre stage of philosophical debates about cognition. This development is directly relevant to long-standing theoretical debates in the philosophy of cognitive science. Furthermore, ongoing methodological challenges related to the comparative evaluation of deep neural networks stand to benefit greatly from interdisciplinary collaboration with philosophy and cognitive science. The time is ripe for philosophers to explore foundational issues related to deep learning and cognition; this perspective paper surveys key areas where their contributions can be especially fruitful.
Related papers
- Multiple Realizability and the Rise of Deep Learning [0.0]
The paper explores the implications of deep learning models for the multiple realizability thesis.
It suggests that deep neural networks may play a crucial role in formulating and evaluating hypotheses about cognition.
arXiv Detail & Related papers (2024-05-21T22:36:49Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- A Survey of Deep Learning for Mathematical Reasoning [71.88150173381153]
We review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade.
Recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning.
arXiv Detail & Related papers (2022-12-20T18:46:16Z)
- Deep Causal Learning: Representation, Discovery and Inference [2.696435860368848]
Causal learning reveals the essential relationships that underpin phenomena and delineates the mechanisms by which the world evolves.
Traditional causal learning methods face numerous challenges and limitations, including high-dimensional variables, unstructured variables, optimization problems, unobserved confounders, selection biases, and estimation inaccuracies.
Deep causal learning, which leverages deep neural networks, offers innovative insights and solutions for addressing these challenges.
arXiv Detail & Related papers (2022-11-07T09:00:33Z)
- Theoretical Perspectives on Deep Learning Methods in Inverse Problems [115.93934028666845]
We focus on generative priors, untrained neural network priors, and unfolding algorithms.
In addition to summarizing existing results in these topics, we highlight several ongoing challenges and open problems.
arXiv Detail & Related papers (2022-06-29T02:37:50Z)
- Deep Learning Opacity in Scientific Discovery [0.15229257192293197]
I argue that the disconnect between philosophical pessimism and scientific optimism is driven by a failure to examine how AI is actually used in science.
I show that, in order to understand the justification for AI-powered breakthroughs, philosophers must examine the role played by deep learning as part of a wider process of discovery.
arXiv Detail & Related papers (2022-06-01T14:30:49Z)
- Mind the gap: Challenges of deep learning approaches to Theory of Mind [0.0]
Theory of Mind is an essential ability of humans to infer the mental states of others.
Here we provide a coherent summary of the potential, current progress, and problems of deep learning approaches to Theory of Mind.
arXiv Detail & Related papers (2022-03-30T15:48:05Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL) has become increasingly popular, in which agents are self-motivated to learn novel knowledge.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
- Limitations of Deep Neural Networks: a discussion of G. Marcus' critical appraisal of deep learning [0.0]
Deep neural networks have been applied with great success in medical imaging, semi-autonomous vehicles, e-commerce, genetics research, speech recognition, particle physics, experimental art, economic forecasting, environmental science, industrial manufacturing, and a wide variety of applications in nearly every field.
This study examines some of the limitations of deep neural networks, with the intention of pointing towards potential paths for future research and of clearing up some metaphysical misconceptions, held by numerous researchers, that may misdirect them.
arXiv Detail & Related papers (2020-12-22T12:11:19Z)
- Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness [63.627760598441796]
We provide an in-depth review of the field of adversarial robustness in deep learning.
We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks.
We provide an overview of the main emerging applications of adversarial robustness beyond security.
arXiv Detail & Related papers (2020-10-19T16:03:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.