Rejecting Cognitivism: Computational Phenomenology for Deep Learning
- URL: http://arxiv.org/abs/2302.09071v1
- Date: Thu, 16 Feb 2023 20:05:06 GMT
- Title: Rejecting Cognitivism: Computational Phenomenology for Deep Learning
- Authors: Pierre Beckmann, Guillaume Köstner, Inês Hipólito
- Abstract summary: We propose a non-representationalist framework for deep learning relying on a novel method: computational phenomenology.
We reject the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities.
- Score: 5.070542698701158
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a non-representationalist framework for deep learning relying on a
novel method: computational phenomenology, a dialogue between the first-person
perspective (relying on phenomenology) and the mechanisms of computational
models. We thereby reject the modern cognitivist interpretation of deep
learning, according to which artificial neural networks encode representations
of external entities. This interpretation mainly relies on
neuro-representationalism, a position that combines a strong ontological
commitment towards scientific theoretical entities and the idea that the brain
operates on symbolic representations of these entities. We proceed as follows:
after offering a review of cognitivism and neuro-representationalism in the
field of deep learning, we first elaborate a phenomenological critique of these
positions; we then sketch out computational phenomenology and distinguish it
from existing alternatives; finally we apply this new method to deep learning
models trained on specific tasks, in order to formulate a conceptual framework
of deep learning that allows one to think of artificial neural networks'
mechanisms in terms of lived experience.
Related papers
- Neuropsychology and Explainability of AI: A Distributional Approach to the Relationship Between Activation Similarity of Neural Categories in Synthetic Cognition [0.11235145048383502]
We propose an approach to explainability of artificial neural networks that draws on concepts from human cognition.
We show that the categorical segment created by a neuron is actually the result of a superposition of categorical sub-dimensions within its input vector space.
arXiv Detail & Related papers (2024-10-23T05:27:09Z)
- Neuropsychology of AI: Relationship Between Activation Proximity and Categorical Proximity Within Neural Categories of Synthetic Cognition [0.11235145048383502]
This study focuses on synthetic neural cognition as a new type of study object within cognitive psychology.
The goal is to make artificial neural networks of language models more explainable.
This approach involves transposing concepts from cognitive psychology to the interpretive construction of artificial neural cognition.
arXiv Detail & Related papers (2024-10-08T12:34:13Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning; a generic sketch of this style of local learning appears after this list.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- From Neural Activations to Concepts: A Survey on Explaining Concepts in Neural Networks [15.837316393474403]
Concepts can act as a natural link between learning and reasoning.
Knowledge can not only be extracted from neural networks but concept knowledge can also be inserted into neural network architectures.
arXiv Detail & Related papers (2023-10-18T11:08:02Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in these terms.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Mapping Knowledge Representations to Concepts: A Review and New Perspectives [0.6875312133832078]
This review focuses on research that aims to associate internal representations with human understandable concepts.
We find this taxonomy, together with theories of causality, useful for understanding what can and cannot be expected from neural network explanations.
The analysis additionally uncovers an ambiguity in the reviewed literature related to the goal of model explainability.
arXiv Detail & Related papers (2022-12-31T12:56:12Z)
- Interpretability of Neural Network With Physiological Mechanisms [5.1971653175509145]
Deep learning remains a powerful state-of-the-art technique that has achieved extraordinary accuracy in various regression and classification tasks.
The neural network model was originally proposed to improve understanding of the complex human brain through mathematical expression.
Recent deep learning techniques, however, are mostly treated as black-box approximators and thereby lose the interpretability of their functional processes.
arXiv Detail & Related papers (2022-03-24T21:40:04Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
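Several of the entries above (the CSDP papers) mention forward-forward-based, backpropagation-free learning. As a point of reference only, the sketch below shows a generic forward-forward-style update in which each layer is trained with a purely local "goodness" objective; it is not the CSDP rule, nor the spiking or memristor models from the papers listed here, and the layer sizes, goodness threshold, and learning rate are illustrative assumptions.

```python
import numpy as np

# A minimal, hedged sketch of forward-forward-style, backpropagation-free
# learning: each layer is trained locally to produce high "goodness"
# (sum of squared activations) on positive data and low goodness on
# negative data. All numbers below are illustrative assumptions.

rng = np.random.default_rng(0)
sizes = [784, 256, 256]                      # assumed layer widths
W = [rng.normal(0.0, 0.05, (i, o)) for i, o in zip(sizes[:-1], sizes[1:])]
theta, lr = 2.0, 0.03                        # goodness threshold, step size

def normalize(h):
    # Length-normalize so later layers cannot rely on overall magnitude.
    return h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-8)

def train_step(x_pos, x_neg):
    """One local update per layer; no error signal crosses layer boundaries."""
    h_pos, h_neg = x_pos, x_neg
    for li, w in enumerate(W):
        a_pos = np.maximum(h_pos @ w, 0.0)   # ReLU activations
        a_neg = np.maximum(h_neg @ w, 0.0)
        grad = np.zeros_like(w)
        for a, h, positive in ((a_pos, h_pos, True), (a_neg, h_neg, False)):
            g = (a ** 2).sum(axis=1)                    # per-example goodness
            p = 1.0 / (1.0 + np.exp(-(g - theta)))      # P(sample is positive)
            dL_dg = (p - 1.0) if positive else p        # logistic-loss gradient
            grad += h.T @ (dL_dg[:, None] * 2.0 * a)    # local chain rule only
        W[li] = w - lr * grad / len(a_pos)
        # Detached, normalized activity becomes the next layer's input.
        h_pos, h_neg = normalize(a_pos), normalize(a_neg)

# Toy usage: real samples as positives, shuffled pixels as negatives.
x_pos = rng.random((32, 784))
x_neg = rng.permutation(x_pos.ravel()).reshape(32, 784)
train_step(x_pos, x_neg)
```

The only point of the sketch is that no error signal propagates across layer boundaries: each layer updates its weights from its own activations, which is the sense in which such schemes are described as backpropagation-free.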