What does it mean to represent? Mental representations as falsifiable memory patterns
- URL: http://arxiv.org/abs/2203.02956v1
- Date: Sun, 6 Mar 2022 12:52:42 GMT
- Title: What does it mean to represent? Mental representations as falsifiable memory patterns
- Authors: Eloy Parra-Barrero and Yulia Sandamirskaya
- Abstract summary: We argue that causal and teleological approaches fail to provide a satisfactory account of representation.
We sketch an alternative according to which representations correspond to inferred latent structures in the world.
These structures are assumed to have certain properties objectively, which allows for planning, prediction, and detection of unexpected events.
- Score: 8.430851504111585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Representation is a key notion in neuroscience and artificial intelligence
(AI). However, a longstanding philosophical debate highlights that specifying
what counts as representation is trickier than it seems. With this brief
opinion paper we would like to draw attention to the philosophical problem of
representation and provide an implementable solution. We note
that causal and teleological approaches often assumed by neuroscientists and
engineers fail to provide a satisfactory account of representation. We sketch
an alternative according to which representations correspond to inferred latent
structures in the world, identified on the basis of conditional patterns of
activation. These structures are assumed to have certain properties
objectively, which allows for planning, prediction, and detection of unexpected
events. We illustrate our proposal with the simulation of a simple neural
network model. We believe this stronger notion of representation could inform
future research in neuroscience and AI.
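A minimal toy sketch of the core idea -- inferring a latent structure in the world from conditional patterns of unit activation -- is given below. This is not the authors' simulation; the population sizes, noise level, and the use of k-means clustering are all assumptions made for illustration.

```python
# Toy sketch (assumed setup, not the paper's model): units respond to a hidden
# latent "structure" in the world; we recover that structure from the
# conditional pattern of unit co-activations alone.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
n_latents, n_units, n_steps = 3, 12, 5000   # arbitrary sizes

# Each latent state drives a characteristic activation pattern (hypothetical tuning).
tuning = rng.random((n_latents, n_units)) < 0.4

# Simulate: one latent state is active per step; unit responses are noisy.
latents = rng.integers(0, n_latents, size=n_steps)
noise = rng.random((n_steps, n_units)) < 0.05
activity = tuning[latents] ^ noise          # flip ~5% of bits

# Infer the latent structure by clustering the observed activation patterns.
_, labels = kmeans2(activity.astype(float), n_latents, minit="++", seed=0)

# Cluster purity against the true latent states (perfect recovery -> 1.0).
purity = sum(
    np.bincount(latents[labels == c], minlength=n_latents).max()
    for c in range(n_latents)
)
print(f"cluster purity vs. true latent states: {purity / n_steps:.2f}")
```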
Related papers
- Neuro-Symbolic AI: Explainability, Challenges, and Future Trends [26.656105779121308]
This article proposes a classification of explainability by considering both the model design and behavior of 191 studies published since 2013.
We classify them into five categories according to whether the form of bridging the representation differences is readable.
We put forward suggestions for future research in three aspects: unified representations, enhancing model explainability, and ethical considerations and social impact.
arXiv Detail & Related papers (2024-11-07T02:54:35Z) - Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units (a minimal phase-dynamics sketch follows after this list).
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play
Multi-Character Belief Tracker [72.09076317574238]
SymbolicToM is a plug-and-play approach to investigate the belief states of characters in reading comprehension.
We show that it enhances off-the-shelf neural networks' theory of mind in a zero-shot setting while showing robust out-of-distribution performance compared to supervised baselines.
arXiv Detail & Related papers (2023-06-01T17:24:35Z) - Efficient Symbolic Reasoning for Neural-Network Verification [48.384446430284676]
We present a novel program reasoning framework for neural-network verification.
The key components of our framework are the use of the symbolic domain and the quadratic relation.
We believe that our framework can bring new theoretical insights and practical tools to verification problems for neural networks.
arXiv Detail & Related papers (2023-03-23T18:08:11Z) - A Categorical Framework of General Intelligence [12.134564449202708]
Can machines think? Since Alan Turing asked this question in 1950, nobody has been able to give a direct answer.
We introduce a categorical framework towards this goal, with two main results.
arXiv Detail & Related papers (2023-03-08T13:37:01Z) - Emergence of Machine Language: Towards Symbolic Intelligence with Neural
Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z) - Explanatory models in neuroscience: Part 1 -- taking mechanistic
abstraction seriously [8.477619837043214]
Critics worry that neural network models fail to illuminate brain function.
We argue that certain kinds of neural network models are actually good examples of mechanistic models.
arXiv Detail & Related papers (2021-04-03T22:17:40Z) - Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and
Execution [97.50813120600026]
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI).
Recent works have focused on an abstract reasoning task of this kind -- Raven's Progressive Matrices (RPM).
We propose a neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner.
arXiv Detail & Related papers (2021-03-26T02:42:18Z) - Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z) - On the Road with 16 Neurons: Mental Imagery with Bio-inspired Deep
Neural Networks [4.888591558726117]
We propose a strategy for visual prediction in the context of autonomous driving.
We take inspiration from two theoretical ideas about the human mind and its neural organization.
We learn compact representations that use as few as 16 neural units for each of the two basic driving concepts we consider.
arXiv Detail & Related papers (2020-03-09T16:46:29Z)
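As a loose illustration of the 16-unit compact code mentioned in the last entry above, here is a generic bottleneck autoencoder in PyTorch; the input resolution, layer widths, and training objective are assumptions, not the authors' bio-inspired architecture.

```python
# Generic 16-unit bottleneck autoencoder (assumed architecture, for illustration only).
import torch
import torch.nn as nn

class TinyBottleneckAE(nn.Module):
    def __init__(self, in_dim=64 * 64, code_dim=16):   # 64x64 grayscale frames (assumed)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, code_dim),                   # 16-unit compact representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = TinyBottleneckAE()
frames = torch.rand(8, 64 * 64)                         # a batch of dummy frames
recon, code = model(frames)
loss = nn.functional.mse_loss(recon, frames)            # reconstruction objective
print(code.shape)                                       # torch.Size([8, 16])
```

The Artificial Kuramoto Oscillatory Neurons entry above builds on coupled phase oscillators; the classic Kuramoto dynamics it refers to can be simulated in a few lines. The coupling strength, step size, and frequency spread below are arbitrary assumptions, not the paper's parameterization.

```python
# Classic Kuramoto phase dynamics: dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
import numpy as np

rng = np.random.default_rng(1)
n, K, dt = 32, 1.5, 0.05
theta = rng.uniform(0, 2 * np.pi, n)    # oscillator phases
omega = rng.normal(0, 0.5, n)           # natural frequencies

for _ in range(2000):
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)  # mean_j sin(theta_j - theta_i)
    theta += dt * (omega + K * coupling)

# Order parameter r in [0, 1]: degree of synchronization across the population.
r = np.abs(np.exp(1j * theta).mean())
print(f"synchronization r = {r:.2f}")
```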