Symbol grounding in computational systems: A paradox of intentions
- URL: http://arxiv.org/abs/2505.00002v1
- Date: Wed, 26 Mar 2025 18:26:34 GMT
- Title: Symbol grounding in computational systems: A paradox of intentions
- Authors: Vincent C. Müller
- Abstract summary: The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is computing over meaningful symbols, its functioning presupposes the existence of meaningful symbols in the system. If the mind is computing over meaningless symbols, no intentional cognitive processes are available prior to symbol grounding.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper presents a paradoxical feature of computational systems that suggests that computationalism cannot explain symbol grounding. If the mind is a digital computer, as computationalism claims, then it can be computing either over meaningful symbols or over meaningless symbols. If it is computing over meaningful symbols, its functioning presupposes the existence of meaningful symbols in the system, i.e., it implies semantic nativism. If the mind is computing over meaningless symbols, no intentional cognitive processes are available prior to symbol grounding. In this case, no symbol grounding could take place, since any grounding presupposes intentional cognitive processes. So, whether computing in the mind is over meaningless or over meaningful symbols, computationalism implies semantic nativism.
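Read as a propositional dilemma, the abstract's argument can be sketched as follows. This is a minimal reconstruction: the letters (C for computationalism, M for computation over meaningful symbols, N for semantic nativism, I for intentional cognitive processes available prior to grounding, G for symbol grounding, S for the mind having semantic content at all) and the auxiliary premise P5 are glosses added here, not notation from the paper.

\begin{align*}
  &\text{P1: } C \rightarrow (M \lor \lnot M)   && \text{the symbols computed over are meaningful or not}\\
  &\text{P2: } M \rightarrow N                  && \text{meaningful symbols are presupposed, hence innate}\\
  &\text{P3: } \lnot M \rightarrow \lnot I      && \text{meaningless symbols, no prior intentional processes}\\
  &\text{P4: } G \rightarrow I                  && \text{grounding presupposes intentional processes}\\
  &\text{P5: } (S \land \lnot G) \rightarrow N  && \text{content never grounded must be innate (implicit)}\\
  &\text{From P3 and P4: } \lnot M \rightarrow \lnot G\\
  &\text{With P5 and } S\text{: } \lnot M \rightarrow N\\
  &\text{With P1 and P2: } C \rightarrow N      && \text{computationalism implies semantic nativism}
\end{align*}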
Related papers
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization. Here, we propose a definition, which we call representational compositionality, that accounts for and extends our intuitions about compositionality. We show how it unifies disparate intuitions from across the literature in both AI and cognitive science.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- Machine learning and information theory concepts towards an AI Mathematician [77.63761356203105]
The current state-of-the-art in artificial intelligence is impressive, especially in terms of mastery of language, but not so much in terms of mathematical reasoning.
This essay builds on the idea that current deep learning mostly succeeds at system 1 abilities.
It takes an information-theoretical posture to ask questions about what constitutes an interesting mathematical statement.
arXiv Detail & Related papers (2024-03-07T15:12:06Z)
- Symbol-LLM: Leverage Language Models for Symbolic System in Visual Human Activity Reasoning [58.5857133154749]
We propose a new symbolic system with broad-coverage symbols and rational rules.
We leverage the recent advancement of LLMs as an approximation of the two ideal properties.
Our method shows superiority in extensive activity understanding tasks.
arXiv Detail & Related papers (2023-11-29T05:27:14Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, acquired through unsupervised learning rather than by relying on pre-defined primitives.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- The Roles of Symbols in Neural-based AI: They are Not What You Think! [25.450989579215708]
We present a novel neuro-symbolic hypothesis and a plausible architecture for intelligent agents.
Our hypothesis and associated architecture imply that symbols will remain critical to the future of intelligent systems.
arXiv Detail & Related papers (2023-04-26T15:33:41Z)
- Deep Symbolic Learning: Discovering Symbols and Rules from Perceptions [69.40242990198]
Neuro-Symbolic (NeSy) integration combines symbolic reasoning with Neural Networks (NNs) for tasks requiring perception and reasoning.
Most NeSy systems rely on continuous relaxation of logical knowledge, and no discrete decisions are made within the model pipeline.
We propose a NeSy system that learns NeSy-functions, i.e., the composition of a (set of) perception functions which map continuous data to discrete symbols, and a symbolic function over the set of symbols.
arXiv Detail & Related papers (2022-08-24T14:06:55Z)
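The NeSy-function composition described in the entry above can be pictured with a short sketch. This is an illustrative outline only; the function names and the toy digit-addition setup are assumptions for the example, not code or terminology from the paper.

# Minimal sketch of a NeSy-function: a perception function maps continuous
# (raw) inputs to discrete symbols, and a symbolic function operates over
# those symbols. Names and the toy task are illustrative assumptions.
from typing import Callable, List

Symbol = int  # discrete symbols, represented here as integers

def make_nesy_function(perceive: Callable[[object], Symbol],
                       symbolic: Callable[[List[Symbol]], Symbol]
                       ) -> Callable[[List[object]], Symbol]:
    """Compose perception and symbolic reasoning into a single NeSy-function."""
    def nesy_fn(raw_inputs: List[object]) -> Symbol:
        symbols = [perceive(x) for x in raw_inputs]  # continuous data -> discrete symbols
        return symbolic(symbols)                     # symbolic function over the symbols
    return nesy_fn

if __name__ == "__main__":
    # Stand-ins: a trained digit classifier would play the role of `perceive`,
    # and addition is the symbolic rule, as in common NeSy benchmark tasks.
    perceive = lambda image: int(image)
    add = lambda symbols: sum(symbols)
    nesy_add = make_nesy_function(perceive, add)
    print(nesy_add([3, 5]))  # -> 8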
- Existence and perception as the basis of AGI (Artificial General Intelligence) [0.0]
AGI, unlike AI, should operate with meanings; this is what distinguishes it from AI.
For AGI, which emulates human thinking, this ability is crucial.
Numerous attempts to define the concept of "meaning" share one significant drawback: none of the definitions is strict and formalized, so they cannot be programmed.
arXiv Detail & Related papers (2022-01-30T14:06:43Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Philosophical Specification of Empathetic Ethical Artificial Intelligence [0.0]
An ethical AI must be capable of inferring unspoken rules, interpreting nuance and context, and inferring intent.
We use enactivism, semiotics, perceptual symbol systems and symbol emergence to specify an agent.
It has malleable intent because the meaning of symbols changes as it learns, and its intent is represented symbolically as a goal.
arXiv Detail & Related papers (2021-07-22T14:37:46Z)
- Representation in Dynamical Systems [0.0]
The brain is often called a computer and likened to a Turing machine.
This paper argues that the brain, viewed as a dynamical system, can nonetheless represent, although not in the way a digital computer does.
arXiv Detail & Related papers (2021-05-12T15:03:03Z)
- Symbolic Behaviour in Artificial Intelligence [8.849576130278157]
We argue that the path towards symbolically fluent AI begins with a reinterpretation of what symbols are.
We then outline how this interpretation unifies the behavioural traits humans exhibit when they use symbols.
We suggest that AI research explore social and cultural engagement as a tool to develop the cognitive machinery necessary for symbolic behaviour to emerge.
arXiv Detail & Related papers (2021-02-05T20:07:14Z)
- Fundamentals of Semantic Numeration Systems. Can the Context be Calculated? [91.3755431537592]
This work is the first to propose the concept of a semantic numeration system (SNS) as a certain class of context-based numeration methods.
The development of the SNS concept required the introduction of fundamentally new concepts.
arXiv Detail & Related papers (2021-01-30T21:54:59Z)