Language, Environment, and Robotic Navigation
- URL: http://arxiv.org/abs/2404.03049v1
- Date: Wed, 3 Apr 2024 20:30:38 GMT
- Title: Language, Environment, and Robotic Navigation
- Authors: Johnathan E. Avery
- Abstract summary: We propose a unified framework where language functions as an abstract communicative system and as a grounded representation of perceptual experiences.
Our review of cognitive models of distributional semantics and their application to autonomous agents underscores the transformative potential of language-integrated systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper explores the integration of linguistic inputs within robotic navigation systems, drawing upon the symbol interdependency hypothesis to bridge the divide between symbolic and embodied cognition. It examines previous work incorporating language and semantics into Neural Network (NN) and Simultaneous Localization and Mapping (SLAM) approaches, highlighting how these integrations have advanced the field. By contrasting abstract symbol manipulation with sensory-motor grounding, we propose a unified framework where language functions both as an abstract communicative system and as a grounded representation of perceptual experiences. Our review of cognitive models of distributional semantics and their application to autonomous agents underscores the transformative potential of language-integrated systems.
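As background for the distributional-semantics models the abstract refers to, the core idea is that word meaning can be derived from co-occurrence statistics: words appearing in similar contexts receive similar vectors. A minimal illustrative sketch (a toy example, not code from the paper):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two co-occurrence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy co-occurrence counts over the contexts ["move", "turn", "wall", "door"]:
vectors = {
    "navigate": [4, 3, 2, 1],
    "steer":    [3, 4, 1, 0],
    "paint":    [0, 1, 0, 0],
}

# Words used in similar contexts end up with similar vectors,
# so "navigate" scores closer to "steer" than to "paint".
sim_steer = cosine_similarity(vectors["navigate"], vectors["steer"])
sim_paint = cosine_similarity(vectors["navigate"], vectors["paint"])
```

In a language-integrated navigation system, such similarity structure is what lets linguistic input stand in for, and generalize over, perceptual experience.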
Related papers
- VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning [86.59849798539312]
We present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations.
We show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
arXiv Detail & Related papers (2024-10-30T16:11:05Z)
- Latent Space Alignment for Semantic Channel Equalization [3.266331042379877]
We relax the constraint of a shared language between agents in a semantic and goal-oriented communication system.
We propose a mathematical framework, which provides a modelling and a measure of the semantic distortion introduced in the communication when agents use distinct languages.
arXiv Detail & Related papers (2024-05-22T10:12:32Z)
- Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned in an unsupervised manner rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates both symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
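The notion of continuous states settling into discrete basins can be illustrated with a classic Hopfield-style network, in which stored patterns act as attractors. This is a generic sketch of attractor dynamics, not the model from the paper:

```python
import numpy as np

# Two stored patterns, each playing the role of a discrete "symbol".
patterns = np.array([
    [1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1],
])

# Hebbian weight matrix with a zeroed diagonal.
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

def settle(state, steps=10):
    """Iterate the sign-update dynamics; ties keep the current value."""
    s = state.copy()
    for _ in range(steps):
        h = W @ s
        s = np.where(h > 0, 1, np.where(h < 0, -1, s))
    return s

# A corrupted version of pattern 0 falls back into its basin:
noisy = np.array([1, -1, 1, -1, 1, 1])  # last element flipped
recovered = settle(noisy)
```

Each basin of attraction discretizes a region of the continuous state space into one symbol, which is the intuition behind the segmentation described above.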
arXiv Detail & Related papers (2023-10-03T05:40:56Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- Models of symbol emergence in communication: a conceptual review and a guide for avoiding local minima [0.0]
Computational simulations are a popular method for testing hypotheses about the emergence of communication.
We identify the assumptions and explanatory targets of several most representative models and summarise the known results.
In line with this perspective, we sketch the road towards modelling the emergence of meaningful symbolic communication.
arXiv Detail & Related papers (2023-03-08T12:53:03Z)
- Imitation Learning-based Implicit Semantic-aware Communication Networks: Multi-layer Representation and Collaborative Reasoning [68.63380306259742]
Despite their promising potential, semantic communications and semantic-aware networking are still in their infancy.
We propose a novel reasoning-based implicit semantic-aware communication network architecture that allows multiple tiers of CDC and edge servers to collaborate.
We introduce a new multi-layer representation of semantic information taking into consideration both the hierarchical structure of implicit semantics as well as the personalized inference preference of individual users.
arXiv Detail & Related papers (2022-10-28T13:26:08Z)
- Emergence of Shared Sensory-motor Graphical Language from Visual Input [22.23299485364174]
We introduce the Graphical Referential Game (GREG) where a speaker must produce a graphical utterance to name a visual referent object.
The utterances are drawing images produced using dynamical motor primitives combined with a sketching library.
We show that our method allows the emergence of a shared, graphical language with compositional properties.
arXiv Detail & Related papers (2022-10-03T17:11:18Z)
- Compositional Generalization in Grounded Language Learning via Induced Model Sparsity [81.38804205212425]
We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations.
We design an agent that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal.
Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations.
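The idea of composing sparse word-attribute correlations to identify a goal can be sketched with a toy example (hypothetical names and structures, not the paper's implementation):

```python
# Hypothetical sparse word -> attribute associations: each instruction
# word correlates with only a small set of object attributes.
word_attributes = {
    "red":  {"color:red"},
    "blue": {"color:blue"},
    "ball": {"shape:ball"},
    "box":  {"shape:box"},
}

def goal_score(instruction, obj_attributes):
    """Compose per-word matches: count instruction words whose sparse
    attribute set intersects the object's attributes."""
    words = instruction.lower().split()
    return sum(
        1 for w in words
        if word_attributes.get(w, set()) & obj_attributes
    )

objects = {
    "obj1": {"color:red", "shape:ball"},
    "obj2": {"color:blue", "shape:ball"},
    "obj3": {"color:red", "shape:box"},
}

# "red ball" composes two attribute constraints; only obj1 satisfies both.
best = max(objects, key=lambda o: goal_score("red ball", objects[o]))
```

Because each word touches few attributes, novel combinations such as "blue box" compose from parts seen separately during training, which is the generalization behaviour the summary describes.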
arXiv Detail & Related papers (2022-07-06T08:46:27Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
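One standard way to derive a discrete representation from a continuous neural state is nearest-neighbour codebook lookup, as in vector quantisation. The following is an illustration of that general technique, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 discrete symbols, 4-dim embeddings

def discretize(z):
    """Map a continuous vector to the index of its nearest codebook entry."""
    distances = np.linalg.norm(codebook - z, axis=1)
    return int(np.argmin(distances))

z = rng.normal(size=4)               # a continuous hidden state
symbol = discretize(z)               # its discrete "message" token
```

The codebook index can then serve as a discrete token in an emergent communication protocol between agents.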
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Explainable Semantic Space by Grounding Language to Vision with Cross-Modal Contrastive Learning [3.441021278275805]
We design a two-stream model for grounding language learning in vision.
The model first learns to align visual and language representations with the MS COCO dataset.
After training, the language stream of this model is a stand-alone language model capable of embedding concepts in a visually grounded semantic space.
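Cross-modal contrastive alignment of the kind described above is commonly trained with an InfoNCE-style objective: matched image/text pairs should score higher than mismatched ones. A minimal NumPy sketch of the general technique (names and hyperparameters are assumptions, not the paper's exact setup):

```python
import numpy as np

def info_nce_loss(image_emb, text_emb, temperature=0.1):
    """Symmetric contrastive loss: matched image/text pairs (same row)
    should score higher than all mismatched pairs."""
    # L2-normalise both streams.
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # pairwise similarities
    n = logits.shape[0]
    # Cross-entropy with the diagonal as the correct class, in both directions.
    log_p_i2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_t2i = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return -(np.trace(log_p_i2t) + np.trace(log_p_t2i)) / (2 * n)

# Perfectly aligned embeddings give a lower loss than shuffled ones.
rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 8))
aligned_loss = info_nce_loss(emb, emb)
shuffled_loss = info_nce_loss(emb, emb[::-1])
```

Minimising such a loss pulls each caption toward its image in the shared space, which is what leaves the language stream with a visually grounded semantic space after training.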
arXiv Detail & Related papers (2021-11-13T19:54:15Z)
- Crossmodal Language Grounding in an Embodied Neurocognitive Model [28.461246169379685]
Human infants are able to acquire natural language seemingly easily at an early age.
From a neuroscientific perspective, natural language is embodied, grounded in most, if not all, sensory and sensorimotor modalities.
We present a neurocognitive model for language grounding which reflects bio-inspired mechanisms.
arXiv Detail & Related papers (2020-06-24T08:12:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.