Language (Re)modelling: Towards Embodied Language Understanding
- URL: http://arxiv.org/abs/2005.00311v2
- Date: Thu, 9 Jul 2020 12:53:34 GMT
- Title: Language (Re)modelling: Towards Embodied Language Understanding
- Authors: Ronen Tamari, Chen Shani, Tom Hope, Miriam R. L. Petruck, Omri Abend,
Dafna Shahaf
- Abstract summary: This work proposes an approach to representation and learning based on the tenets of embodied cognitive linguistics (ECL).
According to ECL, natural language is inherently executable (like programming languages).
This position paper argues that the use of grounding by metaphoric inference and simulation will greatly benefit NLU systems.
- Score: 33.50428967270188
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While natural language understanding (NLU) is advancing rapidly, today's
technology differs from human-like language understanding in fundamental ways,
notably in its inferior efficiency, interpretability, and generalization. This
work proposes an approach to representation and learning based on the tenets of
embodied cognitive linguistics (ECL). According to ECL, natural language is
inherently executable (like programming languages), driven by mental simulation
and metaphoric mappings over hierarchical compositions of structures and
schemata learned through embodied interaction. This position paper argues that
the use of grounding by metaphoric inference and simulation will greatly
benefit NLU systems, and proposes a system architecture along with a roadmap
towards realizing this vision.
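To make the "inherently executable" claim concrete, the following minimal sketch treats a sentence as a program over embodied schemata (a CONTAINER schema plus a source-path-goal transfer) and "understands" it by running the simulation those schemata license. The `Container` class, the `transfer` function, and the hand-coded sentence-to-simulation mapping are hypothetical illustrations, not the architecture proposed in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    """CONTAINER schema: an object with an interior that can hold things."""
    name: str
    contents: set = field(default_factory=set)

def transfer(thing: str, source: "Container", goal: "Container") -> None:
    """Source-path-goal schema: move `thing` from one container to another."""
    if thing in source.contents:
        source.contents.discard(thing)
        goal.contents.add(thing)

# Hand-coded stand-in for learned metaphoric mappings: the sentence
# "Mary poured the water from the jug into the bowl" is mapped onto
# a transfer event between two CONTAINER instances.
jug = Container("jug", contents={"water"})
bowl = Container("bowl")
transfer("water", source=jug, goal=bowl)

# "Understanding" = querying the simulated world state, not matching surface text.
assert "water" in bowl.contents and "water" not in jug.contents
print(f"jug: {jug.contents}, bowl: {bowl.contents}")
```

Inference about the sentence (e.g., where the water ends up) then reduces to reading off the state of the simulation, which is the kind of grounding by simulation the abstract advocates.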
Related papers
- Constructive Approach to Bidirectional Causation between Qualia Structure and Language Emergence [5.906966694759679]
This paper presents a novel perspective on the bidirectional causation between language emergence and relational structure of subjective experiences.
We hypothesize that languages with distributional semantics, e.g., syntactic-semantic structures, may have emerged through the process of aligning internal representations among individuals.
arXiv Detail & Related papers (2024-09-14T11:03:12Z)
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by the resulting LLM-based symbolic program (LSP) is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and to other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
- Interpretability of Language Models via Task Spaces [14.543168558734001]
We present an alternative approach to interpreting language models (LMs).
We focus on the quality of LM processing, in particular on their language abilities.
We construct 'linguistic task spaces' that shed light on the connections LMs draw between language phenomena.
arXiv Detail & Related papers (2024-06-10T16:34:30Z)
- Hierarchical Text-to-Vision Self Supervised Alignment for Improved Histopathology Representation Learning [64.1316997189396]
We present a novel language-tied self-supervised learning framework, Hierarchical Language-tied Self-Supervision (HLSS), for histopathology images.
Our resulting model achieves state-of-the-art performance on two medical imaging benchmarks, the OpenSRH and TCGA datasets.
arXiv Detail & Related papers (2024-03-21T17:58:56Z)
- Language Evolution with Deep Learning [49.879239655532324]
Computational modeling plays an essential role in the study of language emergence.
It aims to simulate the conditions and learning processes that could trigger the emergence of a structured language.
This chapter explores another class of computational models that have recently revolutionized the field of machine learning: deep learning models.
arXiv Detail & Related papers (2024-03-18T16:52:54Z)
- From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
arXiv Detail & Related papers (2023-06-22T05:14:00Z)
- Embodied Concept Learner: Self-supervised Learning of Concepts and Mapping through Instruction Following [101.55727845195969]
We propose Embodied Concept Learner (ECL) in an interactive 3D environment.
A robot agent can ground visual concepts, build semantic maps and plan actions to complete tasks.
ECL is fully transparent and step-by-step interpretable in long-term planning.
arXiv Detail & Related papers (2023-04-07T17:59:34Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- Ecological Semantics: Programming Environments for Situated Language Understanding [25.853707930426175]
Grounded language learning approaches offer the promise of deeper understanding by situating learning in richer, more structured training environments.
We propose treating environments as "first-class citizens" in semantic representations.
We argue that models must begin to understand and program in the language of affordances (see the sketch after this list).
arXiv Detail & Related papers (2020-03-10T08:24:41Z)
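As a companion to the Ecological Semantics entry above, here is a minimal sketch of what "programming in the language of affordances" might look like, with the environment exposed as a queryable, executable object. The `EnvObject` class, the affordance table, and the verb-matching grounding step are hypothetical illustrations, not the representation proposed in that paper.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class EnvObject:
    """An environment object exposed through the actions it affords."""
    name: str
    state: Dict[str, bool] = field(default_factory=dict)
    affordances: Dict[str, Callable[["EnvObject"], None]] = field(default_factory=dict)

def open_obj(obj: EnvObject) -> None:
    obj.state["open"] = True

def lock_obj(obj: EnvObject) -> None:
    obj.state["locked"] = True

# The environment is a "first-class citizen": the agent can ask what an object
# affords and ground an instruction by selecting a matching affordance.
door = EnvObject(
    name="door",
    state={"open": False, "locked": False},
    affordances={"open": open_obj, "lock": lock_obj},
)

instruction = "open the door"
verb = instruction.split()[0]
if verb in door.affordances:      # grounding = finding an affordance that matches
    door.affordances[verb](door)  # executing it updates the simulated environment

print(door.state)  # {'open': True, 'locked': False}
```

Richer versions would learn both the affordance inventory and the grounding step rather than hard-coding them.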
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.