Talking Space: inference from spatial linguistic meanings
- URL: http://arxiv.org/abs/2109.06554v2
- Date: Thu, 16 Sep 2021 16:09:04 GMT
- Title: Talking Space: inference from spatial linguistic meanings
- Authors: Vincent Wang-Mascianica and Bob Coecke
- Abstract summary: This paper concerns the intersection of natural language and the physical space around us in which we live.
We propose a mechanism for how space and linguistic structure can be made to interact in a matching compositional fashion.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper concerns the intersection of natural language and the physical
space around us in which we live, and within which we observe and/or imagine things.
Many important features of language have spatial connotations; for example,
many prepositions (like in, next to, after, on, etc.) are fundamentally
spatial. Space is also a key factor in the meanings of many words, phrases,
sentences, and texts, and it is a key, if not the key, context for
referencing (e.g. pointing) and embodiment.
We propose a mechanism for how space and linguistic structure can be made to
interact in a matching compositional fashion. Examples include Cartesian space,
subway stations, chess pieces on a chessboard, and Penrose's staircase. The
starting point for our construction is the DisCoCat model of compositional
natural language meaning, which we relax to accommodate physical space. We
address the issue of having multiple agents/objects in a space, including the
case that each agent has different capabilities with respect to that space,
e.g., the specific moves each chess piece can make, or the different velocities
one may be able to reach.
Once our model is in place, we show how inferences drawing on the structure
of physical space can be made. We also show how linguistic models of space can
interact with other such models related to our senses and/or embodiment, such
as the conceptual spaces of colour, taste, and smell, resulting in a rich
compositional model of meaning that is close to human experience and embodiment
in the world.
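The abstract's idea of agents with different capabilities in a shared space can be illustrated with a toy sketch. This is not the paper's categorical (DisCoCat-based) construction; it is a minimal, purely illustrative model in which a space is a set of positions and each agent carries its own relation describing the moves it can make, from which simple spatial inferences follow.

```python
# Toy illustration (not the paper's construction): a space as a set of
# positions, with each agent's "capability" given by its own move relation.

BOARD = {(f, r) for f in range(8) for r in range(8)}  # 8x8 chessboard squares


def knight_moves(pos):
    """Squares a knight at `pos` can reach in one move."""
    f, r = pos
    deltas = [(1, 2), (2, 1), (-1, 2), (-2, 1),
              (1, -2), (2, -1), (-1, -2), (-2, -1)]
    return {(f + df, r + dr) for df, dr in deltas} & BOARD


def rook_moves(pos):
    """Squares a rook at `pos` can reach in one move (on an empty board)."""
    f, r = pos
    return ({(f, i) for i in range(8)} | {(i, r) for i in range(8)}) - {pos}


# Different agents inhabit the same space but have different capabilities:
AGENTS = {"knight": knight_moves, "rook": rook_moves}


def reachable_in_one(agent, start, target):
    """A simple spatial inference: can `agent` reach `target` from `start`?"""
    return target in AGENTS[agent](start)
```

The same question ("can X reach Y?") gets agent-dependent answers because the inference consults the agent's own move relation, mirroring the abstract's point that each agent has different capabilities with respect to the space.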
Related papers
- Perceptual Structure in the Absence of Grounding for LLMs: The Impact of Abstractedness and Subjectivity in Color Language [2.6094835036012864]
We show that there is considerable alignment between a defined color space and the feature space defined by a language model.
Our results show that while color space alignment holds for monolexemic, highly pragmatic color descriptions, this alignment drops considerably in the presence of examples that exhibit elements of real linguistic usage.
arXiv Detail & Related papers (2023-11-22T02:12:36Z)
- A Geometric Notion of Causal Probing [91.14470073637236]
In a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace.
We give a set of intrinsic criteria which characterize an ideal linear concept subspace.
We find that LEACE returns a one-dimensional subspace containing roughly half of total concept information.
arXiv Detail & Related papers (2023-07-27T17:57:57Z)
- Grounding Characters and Places in Narrative Texts [5.254909030032427]
We propose a new spatial relationship categorization task.
The objective of the task is to assign a spatial relationship category for every character and location co-mention within a window of text.
We train a model using contextual embeddings as features to predict these relationships.
arXiv Detail & Related papers (2023-05-27T19:31:41Z)
- Things not Written in Text: Exploring Spatial Commonsense from Visual Signals [77.46233234061758]
We investigate whether models with visual signals learn more spatial commonsense than text-based models.
We propose a benchmark that focuses on the relative scales of objects, and the positional relationship between people and objects under different actions.
We find that image synthesis models are more capable of learning accurate and consistent spatial knowledge than other models.
arXiv Detail & Related papers (2022-03-15T17:02:30Z)
- Natural Language and Spatial Rules [78.20667552233989]
We develop a system that formally represents spatial semantics concepts within natural language descriptions of spatial arrangements.
We combine our system with the shape grammar formalism that uses shape rules to generate languages (sets) of two-dimensional shapes.
We present various types of natural language descriptions of shapes that are successfully parsed by our system and we discuss open questions and challenges we see at the interface of language and perception.
arXiv Detail & Related papers (2021-11-28T07:18:11Z)
- Low-Dimensional Structure in the Space of Language Representations is Reflected in Brain Responses [62.197912623223964]
We show a low-dimensional structure where language models and translation models smoothly interpolate between word embeddings, syntactic and semantic tasks, and future word embeddings.
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
This suggests that the embedding captures some part of the brain's natural language representation structure.
arXiv Detail & Related papers (2021-06-09T22:59:12Z)
- From Spatial Relations to Spatial Configurations [64.21025426604274]
A spatial relation language can represent a large, comprehensive set of spatial concepts crucial for reasoning.
We show how we extend the capabilities of existing spatial representation languages with a fine-grained decomposition of semantics.
arXiv Detail & Related papers (2020-07-19T02:11:53Z)
- Understanding Spatial Relations through Multiple Modalities [78.07328342973611]
Spatial relations between objects can be either explicit, expressed as spatial prepositions, or implicit, expressed by spatial verbs such as moving, walking, shifting, etc.
We introduce the task of inferring implicit and explicit spatial relations between two entities in an image.
We design a model that uses both textual and visual information to predict the spatial relations, making use of both positional and size information of objects and image embeddings.
arXiv Detail & Related papers (2020-07-19T01:35:08Z)
- Cultural Cartography with Word Embeddings [0.0]
We show how word embeddings are commensurate with prevailing theories of meaning in sociology.
First, one can hold terms constant and measure how the embedding space moves around them.
Second, one can also hold the embedding space constant and see how documents or authors move relative to it.
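The two comparisons just described can be sketched concretely. The vectors below are made-up toy values, not trained embeddings, and the "affluence axis" is a hypothetical example of a cultural dimension; the point is only the mechanics: (1) compare a fixed term's vector across two embedding spaces, and (2) project documents onto a fixed direction within one space.

```python
# Toy sketch of the two embedding-based comparisons; all vectors are
# made-up illustrative values, not real trained embeddings.
import math


def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return dot / (norm_u * norm_v)


# (1) Hold the term constant: compare its vector across two embedding
#     spaces trained on corpora from different periods (toy values).
term_1990 = [0.9, 0.1, 0.0]
term_2020 = [0.5, 0.6, 0.2]
drift = 1 - cosine(term_1990, term_2020)  # higher = more semantic movement

# (2) Hold the space constant: locate documents relative to a fixed
#     cultural direction (a hypothetical rich-minus-poor "affluence" axis).
axis = [1.0, -1.0, 0.0]
doc_a = [0.8, -0.2, 0.1]   # toy document vector
doc_b = [-0.3, 0.7, 0.2]   # toy document vector
proj_a = cosine(doc_a, axis)  # position of doc_a on the axis
proj_b = cosine(doc_b, axis)  # position of doc_b on the axis
```

In real applications the term and document vectors would come from embeddings trained on the corpora of interest, and the two spaces in (1) would typically be aligned (e.g. by an orthogonal Procrustes rotation) before comparison.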
arXiv Detail & Related papers (2020-07-09T01:58:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.