Enriching Word Embeddings with Temporal and Spatial Information
- URL: http://arxiv.org/abs/2010.00761v1
- Date: Fri, 2 Oct 2020 03:15:03 GMT
- Title: Enriching Word Embeddings with Temporal and Spatial Information
- Authors: Hongyu Gong, Suma Bhat, Pramod Viswanath
- Abstract summary: We present a model for learning word representation conditioned on time and location.
We train our model on time- and location-stamped corpora, and show using both quantitative and qualitative evaluations that it can capture semantics across time and locations.
- Score: 37.0220769789037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The meaning of a word is closely linked to sociocultural factors that can
change over time and location, resulting in corresponding meaning changes.
Taking a global view of words and their meanings in a widely used language,
such as English, may require us to capture more refined semantics for use in
time-specific or location-aware situations, such as the study of cultural
trends or language use. However, popular vector representations for words do
not adequately include temporal or spatial information. In this work, we
present a model for learning word representation conditioned on time and
location. In addition to capturing meaning changes over time and location, we
require that the resulting word embeddings retain salient semantic and
geometric properties. We train our model on time- and location-stamped corpora,
and show using both quantitative and qualitative evaluations that it can
capture semantics across time and locations. We note that our model compares
favorably with the state-of-the-art for time-specific embedding, and serves as
a new benchmark for location-specific embeddings.
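The abstract describes the model only at a high level. As a rough sketch of what learning word representations conditioned on time and location can look like, the toy NumPy code below trains a skip-gram-style model with negative sampling in which a word's vector for a given (time, location) slice is its global vector plus learned per-time and per-location offsets. The corpus format, the additive offset decomposition, and all hyperparameters are assumptions made purely for illustration; this is not the authors' formulation, which additionally requires the embeddings to retain salient semantic and geometric properties.

# Hypothetical sketch (not the paper's released code): skip-gram with negative
# sampling where a word's slice-specific vector is its global vector plus
# learned time and location offsets. All names and settings are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy time- and location-stamped corpus: (tokens, time_id, location_id)
corpus = [
    (["gay", "happy", "cheerful"], 0, 0),   # e.g. early-1900s, UK
    (["gay", "rights", "marriage"], 1, 1),  # e.g. 2000s, US
]
vocab = sorted({w for toks, _, _ in corpus for w in toks})
w2i = {w: i for i, w in enumerate(vocab)}
V, T, L, D = len(vocab), 2, 2, 16

W_in = rng.normal(0, 0.1, (V, D))    # global word vectors
W_out = rng.normal(0, 0.1, (V, D))   # context (output) vectors
B_time = np.zeros((T, V, D))         # per-time offsets
B_loc = np.zeros((L, V, D))          # per-location offsets

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n_neg, window = 0.05, 2, 2
for _ in range(200):
    for tokens, t, l in corpus:
        ids = [w2i[w] for w in tokens]
        for i, c in enumerate(ids):
            for j in range(max(0, i - window), min(len(ids), i + window + 1)):
                if i == j:
                    continue
                ctx = ids[j]
                # conditioned vector = global + time offset + location offset
                v = W_in[c] + B_time[t, c] + B_loc[l, c]
                # one positive pair plus a few random negatives (may collide; fine for a toy)
                pairs = [(ctx, 1.0)] + [(int(rng.integers(V)), 0.0) for _ in range(n_neg)]
                for o, label in pairs:
                    g = sigmoid(v @ W_out[o]) - label
                    grad_v = g * W_out[o]
                    W_out[o] -= lr * g * v
                    W_in[c] -= lr * grad_v
                    B_time[t, c] -= lr * grad_v
                    B_loc[l, c] -= lr * grad_v

def embed(word, t, l):
    """Time- and location-conditioned vector for a word."""
    i = w2i[word]
    return W_in[i] + B_time[t, i] + B_loc[l, i]

Comparing embed("gay", 0, 0) with embed("gay", 1, 1), for instance via nearest neighbours within each slice, is the kind of qualitative time- and location-specific comparison the abstract alludes to.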
Related papers
- Domain-specific long text classification from sparse relevant information [3.3611255314174815]
We propose a hierarchical model which exploits a short list of potential target terms to retrieve candidate sentences.
The document representation to be classified is obtained by pooling the embedding(s) of the retrieved term(s).
We show that our narrower hierarchical model is better than larger language models for retrieving relevant long documents in a domain-specific context.
arXiv Detail & Related papers (2024-08-23T17:54:19Z)
- Survey in Characterization of Semantic Change [0.1474723404975345]
Understanding the meaning of words is vital for interpreting texts from different cultures.
Semantic changes can potentially impact the quality of the outcomes of computational linguistics algorithms.
arXiv Detail & Related papers (2024-02-29T12:13:50Z)
- Unsupervised Semantic Variation Prediction using the Distribution of Sibling Embeddings [17.803726860514193]
Detection of semantic variation of words is an important task for various NLP applications.
We argue that mean representations alone cannot accurately capture such semantic variations.
We propose a method that uses the entire cohort of the contextualised embeddings of the target word.
arXiv Detail & Related papers (2023-05-15T13:58:21Z)
- Variational Cross-Graph Reasoning and Adaptive Structured Semantics Learning for Compositional Temporal Grounding [143.5927158318524]
Temporal grounding is the task of locating a specific segment from an untrimmed video according to a query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We argue that the inherent structured semantics within videos and language is the crucial factor for achieving compositional generalization.
arXiv Detail & Related papers (2023-01-22T08:02:23Z)
- Compositional Temporal Grounding with Structured Variational Cross-Graph Correspondence Learning [92.07643510310766]
Temporal grounding in videos aims to localize one target video segment that semantically corresponds to a given query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We empirically find that existing methods fail to generalize to queries with novel combinations of seen words.
We propose a variational cross-graph reasoning framework that explicitly decomposes video and language into multiple structured hierarchies.
arXiv Detail & Related papers (2022-03-24T12:55:23Z)
- Time Masking for Temporal Language Models [23.08079115356717]
We propose a temporal contextual language model called TempoBERT, which uses time as an additional context of texts.
Our technique is based on modifying texts with temporal information and performing time masking, i.e., masking specific to the supplementary time information (a toy sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-10-12T21:15:23Z)
- Lexical semantic change for Ancient Greek and Latin [61.69697586178796]
Associating a word's correct meaning in its historical context is a central challenge in diachronic research.
We build on a recent computational approach to semantic change based on a dynamic Bayesian mixture model.
We provide a systematic comparison of dynamic Bayesian mixture models for semantic change with state-of-the-art embedding-based models.
arXiv Detail & Related papers (2021-01-22T12:04:08Z)
- Understanding Spatial Relations through Multiple Modalities [78.07328342973611]
Spatial relations between objects can be either explicit, expressed as spatial prepositions, or implicit, expressed by spatial verbs such as moving, walking, and shifting.
We introduce the task of inferring implicit and explicit spatial relations between two entities in an image.
We design a model that uses both textual and visual information to predict the spatial relations, making use of both positional and size information of objects and image embeddings.
arXiv Detail & Related papers (2020-07-19T01:35:08Z)
- Cultural Cartography with Word Embeddings [0.0]
We show how word embeddings are commensurate with prevailing theories of meaning in sociology.
First, one can hold terms constant and measure how the embedding space moves around them.
Second, one can also hold the embedding space constant and see how documents or authors move relative to it.
arXiv Detail & Related papers (2020-07-09T01:58:28Z)
- Local-Global Video-Text Interactions for Temporal Grounding [77.5114709695216]
This paper addresses the problem of text-to-video temporal grounding, which aims to identify the time interval in a video semantically relevant to a text query.
We tackle this problem using a novel regression-based model that learns to extract a collection of mid-level features for semantic phrases in a text query.
The proposed method effectively predicts the target time interval by exploiting contextual information from local to global.
arXiv Detail & Related papers (2020-04-16T08:10:41Z)
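The "Time Masking for Temporal Language Models" entry above describes TempoBERT's mechanism in a single sentence. The toy sketch below is not the TempoBERT implementation; it only illustrates the stated idea of attaching a time stamp to a text as an extra token and occasionally masking it, using plain Python strings and hypothetical token names (<1900s>, [MASK]).

# Toy illustration (not the TempoBERT code) of time masking as described above:
# prepend a time token to each sentence and sometimes replace it with a mask,
# so a masked-language model must infer the writing time from the text itself.
import random

def add_time_token(sentence: str, year_bucket: str) -> str:
    """Attach the document's time stamp as an extra pseudo-token."""
    return f"<{year_bucket}> {sentence}"

def time_mask(text_with_time: str, p: float = 0.5) -> str:
    """With probability p, replace the leading time token with a mask token."""
    if random.random() < p:
        _, rest = text_with_time.split(" ", 1)
        return f"[MASK] {rest}"
    return text_with_time

example = add_time_token("the gay nineties were a prosperous decade", "1900s")
print(time_mask(example))  # either '<1900s> the gay ...' or '[MASK] the gay ...'

In an actual masked-language-modelling setup, the masked time token would become an additional prediction target alongside ordinary masked words.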
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.