Quantum Contextuality for Contextual Word Embeddings
- URL: http://arxiv.org/abs/2504.13824v1
- Date: Fri, 18 Apr 2025 17:53:48 GMT
- Title: Quantum Contextuality for Contextual Word Embeddings
- Authors: Karl Svozil
- Abstract summary: We propose an alternative framework utilizing quantum contextuality. Words are encoded as single, static vectors within a Hilbert space. A word vector acquires its specific semantic meaning based on the basis (context) it occupies.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional word-to-vector embeddings face challenges in representing polysemy, where word meaning is context-dependent. While dynamic embeddings address this, we propose an alternative framework utilizing quantum contextuality. In this approach, words are encoded as single, static vectors within a Hilbert space. Language contexts are formalized as maximal observables, mathematically equivalent to orthonormal bases. A word vector acquires its specific semantic meaning based on the basis (context) it occupies, leveraging the quantum concept of intertwining contexts where a single vector can belong to multiple, mutually complementary bases. This method allows meaning to be constructed through orthogonality relationships inherent in the contextual structure, potentially offering a novel way to statically encode contextual semantics.
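For illustration, a minimal sketch of the intertwining-bases idea in plain linear algebra; the specific vectors are an invented toy example, not taken from the paper:

```python
import numpy as np

# Toy "intertwining contexts": two orthonormal bases of R^3 that share
# the vector w.  The shared vector models a polysemous word: the vector
# itself is static, but its reading depends on which basis (context)
# it is taken to be a member of.
w = np.array([1.0, 0.0, 0.0])                # static word vector

# Context A: basis {w, a2, a3}
a2 = np.array([0.0, 1.0, 0.0])
a3 = np.array([0.0, 0.0, 1.0])

# Context B: basis {w, b2, b3} -- same w, different orthogonal complement
b2 = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
b3 = np.array([0.0, 1.0, -1.0]) / np.sqrt(2)

for name, basis in [("A", [w, a2, a3]), ("B", [w, b2, b3])]:
    B = np.stack(basis)
    assert np.allclose(B @ B.T, np.eye(3))   # each context is orthonormal
    print(f"context {name}: coordinates of w =", B @ w)
```

The shared vector `w` is a legitimate member of both bases while the orthogonal complements differ, which is the intertwining structure the abstract appeals to: the meaning of `w` would be read off from the orthogonality relations of whichever context it occupies.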
Related papers
- Domain Embeddings for Generating Complex Descriptions of Concepts in Italian Language [65.268245109828]
We propose a Distributional Semantic resource enriched with linguistic and lexical information extracted from electronic dictionaries.
The resource comprises 21 domain-specific matrices, one comprehensive matrix, and a Graphical User Interface.
Our model facilitates the generation of reasoned semantic descriptions of concepts by selecting matrices directly associated with concrete conceptual knowledge.
arXiv Detail & Related papers (2024-02-26T15:04:35Z)
- Multi-Relational Hyperbolic Word Embeddings from Natural Language Definitions [5.763375492057694]
This paper presents a multi-relational model that explicitly leverages the structure of natural language definitions to derive word embeddings.
An empirical analysis demonstrates that the framework can help impose the desired structural constraints.
Experiments reveal the superiority of the hyperbolic word embeddings over their Euclidean counterparts.
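For intuition, a hedged sketch of the standard Poincaré-ball distance that hyperbolic embeddings of this kind typically rely on; the formula is the usual closed form, but the points are invented:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance in the Poincare ball (points with norm < 1)."""
    uu, vv = np.dot(u, u), np.dot(v, v)
    duv = np.dot(u - v, u - v)
    # standard closed form: arcosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2)))
    return np.arccosh(1.0 + 2.0 * duv / ((1.0 - uu) * (1.0 - vv) + eps))

# toy points: near the boundary, small Euclidean gaps become large,
# which is what lets hyperbolic space encode tree-like hierarchies
u, v = np.array([0.0, 0.1]), np.array([0.0, 0.2])
u_edge, v_edge = np.array([0.0, 0.90]), np.array([0.0, 0.99])
print(poincare_distance(u, v), poincare_distance(u_edge, v_edge))
```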
arXiv Detail & Related papers (2023-05-12T08:16:06Z)
- Linear Spaces of Meanings: Compositional Structures in Vision-Language Models [110.00434385712786]
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs).
We first present a framework for understanding compositional structures from a geometric perspective.
We then explain what these structures entail probabilistically in the case of VLM embeddings, providing intuitions for why they arise in practice.
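As a rough illustration of what a geometric compositional structure can mean, a toy check of near-additive composition; this is not the paper's actual method, and all vectors are synthetic stand-ins for a VLM text encoder such as CLIP:

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(x, y):
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

# Synthetic concept embeddings; the composite is built by hand to be
# near-additive, mimicking the structure such analyses probe for.
blue = rng.normal(size=512)
car = rng.normal(size=512)
blue_car = blue + car + 0.1 * rng.normal(size=512)

# Geometric test: does the sum of concept vectors point toward the
# embedding of the composite concept?
print("cosine(blue + car, blue_car) =", cosine(blue + car, blue_car))
# vs. an unrelated control direction
print("cosine(random, blue_car)     =", cosine(rng.normal(size=512), blue_car))
```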
arXiv Detail & Related papers (2023-02-28T08:11:56Z)
- SensePOLAR: Word sense aware interpretability for pre-trained contextual word embeddings [4.479834103607384]
Adding interpretability to word embeddings represents an area of active research in text representation.
We present SensePOLAR, an extension of the original POLAR framework that enables word-sense aware interpretability for pre-trained contextual word embeddings.
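A hedged sketch of the POLAR-style mechanics underneath: rating an embedding along interpretable axes spanned by antonym pairs. This is a simplification; SensePOLAR itself is additionally word-sense aware, and real inputs would come from a contextual encoder such as BERT rather than the random vectors below:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 768

# Hypothetical embeddings; in practice one vector per word in context,
# extracted from a pre-trained contextual encoder.
emb = {w: rng.normal(size=dim) for w in ["hot", "cold", "good", "bad", "coffee"]}

# POLAR-style interpretable axes: difference vectors of antonym pairs.
axes = {
    "hot-cold": emb["hot"] - emb["cold"],
    "good-bad": emb["good"] - emb["bad"],
}

word = emb["coffee"]
for name, axis in axes.items():
    score = float(word @ axis / np.linalg.norm(axis))
    print(f"{name}: {score:+.3f}")   # signed coordinate on the polar scale
```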
arXiv Detail & Related papers (2023-01-11T20:25:53Z)
- Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings [11.475144702935568]
We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.
We find that word representations are position-biased: words appearing early in different contexts tend to receive more similar representations.
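A minimal sketch of the kind of measurement involved, under the assumption that contextual vectors of one word sense have been collected; the vectors here are synthetic:

```python
import numpy as np

def mean_pairwise_cosine(vectors):
    """Average pairwise cosine similarity of a set of contextual vectors."""
    V = np.stack([v / np.linalg.norm(v) for v in vectors])
    sims = V @ V.T
    n = len(V)
    return (sims.sum() - n) / (n * (n - 1))   # exclude self-similarity

# Synthetic stand-in: contextual embeddings of the same word sense,
# one per context (in practice extracted from a pre-trained model).
rng = np.random.default_rng(2)
base = rng.normal(size=768)
contexts = [base + 0.5 * rng.normal(size=768) for _ in range(20)]
print("sense-wise consistency:", mean_pairwise_cosine(contexts))
```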
arXiv Detail & Related papers (2022-08-20T12:27:25Z)
- Compositional Temporal Grounding with Structured Variational Cross-Graph Correspondence Learning [92.07643510310766]
Temporal grounding in videos aims to localize one target video segment that semantically corresponds to a given query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We empirically find that existing methods fail to generalize to queries with novel combinations of seen words.
We propose a variational cross-graph reasoning framework that explicitly decomposes video and language into multiple structured hierarchies.
arXiv Detail & Related papers (2022-03-24T12:55:23Z)
- On the Quantum-like Contextuality of Ambiguous Phrases [2.6381163133447836]
We show that meaning combinations in ambiguous phrases can be modelled in the sheaf-theoretic framework for quantum contextuality.
Using the framework of Contextuality-by-Default (CbD), we explore probabilistic variants of these models and show that CbD-contextuality is also possible.
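For a concrete feel for CbD, a sketch of the standard Kujala-Dzhafarov criterion for cyclic systems: a rank-n system is contextual when s_odd of the product expectations exceeds n - 2 plus the inconsistency-of-connectedness term. The criterion is standard, but the example numbers below are invented, not taken from the paper:

```python
import itertools
import numpy as np

def s_odd(vals):
    """Max of sum(+/- vals) over sign patterns with an odd number of minuses."""
    best = -np.inf
    for signs in itertools.product([1, -1], repeat=len(vals)):
        if signs.count(-1) % 2 == 1:
            best = max(best, sum(s * v for s, v in zip(signs, vals)))
    return best

def cbd_measure(product_expectations, icc=0.0):
    """Kujala-Dzhafarov criterion for a cyclic system of rank n:
    contextual iff s_odd(lambdas) - (n - 2) - ICC > 0."""
    lam = list(product_expectations)
    return s_odd(lam) - (len(lam) - 2) - icc

# Toy rank-4 (CHSH-like) system at the Tsirelson bound, consistently
# connected (ICC = 0): 2*sqrt(2) > 2, hence contextual.
lam = [1 / np.sqrt(2)] * 3 + [-1 / np.sqrt(2)]
print("contextuality measure:", cbd_measure(lam))   # ~0.828 > 0
```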
arXiv Detail & Related papers (2021-07-19T13:23:42Z)
- SemGloVe: Semantic Co-occurrences for GloVe from BERT [55.420035541274444]
GloVe learns word embeddings by leveraging statistical information from word co-occurrence matrices.
We propose SemGloVe, which distills semantic co-occurrences from BERT into static GloVe word embeddings.
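For context, a hedged sketch of the standard GloVe weighted least-squares objective that such distilled semantic co-occurrences would be plugged into; this is the usual formulation on an invented toy count matrix, not SemGloVe's distillation pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)
V, d = 5, 8                       # toy vocabulary size and embedding dim
X = rng.integers(1, 50, size=(V, V)).astype(float)  # co-occurrence counts
# SemGloVe's idea is to fill X from BERT-derived semantic
# co-occurrences instead of raw window counts.

W = 0.01 * rng.normal(size=(V, d)); Wt = 0.01 * rng.normal(size=(V, d))
b = np.zeros(V); bt = np.zeros(V)
f = np.minimum((X / X.max()) ** 0.75, 1.0)          # GloVe weighting

lr = 0.05
for _ in range(500):              # plain gradient descent on the GloVe loss
    err = W @ Wt.T + b[:, None] + bt[None, :] - np.log(X)
    g = f * err
    W -= lr * (g @ Wt)
    Wt -= lr * (g.T @ W)
    b -= lr * g.sum(axis=1)
    bt -= lr * g.sum(axis=0)

err = W @ Wt.T + b[:, None] + bt[None, :] - np.log(X)
print("final weighted loss:", float((f * err ** 2).sum()))
```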
arXiv Detail & Related papers (2020-12-30T15:38:26Z)
- Topology of Word Embeddings: Singularities Reflect Polysemy [68.8204255655161]
We introduce a topological measure of polysemy based on persistent homology that correlates well with the actual number of meanings of a word.
We propose a simple, topologically motivated solution to the SemEval-2010 task on Word Sense Induction & Disambiguation.
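A hedged sketch of how such a topological polysemy estimate could look, assuming the third-party ripser package; the point cloud is synthetic, standing in for contextual embeddings of one word, and the persistence cutoff is a crude heuristic of our own:

```python
import numpy as np
from ripser import ripser   # third-party persistent-homology package

rng = np.random.default_rng(4)
# Synthetic stand-in for contextual embeddings of a polysemous word:
# three separated clusters ~ three meanings.
centers = rng.normal(scale=10.0, size=(3, 16))
cloud = np.concatenate([c + rng.normal(size=(30, 16)) for c in centers])

# H0 persistence diagram: long-lived components ~ distinct meanings.
dgm0 = ripser(cloud, maxdim=0)['dgms'][0]
lifetimes = dgm0[:, 1] - dgm0[:, 0]          # one bar is infinite
finite = lifetimes[np.isfinite(lifetimes)]
threshold = 3.0 * np.median(finite)          # crude persistence cutoff
print("estimated number of meanings:", int((finite > threshold).sum()) + 1)
```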
arXiv Detail & Related papers (2020-11-18T17:21:51Z)
- Dynamic Contextualized Word Embeddings [20.81930455526026]
We introduce dynamic contextualized word embeddings that represent words as a function of both linguistic and extralinguistic context.
Based on a pretrained language model (PLM), dynamic contextualized word embeddings model time and social space jointly.
We highlight potential application scenarios by means of qualitative and quantitative analyses on four English datasets.
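A hedged sketch of the general shape of such a model: a PLM vector shifted by a learned function of extralinguistic features. This is a minimal PyTorch variant of the idea, with a hypothetical `DynamicEmbedding` class, not the paper's architecture:

```python
import torch
import torch.nn as nn

class DynamicEmbedding(nn.Module):
    """Toy dynamic contextualized embedding: PLM vector + learned offset
    computed from extralinguistic context (time, social features)."""
    def __init__(self, dim=768, extra_dim=8):
        super().__init__()
        self.offset = nn.Sequential(
            nn.Linear(dim + extra_dim, dim), nn.Tanh(), nn.Linear(dim, dim)
        )

    def forward(self, plm_vec, extra):
        # extra: e.g. [normalized year, social-graph features, ...]
        return plm_vec + self.offset(torch.cat([plm_vec, extra], dim=-1))

model = DynamicEmbedding()
plm_vec = torch.randn(2, 768)       # stand-in for PLM output vectors
extra = torch.randn(2, 8)           # stand-in for time/social features
print(model(plm_vec, extra).shape)  # torch.Size([2, 768])
```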
arXiv Detail & Related papers (2020-10-23T22:02:40Z)
- Unsupervised Distillation of Syntactic Information from Contextualized Word Representations [62.230491683411536]
We tackle the task of unsupervised disentanglement between semantics and structure in neural language representations.
To this end, we automatically generate groups of sentences which are structurally similar but semantically different.
We demonstrate that our transformation clusters vectors in space by structural properties, rather than by lexical semantics.
arXiv Detail & Related papers (2020-10-11T15:13:18Z)
- Context-theoretic Semantics for Natural Language: an Algebraic Framework [0.0]
We present a framework for natural language semantics in which words, phrases and sentences are all represented as vectors.
We show that the vector representations of words can be considered as elements of an algebra over a field.
arXiv Detail & Related papers (2020-09-22T13:31:37Z)
- Word Rotator's Distance [50.67809662270474]
A key principle in assessing textual similarity is to measure the degree of semantic overlap between two texts by considering word alignment.
We show that the norm of word vectors is a good proxy for word importance, and their angle is a good proxy for word similarity.
We propose a method that first decouples word vectors into their norm and direction, and then computes alignment-based similarity.
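A hedged sketch of that decoupling, assuming the third-party POT (Python Optimal Transport) package for the alignment step; the toy matrices stand in for real word embeddings:

```python
import numpy as np
import ot   # third-party "POT: Python Optimal Transport" package

def wrd(X, Y):
    """Word Rotator's Distance sketch: norms as transport mass,
    cosine distance of directions as transport cost."""
    nx = np.linalg.norm(X, axis=1); ny = np.linalg.norm(Y, axis=1)
    a = nx / nx.sum(); b = ny / ny.sum()           # word importance
    U = X / nx[:, None]; V = Y / ny[:, None]       # word direction
    cost = 1.0 - U @ V.T                           # cosine distance
    plan = ot.emd(a, b, cost)                      # optimal alignment
    return float((plan * cost).sum())

rng = np.random.default_rng(5)
X = rng.normal(size=(4, 300))   # embeddings of words in sentence 1
Y = rng.normal(size=(6, 300))   # embeddings of words in sentence 2
print("WRD:", wrd(X, Y))
```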
arXiv Detail & Related papers (2020-04-30T17:48:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.