k-Contextuality as a Heuristic for Memory Separations in Learning
- URL: http://arxiv.org/abs/2507.11604v1
- Date: Tue, 15 Jul 2025 18:00:00 GMT
- Title: k-Contextuality as a Heuristic for Memory Separations in Learning
- Authors: Mariesa H. Teo, Willers Yang, James Sud, Teague Tomesh, Frederic T. Chong, Eric R. Anschuetz
- Abstract summary: We define a new quantifier of contextuality we call strong k-contextuality. This correlation measure does not induce a similar resource lower bound for quantum generative models. Strong k-contextuality emerges as a measure to help identify problems that are difficult for classical computers, but may not be for quantum computers.
- Score: 1.9827715138685622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Classical machine learning models struggle with learning and prediction tasks on data sets exhibiting long-range correlations. Previously, the existence of a long-range correlational structure known as contextuality was shown to inhibit efficient classical machine learning representations of certain quantum-inspired sequential distributions. Here, we define a new quantifier of contextuality we call strong k-contextuality, and prove that any translation task exhibiting strong k-contextuality cannot be represented to finite relative entropy by a classical streaming model with fewer than k latent states. Importantly, this correlation measure does not induce a similar resource lower bound for quantum generative models. Using this theory as motivation, we develop efficient algorithms which estimate our new measure of contextuality in sequential data, and empirically show that this estimate is a good predictor for the difference in performance of resource-constrained classical and quantum Bayesian networks in modeling the data. Strong k-contextuality thus emerges as a measure to help identify problems that are difficult for classical computers, but may not be for quantum computers.
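The lower bound above counts latent states in a classical streaming model: a generator that emits one symbol at a time while carrying only a bounded internal state between steps. As a minimal sketch of that resource model, here is a toy hidden-Markov-style generator with k latent states; the choice k = 4, the alphabet size, and the random parameters below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Toy classical streaming model with k latent states -- the resource the
# paper's lower bound counts. Illustrative sketch only, not the paper's model.
rng = np.random.default_rng(seed=0)

k = 4            # number of latent (memory) states (hypothetical choice)
n_symbols = 2    # size of the output alphabet

# Row-stochastic emission and transition matrices with random toy entries.
E = rng.dirichlet(np.ones(n_symbols), size=k)  # E[s, x] = P(emit x | state s)
T = rng.dirichlet(np.ones(k), size=k)          # T[s, t] = P(go to t | state s)

def stream(length: int, state: int = 0) -> list[int]:
    """Emit `length` symbols, carrying only the current latent state as memory."""
    out = []
    for _ in range(length):
        x = rng.choice(n_symbols, p=E[state])  # emit from the current state
        out.append(int(x))
        state = rng.choice(k, p=T[state])      # update the bounded memory
    return out

print(stream(12))
```

In the paper's terms, if the target sequential distribution is strongly k-contextual, no generator of this form with fewer than k latent states can match it to finite relative entropy, while quantum generative models face no analogous bound.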
Related papers
- Quantum Algorithms for Causal Estimands [0.0]
Causal machine learning aims to solve issues by estimating the expected outcome of counterfactual events. It is an open question whether these causal algorithms provide opportunities for quantum enhancement.
arXiv Detail & Related papers (2025-05-19T08:58:09Z)
- The Foundations of Tokenization: Statistical and Computational Concerns [51.370165245628975]
Tokenization is a critical step in the NLP pipeline. Despite its recognized importance as a standard representation method in NLP, the theoretical underpinnings of tokenization are not yet fully understood. The present paper contributes to addressing this theoretical gap by proposing a unified formal framework for representing and analyzing tokenizer models.
arXiv Detail & Related papers (2024-07-16T11:12:28Z)
- Arbitrary Polynomial Separations in Trainable Quantum Machine Learning [0.8532753451809455]
Recent theoretical results in quantum machine learning have demonstrated a general trade-off between the expressive power of quantum neural networks (QNNs) and their trainability. We show that contextuality is the source of the expressivity separation, suggesting that other learning tasks with this property may be a natural setting for the use of quantum learning algorithms.
arXiv Detail & Related papers (2024-02-13T17:12:01Z)
- Exploring the Limits of Historical Information for Temporal Knowledge Graph Extrapolation [59.417443739208146]
We propose a new event forecasting model based on a novel training framework of historical contrastive learning.
CENET learns both historical and non-historical dependencies to distinguish the most likely entities.
We evaluate our proposed model on five benchmark graphs.
arXiv Detail & Related papers (2023-08-29T03:26:38Z)
- Exponential separations between classical and quantum learners [2.209921757303168]
We discuss how subtle differences in definitions can result in significantly different requirements and tasks for the learner to meet and solve.
We present two new learning separations where the classical difficulty primarily lies in identifying the function generating the data.
arXiv Detail & Related papers (2023-06-28T08:55:56Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Contextuality and inductive bias in quantum machine learning [0.0]
Generalisation in machine learning often relies on the ability to encode structures present in data into an inductive bias of the model class.
We look at quantum contextuality -- a form of nonclassicality with links to computational advantage.
We show how to construct quantum learning models with the associated inductive bias, and show through our toy problem that they outperform their corresponding classical surrogate models.
arXiv Detail & Related papers (2023-02-02T19:07:26Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Interpretable Quantum Advantage in Neural Sequence Learning [2.575030923243061]
We study the relative expressive power between a broad class of neural network sequence models and a class of recurrent models based on Gaussian operations with non-Gaussian measurements.
We pinpoint quantum contextuality as the source of an unconditional memory separation in the expressivity of the two model classes.
In doing so, we demonstrate that our introduced quantum models are able to outperform state-of-the-art classical models even in practice.
arXiv Detail & Related papers (2022-09-28T18:34:04Z)
- On Long-Tailed Phenomena in Neural Machine Translation [50.65273145888896]
State-of-the-art Neural Machine Translation (NMT) models struggle with generating low-frequency tokens.
We propose a new loss function, the Anti-Focal loss, to better adapt model training to the structural dependencies of conditional text generation.
We show the efficacy of the proposed technique on a number of Machine Translation (MT) datasets, demonstrating that it leads to significant gains over cross-entropy.
arXiv Detail & Related papers (2020-10-10T07:00:57Z)
- A Constraint-Based Algorithm for the Structural Learning of Continuous-Time Bayesian Networks [70.88503833248159]
We propose the first constraint-based algorithm for learning the structure of continuous-time Bayesian networks.
We discuss the different statistical tests and the underlying hypotheses used by our proposal to establish conditional independence.
arXiv Detail & Related papers (2020-07-07T07:34:09Z)