Probabilistic Links Between Quantum Classification of Patterns of Boolean Functions and Hamming Distance
- URL: http://arxiv.org/abs/2510.12736v1
- Date: Tue, 14 Oct 2025 17:13:23 GMT
- Title: Probabilistic Links Between Quantum Classification of Patterns of Boolean Functions and Hamming Distance
- Authors: Theodore Andronikos, Constantinos Bitsakos, Konstantinos Nikas, Georgios I. Goumas, Nectarios Koziris
- Abstract summary: We show that the successful classification probability decreases monotonically with the Hamming distance. We establish that these deviations are not random but are systemic and predictable. We provide, for the first time, precise Hamming distance intervals for the classification probability.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article investigates the probabilistic relationship between quantum classification of Boolean functions and their Hamming distance. By integrating concepts from quantum computing, information theory, and combinatorics, we explore how Hamming distance serves as a metric for analyzing deviations in function classification. Our extensive experimental results confirm that the Hamming distance is a pivotal metric for validating nearest neighbors in the process of classifying random functions. One of the significant conclusions we arrived at is that the successful classification probability decreases monotonically with the Hamming distance. However, key exceptions were found in specific classes, revealing intra-class heterogeneity. We have established that these deviations are not random but are systemic and predictable. Furthermore, we were able to quantify these irregularities, turning potential errors into manageable phenomena. The most important novelty of this work is the demarcation, for the first time to the best of our knowledge, of precise Hamming distance intervals for the classification probability. These intervals bound the possible values the probability can assume and provide a new foundational tool for probabilistic assessment in quantum classification. Practitioners can now endorse classification results with high certainty or dismiss them with confidence. This framework can significantly enhance any quantum classification algorithm's reliability and decision-making capability.
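Concretely, the Hamming distance between two n-variable Boolean functions is the number of the 2^n inputs on which their truth tables disagree. The paper's quantum classifier is not reproduced here; the following is a minimal classical sketch of the distance itself, with illustrative example functions:

```python
from itertools import product

def truth_table(f, n):
    """Evaluate a Boolean function f on all 2**n input bit-strings."""
    return [f(bits) for bits in product((0, 1), repeat=n)]

def hamming_distance(f, g, n):
    """Number of inputs on which the two functions disagree."""
    return sum(a != b for a, b in zip(truth_table(f, n), truth_table(g, n)))

# Two example 3-bit functions: parity and majority.
parity = lambda bits: sum(bits) % 2
majority = lambda bits: int(sum(bits) >= 2)

print(hamming_distance(parity, majority, 3))  # they disagree on 6 of the 8 inputs
```

Under the paper's main result, the smaller this count, the higher the probability that the classifier assigns the two functions to the same class.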
Related papers
- Probabilistic Consistency in Machine Learning and Its Connection to Uncertainty Quantification [0.0]
We show that certain types of self-consistent ML models are equivalent to class-conditional probability distributions. This information is sufficient for tasks such as constructing the multiclass Bayes-optimal classifier and estimating inherent uncertainty in the class assignments.
arXiv Detail & Related papers (2025-07-29T10:27:04Z) - Quantum Classification Outside the Promised Class [2.209328709671456]
We investigate whether it is feasible to obtain insights when the input function deviates from the promised class. We use a recently introduced quantum algorithm that is designed to classify with probability $1.0$ using just a single oracular query. We show that, as long as the input function is close enough to the promised class, it is still possible to obtain useful information.
arXiv Detail & Related papers (2025-04-26T11:50:58Z) - To Believe or Not to Believe Your LLM [51.2579827761899]
We explore uncertainty quantification in large language models (LLMs)
We derive an information-theoretic metric that makes it possible to reliably detect when only epistemic uncertainty is large.
We conduct a series of experiments which demonstrate the advantage of our formulation.
arXiv Detail & Related papers (2024-06-04T17:58:18Z) - Postselection-free learning of measurement-induced quantum dynamics [0.0]
We introduce a general-purpose scheme that can be used to infer any property of the post-measurement ensemble of states.
As an immediate application, we show that our method can be used to verify the emergence of quantum state designs in experiments.
arXiv Detail & Related papers (2023-10-06T11:06:06Z) - Analysis of Diagnostics (Part I): Prevalence, Uncertainty Quantification, and Machine Learning [0.0]
This manuscript is the first in a two-part series that studies deeper connections between classification theory and prevalence.
We propose a numerical homotopy algorithm that estimates $B^\star(q)$ by minimizing a prevalence-weighted empirical error.
We validate our methods on synthetic data and a research-use-only SARS-CoV-2 enzyme-linked immunosorbent assay (ELISA).
arXiv Detail & Related papers (2023-08-30T13:26:49Z) - Quantum Conformal Prediction for Reliable Uncertainty Quantification in Quantum Machine Learning [47.991114317813555]
Quantum models implement implicit probabilistic predictors that produce multiple random decisions for each input through measurement shots.
This paper proposes to leverage such randomness to define prediction sets for both classification and regression that provably capture the uncertainty of the model.
arXiv Detail & Related papers (2023-04-06T22:05:21Z) - Experimental violations of Leggett-Garg's inequalities on a quantum computer [77.34726150561087]
We experimentally observe the violations of Leggett-Garg-Bell's inequalities on single and multi-qubit systems.
Our analysis highlights the limits of present-day quantum platforms, showing that the above-mentioned correlation functions deviate from theoretical predictions as the number of qubits and the depth of the circuit grow.
arXiv Detail & Related papers (2021-09-06T14:35:15Z) - Deconfounding Scores: Feature Representations for Causal Effect Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z) - A Unified Joint Maximum Mean Discrepancy for Domain Adaptation [73.44809425486767]
This paper theoretically derives a unified form of JMMD that is easy to optimize.
From this unified form, we illustrate that JMMD degrades the feature-label dependence that benefits classification.
We propose a novel MMD matrix to promote the dependence, and devise a novel label kernel that is robust to label distribution shift.
arXiv Detail & Related papers (2021-01-25T09:46:14Z) - Tractable Inference in Credal Sentential Decision Diagrams [116.6516175350871]
Probabilistic sentential decision diagrams are logic circuits where the inputs of disjunctive gates are annotated by probability values.
We develop credal sentential decision diagrams, a generalisation of their probabilistic counterpart in which the local probabilities are replaced with credal sets of mass functions.
For a first empirical validation, we consider a simple application based on noisy seven-segment display images.
arXiv Detail & Related papers (2020-08-19T16:04:34Z) - Symmetric Informationally Complete Measurements Identify the Irreducible Difference between Classical and Quantum Systems [0.0]
We describe a general procedure for associating a minimal informationally-complete quantum measurement (or MIC) with a set of linearly independent post-measurement quantum states.
We prove that the representation of the Born Rule obtained from a symmetric informationally-complete measurement (or SIC) minimizes this distinction in at least two senses.
arXiv Detail & Related papers (2018-05-22T16:27:27Z)
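The conformal-prediction entry above turns a model's per-input randomness into prediction sets with a coverage guarantee. The original paper works with quantum measurement shots; the following is a loose classical analogue of split-conformal classification, where the three-class score model and all names are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-class score model (invented for illustration): class k's score
# peaks when the input x is near the center k.
def scores(x):
    logits = np.stack([-(x - c) ** 2 for c in (0.0, 1.0, 2.0)], axis=-1)
    e = np.exp(logits)
    return e / e.sum(axis=-1, keepdims=True)

# Calibration data with known labels.
y_cal = rng.integers(0, 3, size=200)
x_cal = y_cal + rng.normal(0.0, 0.3, size=200)

# Nonconformity score: 1 minus the model's score for the true class.
s_cal = 1.0 - scores(x_cal)[np.arange(len(y_cal)), y_cal]

# Split-conformal threshold for target miscoverage level alpha.
alpha = 0.1
level = np.ceil((len(s_cal) + 1) * (1 - alpha)) / len(s_cal)
q = np.quantile(s_cal, level)

def prediction_set(x):
    """Return every label whose nonconformity is within the threshold."""
    p = scores(np.array([x]))[0]
    return [k for k in range(3) if 1.0 - p[k] <= q]
```

For an input near a class center the set typically contains just that class, while ambiguous inputs receive larger sets; the set size is how the method expresses model uncertainty.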
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.