Entanglement assisted training algorithm for supervised quantum
classifiers
- URL: http://arxiv.org/abs/2006.13302v2
- Date: Tue, 22 Sep 2020 15:20:35 GMT
- Title: Entanglement assisted training algorithm for supervised quantum
classifiers
- Authors: Soumik Adhikary
- Abstract summary: We have harnessed the property of quantum entanglement to build a model that can manipulate multiple training samples along with their labels.
A Bell-inequality-based cost function is constructed that can encode errors from multiple samples simultaneously.
We show that upon minimizing this cost function one can achieve successful classification in benchmark datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new training algorithm for supervised quantum classifiers. Here,
we have harnessed the property of quantum entanglement to build a model that
can simultaneously manipulate multiple training samples along with their
labels. Subsequently, a Bell-inequality-based cost function is constructed that
can encode errors from multiple samples simultaneously, in a way that is not
possible by any classical means. We show that upon minimizing this cost
function one can achieve successful classification in benchmark datasets. The
results presented in this paper are for binary classification problems.
Nevertheless, the analysis can be extended to multi-class classification
problems as well.
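To make the Bell-inequality ingredient concrete, the sketch below evaluates the standard CHSH correlator on a maximally entangled two-qubit (singlet) state with NumPy; a cost built from such correlators can witness genuinely quantum correlations because its value exceeds the classical bound of 2. The measurement angles and the CHSH form are textbook choices, not the paper's specific cost function.

```python
import numpy as np

# Pauli operators for measurements in the X-Z plane
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def obs(theta):
    # Spin observable along an axis at angle theta in the X-Z plane
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state |psi> = (|01> - |10>) / sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    # Correlation <psi| A(a) tensor B(b) |psi>
    return np.real(psi.conj() @ np.kron(obs(a), obs(b)) @ psi)

# CHSH combination at the angles that maximize the quantum value
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828 > 2, violating the classical bound
```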
Related papers
- An Efficient Quantum Classifier Based on Hamiltonian Representations [50.467930253994155]
Quantum machine learning (QML) is a discipline that seeks to transfer the advantages of quantum computing to data-driven tasks.
We propose an efficient approach that circumvents the costs associated with data encoding by mapping inputs to a finite set of Pauli strings.
We evaluate our approach on text and image classification tasks, against well-established classical and quantum models.
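The Pauli-string mapping described above can be sketched as follows: each input feature becomes the coefficient of one Pauli string in a Hamiltonian. The particular strings chosen here (and the omission of Y, kept out so the matrices stay real) are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Single-qubit Pauli matrices (Y omitted to keep the example real-valued)
PAULIS = {
    "I": np.eye(2),
    "X": np.array([[0.0, 1.0], [1.0, 0.0]]),
    "Z": np.diag([1.0, -1.0]),
}

def pauli_string(s):
    # Tensor product of single-qubit Paulis, e.g. "XZ" -> X tensor Z
    m = np.array([[1.0]])
    for ch in s:
        m = np.kron(m, PAULIS[ch])
    return m

def encode(features, strings):
    """Map an input vector x to a Hamiltonian H = sum_i x_i * P_i over a
    fixed, finite set of Pauli strings (the set itself is an assumption)."""
    return sum(x * pauli_string(s) for x, s in zip(features, strings))

H = encode([0.5, -1.0], ["XI", "ZZ"])  # a Hermitian 4x4 Hamiltonian
```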
arXiv Detail & Related papers (2025-04-13T11:49:53Z)
- Quantification using Permutation-Invariant Networks based on Histograms [47.47360392729245]
Quantification is the supervised learning task in which a model is trained to predict the prevalence of each class in a given bag of examples.
This paper investigates the application of deep neural networks to quantification tasks in scenarios where a symmetric supervised approach can be applied.
We propose HistNetQ, a novel neural architecture that relies on a permutation-invariant representation based on histograms.
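A permutation-invariant histogram representation of a bag can be sketched in a few lines: one normalized histogram per feature, concatenated, so that reordering the bag's examples leaves the representation unchanged. The bin count and value range below are illustrative choices, not HistNetQ's exact settings.

```python
import numpy as np

def bag_histogram_features(bag, n_bins=8, lo=0.0, hi=1.0):
    """Permutation-invariant bag representation: one normalized
    histogram per feature, concatenated."""
    bag = np.asarray(bag)                    # shape (n_examples, n_features)
    feats = []
    for j in range(bag.shape[1]):
        counts, _ = np.histogram(bag[:, j], bins=n_bins, range=(lo, hi))
        feats.append(counts / len(bag))      # normalize by bag size
    return np.concatenate(feats)

bag = np.random.default_rng(1).random((50, 3))
rep = bag_histogram_features(bag)            # identical for any ordering of the bag
```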
arXiv Detail & Related papers (2024-03-22T11:25:38Z)
- Quantum-inspired classification based on quantum state discrimination [0.774229787612056]
We present quantum-inspired algorithms for classification tasks inspired by the problem of quantum state discrimination.
By construction, these algorithms can perform multiclass classification, prevent overfitting, and generate probability outputs.
While they could be implemented on a quantum computer, we focus here on classical implementations of such algorithms.
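A classical implementation of the state-discrimination idea can be sketched as: represent each class by a density matrix built from its training samples, then assign an input to the class with the largest overlap. This is a generic discrimination heuristic under assumed normalization, not the paper's exact measurement construction.

```python
import numpy as np

def density_matrix(samples):
    # Class prototype: mean outer product of L2-normalized feature vectors
    V = np.asarray(samples, dtype=float)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return (V[:, :, None] * V[:, None, :]).mean(axis=0)

def discriminate(x, rhos):
    """Pick the class whose density matrix has the largest overlap
    <x|rho|x> with the normalized input vector."""
    x = np.asarray(x, dtype=float)
    x = x / np.linalg.norm(x)
    return int(np.argmax([x @ r @ x for r in rhos]))

rhos = [density_matrix([[1.0, 0.05], [0.98, 0.1]]),   # toy class-0 samples
        density_matrix([[0.05, 1.0], [0.1, 0.97]])]   # toy class-1 samples
```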
arXiv Detail & Related papers (2023-03-27T16:09:40Z)
- Ensemble-learning variational shallow-circuit quantum classifiers [4.104704267247209]
We propose two ensemble-learning classification methods, namely bootstrap aggregating and adaptive boosting.
The protocols are exemplified on classical handwritten digits as well as quantum phase discrimination of a symmetry-protected topological Hamiltonian.
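The bootstrap-aggregating half of the proposal can be sketched generically: fit each base classifier on a resampled training set and majority-vote the predictions. Here `train_fn` is a placeholder for training and querying a shallow-circuit classifier; any `(X, y, x) -> label` function works in the sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def bagged_predict(train_fn, X, y, x_new, n_models=25):
    """Bootstrap aggregating sketch: each base model sees a bootstrap
    resample of the training data; predictions are majority-voted.
    `train_fn(X, y, x) -> label` is an assumed stand-in for a
    variational shallow-circuit classifier."""
    votes = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))   # sample with replacement
        votes.append(train_fn(X[idx], y[idx], x_new))
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]                 # majority vote
```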
arXiv Detail & Related papers (2023-01-30T07:26:35Z)
- Generalization Bounds for Few-Shot Transfer Learning with Pretrained Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small in the case of class-feature-variability collapse.
arXiv Detail & Related papers (2022-12-23T18:46:05Z)
- A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations in toy and real-world datasets using the qiskit quantum computing SDK.
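Single-qubit data re-uploading can be simulated classically in a few lines: alternate data-encoding rotations with trainable rotations, then read out a measurement probability as the class score. The layer layout and the use of Ry rotations only are illustrative assumptions; the paper's implementations use the qiskit SDK.

```python
import numpy as np

def ry(theta):
    # Rotation about the Y axis of the Bloch sphere
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def classify(x, weights):
    """Single-qubit data re-uploading, simulated classically: alternate
    data-encoding and trainable rotations, then read P(|0>) as the
    class score (layer structure is an illustrative assumption)."""
    state = np.array([1.0, 0.0])        # start in |0>
    for w in weights:
        state = ry(x) @ state           # re-upload the data point
        state = ry(w) @ state           # trainable rotation
    return float(abs(state[0]) ** 2)    # probability of measuring |0>

score = classify(x=0.3, weights=[0.1, -0.4, 0.7])
```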
arXiv Detail & Related papers (2022-11-23T18:25:32Z)
- Multi-class Classification with Fuzzy-feature Observations: Theory and Algorithms [36.810603503167755]
We propose a novel framework to address a new realistic problem called multi-class classification with imprecise observations (MCIMO).
First, we give the theoretical analysis of the MCIMO problem based on fuzzy Rademacher complexity.
Then, two practical algorithms based on support vector machine and neural networks are constructed to solve the proposed new problem.
arXiv Detail & Related papers (2022-06-09T07:14:00Z)
- Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
arXiv Detail & Related papers (2021-09-05T15:15:07Z)
- Quantum Machine Learning with SQUID [64.53556573827525]
We present the Scaled QUantum IDentifier (SQUID), an open-source framework for exploring hybrid Quantum-Classical algorithms for classification problems.
We provide examples of using SQUID in a standard binary classification problem from the popular MNIST dataset.
arXiv Detail & Related papers (2021-04-30T21:34:11Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first precise high-dimensional analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
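The smoothing idea against label flipping can be sketched as a vote over randomly perturbed label vectors: predict with a simple base classifier under many random flips of the binary training labels and take the majority. The certification step from the paper (turning the vote margin into a provable robustness guarantee) is omitted; only the smoothing-by-voting idea is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_label(x, X, y, n_votes=501, flip_p=0.2):
    """Label-flip smoothing sketch: vote a 1-nearest-neighbour
    prediction over many randomly label-flipped copies of the binary
    training labels (the certified radius from the paper is omitted)."""
    nn = np.argmin(np.linalg.norm(X - x, axis=1))       # nearest training point
    flipped = rng.random((n_votes, len(y))) < flip_p    # which draws flip its label
    preds = np.where(flipped[:, nn], 1 - y[nn], y[nn])  # 1-NN prediction per draw
    return int(np.round(preds.mean()))                  # majority vote
```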
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.