An Exponential Reduction in Training Data Sizes for Machine Learning
Derived Entanglement Witnesses
- URL: http://arxiv.org/abs/2311.18162v2
- Date: Wed, 28 Feb 2024 03:40:50 GMT
- Title: An Exponential Reduction in Training Data Sizes for Machine Learning
Derived Entanglement Witnesses
- Authors: Aiden R. Rosebush, Alexander C. B. Greenwood, Brian T. Kirby, Li Qian
- Abstract summary: We propose a support vector machine (SVM) based approach for generating an entanglement witness.
For $N$ qubits, the SVM portion of this approach requires only $O(6^N)$ training states, whereas an existing method needs $O(2^{4^N})$.
- Score: 45.17332714965704
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a support vector machine (SVM) based approach for generating an
entanglement witness that requires exponentially less training data than
previously proposed methods. SVMs generate hyperplanes represented by a
weighted sum of expectation values of local observables whose coefficients are
optimized to sum to a positive number for all separable states and a negative
number for as many entangled states as possible near a specific target state.
Previous SVM-based approaches for entanglement witness generation used large
amounts of randomly generated separable states to perform training, a task with
considerable computational overhead. Here, we propose a method for orienting
the witness hyperplane using only the significantly smaller set of states
consisting of the eigenstates of the generalized Pauli matrices and a set of
entangled states near the target entangled states. With the orientation of the
witness hyperplane set by the SVM, we tune the plane's placement using a
differential program that ensures perfect classification accuracy on a limited
test set as well as maximal noise tolerance. For $N$ qubits, the SVM portion of
this approach requires only $O(6^N)$ training states, whereas an existing
method needs $O(2^{4^N})$. We use this method to construct witnesses of 4 and 5
qubit GHZ states with coefficients agreeing with stabilizer formalism witnesses
to within 6.5 percent and 1 percent, respectively. We also use the same
training states to generate novel 4 and 5 qubit W state witnesses. Finally, we
computationally verify these witnesses on small test sets and propose methods
for further verification.
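To make the abstract's construction concrete: an entanglement witness is an observable $W$ with $\mathrm{Tr}(W\rho) \ge 0$ for every separable state $\rho$ and $\mathrm{Tr}(W\rho) < 0$ for some entangled states, and a linear SVM decision function $f(\rho) = \sum_i c_i \langle O_i \rangle_\rho + b$ over expectation values of local observables has exactly this form. Each qubit contributes the six eigenstates of the Pauli $X$, $Y$, and $Z$ operators (two per basis), so products over $N$ qubits give the $6^N$ separable training states; for $N = 4$ that is $6^4 = 1296$ states, versus $2^{4^4} = 2^{256} \approx 10^{77}$ at the random-sampling scaling quoted above. The sketch below is not the authors' code: it illustrates only the SVM orientation step for a 2-qubit Bell target using scikit-learn's `LinearSVC`, all variable names are invented for illustration, and the paper's second-stage differential placement tuning is omitted.
```python
# Minimal sketch (not the authors' implementation) of the SVM step:
# orient a linear witness for a 2-qubit Bell state using only the
# 6^N = 36 products of single-qubit Pauli eigenstates as separable data.
import itertools
import numpy as np
from sklearn.svm import LinearSVC

# Single-qubit Paulis and the six eigenstates of X, Y, Z.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I, X, Y, Z]
EIGENSTATES = [np.linalg.eigh(P)[1][:, k] for P in (X, Y, Z) for k in (0, 1)]

def features(rho):
    """Expectation values <P ⊗ Q> for all 16 two-qubit Pauli products."""
    return np.real([np.trace(rho @ np.kron(P, Q)) for P in PAULIS for Q in PAULIS])

# Separable training states: all 36 products of Pauli eigenstates.
sep = []
for a, b in itertools.product(EIGENSTATES, repeat=2):
    psi = np.kron(a, b)
    sep.append(features(np.outer(psi, psi.conj())))

# Entangled training states near the target: Bell state with white noise
# (Werner states with p in [0.8, 1.0] are all entangled).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())
ent = [features(p * rho_bell + (1 - p) * np.eye(4) / 4)
       for p in np.linspace(0.8, 1.0, 10)]

Xdata = np.array(sep + ent)
y = np.array([1] * len(sep) + [-1] * len(ent))  # +1 separable, -1 entangled

# Large C approximates a hard-margin SVM over the 46 training states.
svm = LinearSVC(C=1e4, max_iter=100_000).fit(Xdata, y)
w, b = svm.coef_[0], svm.intercept_[0]  # witness coefficients, up to scale/offset

# Sanity check: decision values should be >= 0 on the separable training
# states and < 0 on the noisy Bell states.
print("min over separable:", (Xdata[:len(sep)] @ w + b).min())
print("max over entangled:", (Xdata[len(sep):] @ w + b).max())
```
Scaling the same recipe to the paper's 4- and 5-qubit GHZ and W targets would use $6^4 = 1296$ or $6^5 = 7776$ separable feature vectors over $4^N$-dimensional Pauli features; the coefficients $w$ fix only the hyperplane's orientation, with its placement refined by the paper's subsequent tuning stage.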
Related papers
- A Little Confidence Goes a Long Way [3.6371715211657243]
We introduce a group of related methods for binary classification tasks using probes of the hidden state activations in large language models (LLMs).
Performance is on par with the largest and most advanced LLMs currently available, while requiring orders of magnitude fewer computational resources and no labeled data.
arXiv Detail & Related papers (2024-08-20T23:36:00Z) - A Safe Screening Rule with Bi-level Optimization of $\nu$ Support Vector Machine [15.096652880354199]
We propose a safe screening rule with bi-level optimization for the $\nu$-SVM.
Our SRBO-$\nu$-SVM is derived rigorously from the Karush-Kuhn-Tucker (KKT) conditions.
We also develop an efficient dual coordinate descent method (DCDM) to further improve computational speed.
arXiv Detail & Related papers (2024-03-04T06:55:57Z) - Efficient Verification-Based Face Identification [50.616875565173274]
We study the problem of performing face verification with an efficient neural model $f$.
Our model leads to a substantially smaller $f$, requiring only 23k parameters and 5M floating-point operations (FLOPs).
We use six face verification datasets to demonstrate that our method is on par with or better than state-of-the-art models.
arXiv Detail & Related papers (2023-12-20T18:08:02Z) - Intersection of Parallels as an Early Stopping Criterion [64.8387564654474]
We propose a method to spot an early stopping point in the training iterations without the need for a validation set.
For a wide range of learning rates, our method, called Cosine-Distance Criterion (CDC), leads to better generalization on average than all the methods that we compare against.
arXiv Detail & Related papers (2022-08-19T19:42:41Z) - Value-Consistent Representation Learning for Data-Efficient Reinforcement Learning [105.70602423944148]
We propose a novel method, called value-consistent representation learning (VCR), to learn representations that are directly related to decision-making.
Instead of aligning an imagined state with the real state returned by the environment, VCR applies a $Q$-value head to both states and obtains two distributions of action values.
Our method is demonstrated to achieve new state-of-the-art performance among search-free RL algorithms.
arXiv Detail & Related papers (2022-06-25T03:02:25Z) - Classification of four-qubit entangled states via Machine Learning [0.0]
We apply the support vector machine (SVM) algorithm to derive a set of entanglement witnesses (EWs).
These witnesses identify entanglement patterns in families of four-qubit states.
We numerically verify that the SVM approach provides an effective tool to address the entanglement witness problem.
arXiv Detail & Related papers (2022-05-21T18:13:20Z) - Machine-Learning-Derived Entanglement Witnesses [55.76279816849472]
We show a correspondence between linear support vector machines (SVMs) and entanglement witnesses.
We use this correspondence to generate entanglement witnesses for bipartite and tripartite qubit (and qudit) target entangled states.
arXiv Detail & Related papers (2021-07-05T22:28:02Z) - Estimation of pure states using three measurement bases [0.0]
We introduce a new method to estimate unknown pure $d$-dimensional quantum states using the probability distributions associated with only three measurement bases.
The viability of the protocol is experimentally demonstrated using two different and complementary high-dimensional quantum information platforms.
arXiv Detail & Related papers (2020-06-05T03:28:51Z) - The Right Tool for the Job: Matching Model and Instance Complexities [62.95183777679024]
As NLP models become larger, executing a trained model requires significant computational resources, incurring monetary and environmental costs.
We propose a modification to contextual representation fine-tuning which, during inference, allows for an early (and fast) "exit".
We test our proposed modification on five different datasets in two tasks: three text classification datasets and two natural language inference benchmarks.
arXiv Detail & Related papers (2020-04-16T04:28:08Z)