Dimensionality reduction with variational encoders based on subsystem purification
- URL: http://arxiv.org/abs/2209.09791v2
- Date: Wed, 28 Sep 2022 20:11:47 GMT
- Title: Dimensionality reduction with variational encoders based on subsystem purification
- Authors: Raja Selvarajan, Manas Sajjan, Travis S. Humble, and Sabre Kais
- Abstract summary: We propose an alternative approach to variational autoencoders to reduce the dimensionality of states represented in higher-dimensional Hilbert spaces.
We use the Bars and Stripes (BAS) dataset on an 8x8 grid to create efficient encoding states and report a classification accuracy of 95%.
- Score: 0.27998963147546135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficient methods for encoding and compression are likely to pave the way towards efficient trainability on higher-dimensional Hilbert spaces, overcoming the issue of barren plateaus. Here we propose an alternative approach to variational autoencoders to reduce the dimensionality of states represented in higher-dimensional Hilbert spaces. To this end we build a variational autoencoder circuit that takes a dataset as input and optimizes the parameters of a Parameterized Quantum Circuit (PQC) ansatz to produce an output state that can be represented as a tensor product of two subsystems by minimizing $\mathrm{Tr}(\rho^2)$. The output of this circuit is passed through a series of controlled-swap gates and measurements to produce a state with half the number of qubits while retaining the features of the starting state, in the same spirit as dimension-reduction techniques used in classical algorithms. The output obtained is used for supervised learning to validate the encoding procedure thus developed. We use the Bars and Stripes (BAS) dataset on an 8x8 grid to create efficient encoding states and report a classification accuracy of 95% on it. The demonstrated example is thus a proof of principle that the method reduces states represented in large Hilbert spaces while maintaining the features required for any machine learning algorithms that follow.
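To make the encoding step concrete, here is a minimal sketch, not the authors' implementation: it assumes PennyLane, an AngleEmbedding data loader, and a StronglyEntanglingLayers ansatz as stand-ins for the paper's PQC, and it casts subsystem purification as driving the purity $\mathrm{Tr}(\rho_A^2)$ of the retained half towards one (cost $1 - \mathrm{Tr}(\rho_A^2)$), one reading of the abstract's $\mathrm{Tr}(\rho^2)$ objective; for a bipartite pure state, $\mathrm{Tr}(\rho_A^2) = 1$ exactly when the state is a tensor product of the two subsystems.

```python
# Minimal sketch of the purification-based encoder (assumptions: PennyLane,
# AngleEmbedding as the data loader, StronglyEntanglingLayers as the PQC
# ansatz -- illustrative stand-ins, not the authors' exact circuit).
import pennylane as qml
from pennylane import numpy as np

n_qubits = 8                       # qubits encoding one BAS sample (illustrative)
keep = list(range(4))              # the half retained after compression
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def reduced_state(params, features):
    qml.AngleEmbedding(features, wires=range(n_qubits))          # load the sample
    qml.StronglyEntanglingLayers(params, wires=range(n_qubits))  # PQC ansatz
    return qml.density_matrix(wires=keep)                        # rho_A of the kept half

def cost(params, features):
    rho = reduced_state(params, features)
    purity = np.real(np.trace(rho @ rho))   # Tr(rho_A^2); equals 1 iff product state
    return 1.0 - purity                     # drive the kept half towards purity

shape = qml.StronglyEntanglingLayers.shape(n_layers=3, n_wires=n_qubits)
params = np.random.uniform(0, 2 * np.pi, size=shape, requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.2)
x = np.pi * np.random.randint(0, 2, size=n_qubits)   # toy stand-in for a BAS sample
for _ in range(100):
    params, _ = opt.step_and_cost(lambda p: cost(p, x), params)
```

Once the retained subsystem is approximately pure, discarding the other half, which the paper implements with controlled-swap gates and measurements, leaves a four-qubit state whose features can feed the downstream supervised classifier.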
Related papers
- Accelerating Error Correction Code Transformers [56.75773430667148]
We introduce a novel acceleration method for transformer-based decoders.
We achieve a 90% compression ratio and reduce arithmetic operation energy consumption by at least 224 times on modern hardware.
arXiv Detail & Related papers (2024-10-08T11:07:55Z)
- Reducing Quantum Error Correction Overhead with Versatile Flag-Sharing Syndrome Extraction Circuits [5.770351255180495]
An efficient error syndrome extraction circuit should use fewer ancillary qubits, quantum gates, and measurements.
We propose to design parallel flagged syndrome extraction with shared flag qubits for quantum stabilizer codes.
arXiv Detail & Related papers (2024-06-30T06:35:48Z)
- Rank Reduction Autoencoders -- Enhancing interpolation on nonlinear manifolds [3.180674374101366]
The Rank Reduction Autoencoder (RRAE) is an autoencoder with an enlarged latent space.
Two formulations are presented, a strong and a weak one, that build a reduced basis accurately representing the latent space.
We demonstrate the efficiency of our formulations by applying them to several tasks and comparing the results to those of other autoencoders.
arXiv Detail & Related papers (2024-05-22T20:33:09Z)
- Measurement-free fault-tolerant logical zero-state encoding of the distance-three nine-qubit surface code in a one-dimensional qubit array [0.0]
We propose an efficient encoding method for the distance-three, nine-qubit surface code and show its fault tolerance.
We experimentally demonstrate the logical zero-state encoding of the surface code using a superconducting quantum computer on the cloud.
We numerically show that fault-tolerant encoding of this large code can be achieved by appropriate error detection.
arXiv Detail & Related papers (2023-03-30T08:13:56Z)
- Constant Depth Code Deformations in the Parity Architecture [0.0]
We present a protocol to encode and decode arbitrary quantum states in the parity architecture with constant circuit depth.
We show that our method can reduce the depth of implementing the quantum Fourier transform by a factor of two when allowing measurements.
arXiv Detail & Related papers (2023-03-15T13:15:26Z)
- Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods [91.54785981649228]
This paper focuses on non-linear two-layer autoencoders trained in the challenging proportional regime.
Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods.
For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders.
arXiv Detail & Related papers (2022-12-27T12:37:34Z)
- Generating quantum feature maps for SVM classifier [0.0]
We present and compare two methods of generating quantum feature maps for a quantum-enhanced support vector machine.
The first method is a genetic algorithm with a multi-objective fitness function using the penalty method, which incorporates maximizing the classification accuracy.
The second method uses a variational quantum circuit, focusing on how to construct the ansatz based on unitary matrix decomposition.
arXiv Detail & Related papers (2022-07-23T07:28:23Z)
- Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging [142.11622043078867]
We propose a principled Degradation-Aware Unfolding Framework (DAUF) that estimates parameters from the compressed image and physical mask, and then uses these parameters to control each iteration.
By plugging the Half-Shuffle Transformer (HST) into DAUF, we establish the first Transformer-based deep unfolding method, the Degradation-Aware Unfolding Half-Shuffle Transformer (DAUHST), for HSI reconstruction.
arXiv Detail & Related papers (2022-05-20T11:37:44Z)
- Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers and achieved performance comparable to pure data-driven networks while using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Bosonic field digitization for quantum computers [62.997667081978825]
We address the representation of lattice bosonic fields in a discretized field amplitude basis.
We develop methods to predict error scaling and present efficient qubit implementation strategies.
arXiv Detail & Related papers (2021-08-24T15:30:04Z) - On Sparsifying Encoder Outputs in Sequence-to-Sequence Models [90.58793284654692]
We take Transformer as the testbed and introduce a layer of gates in-between the encoder and the decoder.
The gates are regularized using the expected value of the sparsity-inducing L0 penalty; a generic sketch of such a gate appears after this list.
We investigate the effects of this sparsification on two machine translation and two summarization tasks.
arXiv Detail & Related papers (2020-04-24T16:57:52Z)
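As forward-referenced in the sparsified-encoder entry above, here is a minimal hedged sketch of a gate layer regularized by an expected-L0 surrogate, in the style of the hard-concrete relaxation of Louizos et al. (2018); the class name L0Gate, the per-feature gating, and all hyperparameters are illustrative assumptions, not the cited paper's implementation.

```python
# Hedged sketch: hard-concrete L0 gating of encoder outputs (Louizos et al.,
# 2018 relaxation). Class name, per-feature gating, and hyperparameters are
# illustrative assumptions, not the cited paper's implementation.
import torch
import torch.nn as nn

class L0Gate(nn.Module):
    def __init__(self, dim, beta=2 / 3, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(dim))  # gate logits, one per feature
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self, h):
        if self.training:  # stochastic hard-concrete sample
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:              # deterministic gate at evaluation time
            s = torch.sigmoid(self.log_alpha)
        z = torch.clamp(s * (self.zeta - self.gamma) + self.gamma, 0.0, 1.0)
        return h * z       # exactly-zero gates sparsify the encoder outputs

    def expected_l0(self):
        # Differentiable surrogate for the gate count: sum_j P(z_j != 0).
        c = self.beta * torch.log(torch.tensor(-self.gamma / self.zeta))
        return torch.sigmoid(self.log_alpha - c).sum()

gate = L0Gate(dim=512)
h = torch.randn(8, 10, 512)   # (batch, positions, hidden) encoder states
out = gate(h)                 # gated states handed to the decoder
# training loss (sketch): task_loss + lam * gate.expected_l0()
```

The regularizer trades task quality against the expected number of open gates, which is the mechanism the summary describes.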
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.