Prediction and compression of lattice QCD data using machine learning
algorithms on quantum annealer
- URL: http://arxiv.org/abs/2112.02120v1
- Date: Fri, 3 Dec 2021 19:04:35 GMT
- Title: Prediction and compression of lattice QCD data using machine learning
algorithms on quantum annealer
- Authors: Boram Yoon, Chia Cheng Chang, Garrett T. Kenyon, Nga T.T. Nguyen,
Ermal Rrapaj
- Abstract summary: We present regression and compression algorithms for lattice QCD data.
In the regression algorithm, we encode the correlation between the input and output variables into a sparse coding machine learning algorithm.
In the compression algorithm, we define a mapping from lattice QCD data of floating-point numbers to the binary coefficients that closely reconstruct the input data.
- Score: 4.987315310656657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present regression and compression algorithms for lattice QCD data
utilizing the efficient binary optimization ability of quantum annealers. In
the regression algorithm, we encode the correlation between the input and
output variables into a sparse coding machine learning algorithm. The trained
correlation pattern is used to predict lattice QCD observables of unseen
lattice configurations from other observables measured on the lattice. In the
compression algorithm, we define a mapping from lattice QCD data of
floating-point numbers to the binary coefficients that closely reconstruct the
input data from a set of basis vectors. Since the reconstruction is not exact,
the mapping defines a lossy compression, but a reasonably small number of
binary coefficients are able to reconstruct the input vector of lattice QCD
data with the reconstruction error much smaller than the statistical
fluctuation. In both applications, we use D-Wave quantum annealers to solve the
NP-hard binary optimization problems of the machine learning algorithms.
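The regression idea above — inferring a binary sparse code from measured input observables and using it to predict unmeasured output observables — can be illustrated with a minimal sketch. Everything here is a toy assumption: the dictionary blocks `D_in`/`D_out` are random stand-ins for a trained sparse-coding dictionary, and exhaustive search over binary codes replaces the D-Wave annealer that the paper uses to solve the NP-hard binary optimization.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical trained dictionary, split into the rows that act on the
# input observables and the rows that act on the output observables.
n_in, n_out, k = 6, 3, 5
D_in = rng.normal(size=(n_in, k))
D_out = rng.normal(size=(n_out, k))

# Synthetic "unseen lattice configuration": a binary code plus small noise.
a_true = rng.integers(0, 2, size=k)
x_in = D_in @ a_true + 0.01 * rng.normal(size=n_in)

# Infer the binary code from the input observables alone. Minimizing
# ||x_in - D_in a||^2 over a in {0,1}^k is a QUBO; brute force stands in
# for the quantum annealer on this tiny instance.
best_a, best_err = None, np.inf
for bits in itertools.product([0, 1], repeat=k):
    a = np.array(bits)
    err = float(np.sum((x_in - D_in @ a) ** 2))
    if err < best_err:
        best_a, best_err = a, err

# Predict the output observables from the shared code.
y_pred = D_out @ best_a
print("inferred code:", best_a)
print("predicted outputs:", y_pred)
```

The key design point mirrored here is that input and output observables share one binary code, so a code fitted to the inputs alone carries over to the output prediction.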
Related papers
- Data Compression using Rank-1 Lattices for Parameter Estimation in Machine Learning [0.0]
Mean squared error and regularized versions of it are standard loss functions in supervised machine learning.
We present algorithms to reduce extensive data sets to a smaller size using rank-1 lattices.
arXiv Detail & Related papers (2024-09-20T12:35:24Z)
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- Quantization of Large Language Models with an Overdetermined Basis [73.79368761182998]
We introduce an algorithm for data quantization based on the principles of Kashin representation.
Our findings demonstrate that Kashin Quantization achieves competitive or superior quality in model performance.
arXiv Detail & Related papers (2024-04-15T12:38:46Z)
- Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z)
- Variational quantum regression algorithm with encoded data structure [0.21756081703276003]
We construct a quantum regression algorithm wherein the quantum state directly encodes the classical data table.
We show explicitly, for the first time, how the classical data structure can be exploited directly through quantum subroutines.
arXiv Detail & Related papers (2023-07-07T00:30:16Z)
- Deep Quantum Error Correction [73.54643419792453]
Quantum error correction codes (QECC) are a key component for realizing the potential of quantum computing.
In this work, we efficiently train novel end-to-end deep quantum error decoders.
The proposed method demonstrates the power of neural decoders for QECC by achieving state-of-the-art accuracy.
arXiv Detail & Related papers (2023-01-27T08:16:26Z)
- Quantum-parallel vectorized data encodings and computations on trapped-ions and transmons QPUs [0.3262230127283452]
We introduce two new data encoding schemes, QCrank and QBArt.
QCrank encodes a sequence of real-valued data as rotations of the data qubits, allowing for high storage density.
QBArt embeds a binary representation of the data in the computational basis, requiring fewer quantum measurements.
arXiv Detail & Related papers (2023-01-19T01:26:32Z)
- Quantum Extremal Learning [0.8937790536664091]
We propose a quantum algorithm for 'extremal learning', which is the process of finding the input to a hidden function that extremizes the function output.
The algorithm, called quantum extremal learning (QEL), consists of a parametric quantum circuit that is variationally trained to model data input-output relationships.
arXiv Detail & Related papers (2022-05-05T17:37:26Z)
- Benchmarking Small-Scale Quantum Devices on Computing Graph Edit Distance [52.77024349608834]
Graph Edit Distance (GED) measures the degree of (dis)similarity between two graphs in terms of the operations needed to make them identical.
In this paper we present a comparative study of two quantum approaches to computing GED.
arXiv Detail & Related papers (2021-11-19T12:35:26Z)
- Lossy compression of statistical data using quantum annealer [1.433758865948252]
We present a new lossy compression algorithm for statistical floating-point data.
The algorithm finds a set of basis vectors and their binary coefficients that precisely reconstruct the original data.
The compression algorithm is demonstrated on two different datasets of lattice quantum chromodynamics simulations.
arXiv Detail & Related papers (2021-10-05T16:16:41Z)
- SreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of KRR require that all the data is stored in the main memory.
We propose StreaMRAK - a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
arXiv Detail & Related papers (2021-08-23T21:03:09Z)
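The lossy compression scheme described in the main abstract (and in the related lossy-compression paper above) maps each floating-point data vector to binary coefficients over a set of basis vectors. A minimal sketch under toy assumptions: the basis `B` is a random stand-in for learned basis vectors, and exhaustive enumeration of the binary coefficients replaces the quantum annealer.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical basis: k basis vectors (columns) for data of dimension d.
# Compression stores only k bits per vector plus the shared basis.
d, k = 8, 6
B = rng.normal(size=(d, k))

# Synthetic data vector that is nearly representable in the basis.
x = B @ rng.integers(0, 2, size=k) + 0.01 * rng.normal(size=d)

# The mapping picks a in {0,1}^k minimizing ||x - B a||^2 — a QUBO.
# Brute force over the 2^k codes stands in for the D-Wave annealer.
best_a, best_err = None, np.inf
for bits in itertools.product([0, 1], repeat=k):
    a = np.array(bits)
    err = float(np.sum((x - B @ a) ** 2))
    if err < best_err:
        best_a, best_err = a, err

x_hat = B @ best_a  # lossy reconstruction from the binary coefficients
print("binary coefficients:", best_a)
print("reconstruction error:", best_err)
```

As in the paper's setting, the reconstruction is not exact, but the goal is only that the residual error stay well below the statistical fluctuation of the data being compressed.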
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences of its use.