Learning Density Functionals from Noisy Quantum Data
- URL: http://arxiv.org/abs/2409.02921v1
- Date: Wed, 4 Sep 2024 17:59:55 GMT
- Title: Learning Density Functionals from Noisy Quantum Data
- Authors: Emiel Koridon, Felix Frohnert, Eric Prehn, Evert van Nieuwenburg, Jordi Tura, Stefano Polla
- Abstract summary: Noisy intermediate-scale quantum (NISQ) devices are used to generate training data for machine learning (ML) models.
We show that a neural-network ML model can successfully generalize from small datasets subject to noise typical of NISQ algorithms.
Our findings suggest a promising pathway for leveraging NISQ devices in practical quantum simulations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The search for useful applications of noisy intermediate-scale quantum (NISQ) devices in quantum simulation has been hindered by their intrinsic noise and the high costs associated with achieving high accuracy. A promising approach to finding utility despite these challenges involves using quantum devices to generate training data for classical machine learning (ML) models. In this study, we explore the use of noisy data generated by quantum algorithms in training an ML model to learn a density functional for the Fermi-Hubbard model. We benchmark various ML models against exact solutions, demonstrating that a neural-network ML model can successfully generalize from small datasets subject to noise typical of NISQ algorithms. The learning procedure can effectively filter out unbiased sampling noise, resulting in a trained model that outperforms any individual training data point. Conversely, when trained on data with expressibility and optimization error typical of the variational quantum eigensolver, the model replicates the biases present in the training data. The trained models can be applied to solving new problem instances in a Kohn-Sham-like density optimization scheme, benefiting from automatic differentiability and achieving reasonably accurate solutions on most problem instances. Our findings suggest a promising pathway for leveraging NISQ devices in practical quantum simulations, highlighting both the potential benefits and the challenges that need to be addressed for successful integration of quantum computing and ML techniques.
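To make the workflow described in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' code) of the two key steps: fitting a neural-network density functional to noisy energy estimates, and reusing the trained model in a Kohn-Sham-like density optimization via automatic differentiation. The lattice size, network architecture, soft particle-number penalty, placeholder training data, and the use of JAX with optax are all illustrative assumptions.

```python
# Hypothetical sketch, not the authors' implementation: learn an energy
# functional E[n] of the site occupations of a small Fermi-Hubbard chain from
# noisy estimates, then minimize it over densities with automatic
# differentiation. Lattice size, architecture, and noise handling are
# illustrative assumptions; optax is assumed to be available.
import jax
import jax.numpy as jnp
import optax

L_SITES = 8                       # number of lattice sites (assumption)

def init_mlp(key, sizes=(L_SITES, 32, 32, 1)):
    """Small MLP mapping a site-occupation profile n -> scalar energy."""
    params = []
    for fan_in, fan_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (fan_in, fan_out)) / jnp.sqrt(fan_in)
        params.append((w, jnp.zeros(fan_out)))
    return params

def energy_model(params, n):
    x = n
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze()

# --- Training on noisy data: densities (N, L_SITES), energies (N,) are noisy
# --- estimates, e.g. from sampling-limited NISQ measurements (placeholders).
def loss(params, densities, energies):
    preds = jax.vmap(lambda n: energy_model(params, n))(densities)
    return jnp.mean((preds - energies) ** 2)      # MSE averages unbiased noise

def fit(params, densities, energies, steps=2000, lr=1e-3):
    opt = optax.adam(lr)
    opt_state = opt.init(params)

    @jax.jit
    def step(params, opt_state):
        grads = jax.grad(loss)(params, densities, energies)
        updates, opt_state = opt.update(grads, opt_state)
        return optax.apply_updates(params, updates), opt_state

    for _ in range(steps):
        params, opt_state = step(params, opt_state)
    return params

# --- Kohn-Sham-like density optimization: minimize E[n] + v . n over densities
# --- n in [0, 1]^L that (softly) conserve the particle number.
def density_objective(theta, params, v, n_electrons, penalty=100.0):
    n = jax.nn.sigmoid(theta)                      # keeps 0 <= n_i <= 1
    total = energy_model(params, n) + jnp.dot(v, n)
    return total + penalty * (jnp.sum(n) - n_electrons) ** 2

def optimize_density(params, v, n_electrons, steps=500, lr=0.05):
    theta = jnp.zeros(L_SITES)                     # start from half filling
    grad_fn = jax.jit(jax.grad(density_objective))
    for _ in range(steps):
        theta = theta - lr * grad_fn(theta, params, v, n_electrons)
    return jax.nn.sigmoid(theta)
```

With placeholder training data, `fit` suppresses unbiased sampling noise through the mean-squared-error objective, and a call like `optimize_density(trained_params, v_ext, n_electrons=4)` would return a candidate ground-state density profile for a new external potential; the paper's actual functional decomposition and constraint handling may differ.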
Related papers
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z) - Quantum-Train: Rethinking Hybrid Quantum-Classical Machine Learning in the Model Compression Perspective [7.7063925534143705]
We introduce the Quantum-Train (QT) framework, a novel approach that integrates quantum computing with machine learning algorithms.
QT achieves remarkable results by employing a quantum neural network alongside a classical mapping model.
arXiv Detail & Related papers (2024-05-18T14:35:57Z) - Flexible Error Mitigation of Quantum Processes with Data Augmentation Empowered Neural Model [9.857921247636451]
We propose a data augmentation empowered neural model for error mitigation (DAEM).
Our model does not require any prior knowledge about the specific noise type and measurement settings.
It can estimate noise-free statistics solely from the noisy measurement results of the target quantum process.
arXiv Detail & Related papers (2023-11-03T05:52:14Z) - Probabilistic Sampling of Balanced K-Means using Adiabatic Quantum Computing [93.83016310295804]
Adiabatic quantum computers (AQCs) make it possible to implement problems of research interest, which has sparked the development of quantum representations for computer vision tasks.
In this work, we explore the potential of using the sampled solutions for probabilistic balanced k-means clustering.
Instead of discarding non-optimal solutions, we propose to use them to compute calibrated posterior probabilities with little additional compute cost (see the sketch after this entry).
This allows us to identify ambiguous solutions and data points, which we demonstrate on a D-Wave AQC on synthetic tasks and real visual data.
arXiv Detail & Related papers (2023-10-18T17:59:45Z)
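A minimal, hypothetical sketch of the posterior-weighting idea from the entry above: rather than keeping only the annealer's best sample, all sampled cluster assignments are weighted by a Boltzmann-like factor to obtain per-point posterior probabilities and to flag ambiguous points. The softmax energy-to-probability mapping, the temperature, and the entropy-based ambiguity score are assumptions; the cited paper's actual calibration procedure may differ.

```python
# Hypothetical sketch of posterior weighting over annealer samples; the
# Boltzmann-like weighting and entropy-based ambiguity score are assumptions,
# not necessarily the calibration used in the cited paper.
import jax.numpy as jnp
from jax.nn import softmax

def sample_weights(energies, temperature=1.0):
    """Weight each sampled solution by its energy (lower energy = higher weight)."""
    return softmax(-jnp.asarray(energies) / temperature)

def assignment_posterior(assignments, weights, n_clusters):
    """assignments: (n_samples, n_points) integer cluster labels per sample.
    Returns (n_points, n_clusters) posterior probabilities per data point."""
    one_hot = jnp.eye(n_clusters)[assignments]     # (S, P, K)
    return jnp.einsum("s,spk->pk", weights, one_hot)

def ambiguity(posterior, eps=1e-12):
    """Entropy of each point's assignment distribution; high values flag ambiguity."""
    return -jnp.sum(posterior * jnp.log(posterior + eps), axis=-1)
```

Under these assumptions, running the three functions on the raw sample set yields calibrated posteriors and an ambiguity score per data point, as described in the entry above.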
- Quantum support vector data description for anomaly detection [0.5439020425819]
Anomaly detection is a critical problem in data analysis and pattern recognition, finding applications in various domains.
We introduce quantum support vector data description (QSVDD), an unsupervised learning algorithm designed for anomaly detection.
arXiv Detail & Related papers (2023-10-10T07:35:09Z) - Drastic Circuit Depth Reductions with Preserved Adversarial Robustness by Approximate Encoding for Quantum Machine Learning [0.5181797490530444]
We implement methods for the efficient preparation of quantum states representing encoded image data using variational, genetic and matrix product state based algorithms.
Results show that these methods can approximately prepare states to a level suitable for QML using circuits two orders of magnitude shallower than a standard state preparation implementation.
arXiv Detail & Related papers (2023-09-18T01:49:36Z) - A didactic approach to quantum machine learning with a single qubit [68.8204255655161]
We focus on the case of learning with a single qubit, using data re-uploading techniques.
We implement the different proposed formulations on toy and real-world datasets using the Qiskit quantum computing SDK.
arXiv Detail & Related papers (2022-11-23T18:25:32Z) - Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z) - Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity of different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z) - Quantum-tailored machine-learning characterization of a superconducting qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z) - Modeling Noisy Quantum Circuits Using Experimental Characterization [0.40611352512781856]
Noisy intermediate-scale quantum (NISQ) devices offer unique platforms to test and evaluate the behavior of non-fault-tolerant quantum computing.
We present a test-driven approach to characterizing NISQ programs that manages the complexity of noisy circuit modeling.
arXiv Detail & Related papers (2020-01-23T16:45:49Z)