Non-parametric Active Learning and Rate Reduction in Many-body Hilbert
Space with Rescaled Logarithmic Fidelity
- URL: http://arxiv.org/abs/2107.00195v1
- Date: Thu, 1 Jul 2021 03:13:16 GMT
- Title: Non-parametric Active Learning and Rate Reduction in Many-body Hilbert
Space with Rescaled Logarithmic Fidelity
- Authors: Wei-Ming Li and Shi-Ju Ran
- Abstract summary: In quantum and quantum-inspired machine learning, the very first step is to embed the data in the quantum space known as Hilbert space.
We propose the rescaled logarithmic fidelity (RLF) and a non-parametric active learning scheme in the quantum space, which we name RLF-NAL.
Our results imply that machine learning in the Hilbert space complies with the principle of maximal coding rate reduction.
- Score: 4.781805457699204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In quantum and quantum-inspired machine learning, the very first step is to
embed the data in the quantum space known as Hilbert space. Developing the quantum
kernel function (QKF), which defines the distances among samples in the
Hilbert space, is one of the fundamental topics for machine learning. In this
work, we propose the rescaled logarithmic fidelity (RLF) and a non-parametric
active learning scheme in the quantum space, which we name RLF-NAL. The rescaling
takes advantage of the non-linearity of the kernel to tune the mutual distances
of samples in the Hilbert space, and meanwhile avoids the exponentially small
fidelities between quantum many-qubit states. We compare RLF-NAL with several
well-known non-parametric algorithms, including naive Bayes classifiers,
$k$-nearest neighbors, and spectral clustering. Our method exhibits excellent
accuracy, particularly in the unsupervised case with no labeled samples and in the
few-shot cases with small numbers of labeled samples. With t-SNE visualizations,
our results imply that machine learning in the Hilbert space complies with the
principle of maximal coding rate reduction, where the low-dimensional data exhibit
within-class compressibility, between-class discrimination, and overall diversity.
Our proposals can be applied to other quantum and quantum-inspired machine learning
methods, including those using parametric models such as tensor networks, quantum
circuits, and quantum neural networks.
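The abstract combines two ingredients: a fidelity-based kernel that is rescaled to avoid the exponentially small overlaps between many-qubit product states, and a non-parametric classification rule built on that kernel. The sketch below is a minimal quantum-inspired illustration under stated assumptions, not the paper's exact construction: it assumes the common cosine/sine single-qubit feature map, treats `beta` as the tunable rescaling factor, and replaces the full RLF-NAL active-learning procedure with a simple nearest-prototype few-shot rule.

```python
import numpy as np

def feature_map(x):
    """Map each feature x_i in [0, 1] to the single-qubit state
    [cos(pi*x_i/2), sin(pi*x_i/2)]; a sample is the tensor product of
    these qubits, living in a 2^n-dimensional Hilbert space that is
    never built explicitly."""
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=-1)

def log_fidelity(x, y):
    """Logarithmic fidelity between two product states: the raw fidelity
    is a product of n single-qubit overlaps and is exponentially small
    in n, so summing the logs avoids the underflow."""
    overlaps = np.sum(feature_map(x) * feature_map(y), axis=-1)
    return np.sum(np.log(np.abs(overlaps) + 1e-12))

def rescaled_log_fidelity(x, y, beta=1.1):
    """Illustrative rescaling (an assumption, not the paper's formula):
    exponentiate the per-qubit average log-fidelity with a tunable base
    beta > 1 that stretches or compresses distances in Hilbert space."""
    return beta ** (log_fidelity(x, y) / len(x))

def few_shot_classify(unlabeled, labeled, labels, beta=1.1):
    """Nearest-prototype rule: assign each unlabeled sample to the class
    whose labeled samples give the largest mean RLF (a hypothetical
    simplification of the RLF-NAL procedure)."""
    classes = np.unique(labels)
    preds = []
    for x in unlabeled:
        scores = [np.mean([rescaled_log_fidelity(x, y, beta)
                           for y in labeled[labels == c]]) for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

Because the product-state fidelity factorizes over qubits, each kernel evaluation costs $O(n)$ despite the $2^n$-dimensional embedding space, which is what makes a non-parametric method practical in the many-body Hilbert space.

The abstract also reads the t-SNE visualizations through the principle of maximal coding rate reduction (MCR$^2$). For reference, the MCR$^2$ objective is commonly written, for features $Z \in \mathbb{R}^{d \times m}$ partitioned into classes by diagonal membership matrices $\Pi_j$, as

$$ \Delta R(Z, \Pi, \epsilon) = \frac{1}{2}\log\det\!\Big(I + \frac{d}{m\epsilon^2}\, Z Z^{\top}\Big) \;-\; \sum_{j} \frac{\mathrm{tr}(\Pi_j)}{2m}\,\log\det\!\Big(I + \frac{d}{\mathrm{tr}(\Pi_j)\,\epsilon^2}\, Z \Pi_j Z^{\top}\Big), $$

where the first term rewards overall diversity of the representation, the second term rewards within-class compressibility, and a large gap between them corresponds to between-class discrimination, matching the three properties listed in the abstract.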
Related papers
- Extending Quantum Perceptrons: Rydberg Devices, Multi-Class Classification, and Error Tolerance [67.77677387243135]
Quantum Neuromorphic Computing (QNC) merges quantum computation with neural computation to create scalable, noise-resilient algorithms for quantum machine learning (QML).
At the core of QNC is the quantum perceptron (QP), which leverages the analog dynamics of interacting qubits to enable universal quantum computation.
arXiv Detail & Related papers (2024-11-13T23:56:20Z)
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$ quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing $d$ tunable RZ gates and $G-d$ Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that a sample complexity scaling linearly in $d$ is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in $d$.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- Non-Unitary Quantum Machine Learning [0.0]
We introduce several probabilistic quantum algorithms that overcome the normal unitary restrictions in quantum machine learning.
We show that residual connections between layers of a variational ansatz can prevent barren plateaus in models which would otherwise contain them.
We also demonstrate a novel rotationally invariant encoding for point cloud data via Schur-Weyl duality.
arXiv Detail & Related papers (2024-05-27T17:42:02Z)
- Continuous-variable quantum kernel method on a programmable photonic quantum processor [0.0]
We experimentally prove that the CV quantum kernel method successfully classifies several datasets robustly even under experimental imperfections.
This demonstration sheds light on the utility of CV quantum systems for QML and should stimulate further study in other CV QML algorithms.
arXiv Detail & Related papers (2024-05-02T08:33:31Z)
- The Quantum Path Kernel: a Generalized Quantum Neural Tangent Kernel for Deep Quantum Machine Learning [52.77024349608834]
Building a quantum analog of classical deep neural networks represents a fundamental challenge in quantum computing.
A key issue is how to address the inherent non-linearity of classical deep learning.
We introduce the Quantum Path Kernel, a formulation of quantum machine learning capable of replicating those aspects of deep machine learning.
arXiv Detail & Related papers (2022-12-22T16:06:24Z)
- Noisy Quantum Kernel Machines [58.09028887465797]
An emerging class of quantum learning machines is that based on the paradigm of quantum kernels.
We study how dissipation and decoherence affect their performance.
We show that decoherence and dissipation can be seen as an implicit regularization for the quantum kernel machines.
arXiv Detail & Related papers (2022-04-26T09:52:02Z)
- Bosonic field digitization for quantum computers [62.997667081978825]
We address the representation of lattice bosonic fields in a discretized field amplitude basis.
We develop methods to predict error scaling and present efficient qubit implementation strategies.
arXiv Detail & Related papers (2021-08-24T15:30:04Z)
- Tree tensor network classifiers for machine learning: from quantum-inspired to quantum-assisted [0.0]
We describe a quantum-assisted machine learning (QAML) method in which multivariate data is encoded into quantum states in a Hilbert space whose dimension is exponentially large in the length of the data vector.
We present an approach that can be implemented on gate-based quantum computing devices.
arXiv Detail & Related papers (2021-04-06T02:31:48Z)
- Efficient and Flexible Approach to Simulate Low-Dimensional Quantum Lattice Models with Large Local Hilbert Spaces [0.08594140167290096]
We introduce a mapping that allows one to construct artificial $U(1)$ symmetries for any type of lattice model.
Exploiting the generated symmetries significantly reduces the numerical expenses related to the local degrees of freedom.
Our findings motivate an intuitive physical picture of the truncations occurring in typical algorithms.
arXiv Detail & Related papers (2020-08-19T14:13:56Z)
- Quantum embeddings for machine learning [5.16230883032882]
Quantum classifiers are trainable quantum circuits used as machine learning models.
We propose to train the first part of the circuit -- the embedding -- with the objective of maximally separating data classes in Hilbert space.
This approach provides a powerful analytic framework for quantum machine learning.
arXiv Detail & Related papers (2020-01-10T19:00:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.