Training the parametric interactions in an analog bosonic quantum neural network with Fock basis measurement
- URL: http://arxiv.org/abs/2411.19112v2
- Date: Wed, 09 Apr 2025 06:18:22 GMT
- Title: Training the parametric interactions in an analog bosonic quantum neural network with Fock basis measurement
- Authors: Julien Dudas, Baptiste Carles, Elie Gouzien, Julie Grollier, Danijela Marković
- Abstract summary: Quantum neural networks have the potential to be seamlessly integrated with quantum devices for automatic recognition of quantum states. We propose leveraging bosonic modes and performing Fock basis measurements, enabling the extraction of an exponential number of features relative to the number of modes. We show that the network can be trained even though the number of trainable parameters scales only linearly with the number of modes, whereas the number of neurons grows exponentially.
- Score: 0.9786690381850356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum neural networks have the potential to be seamlessly integrated with quantum devices for the automatic recognition of quantum states. However, performing complex tasks requires a large number of neurons densely connected through trainable, parameterized weights - a challenging feat when using qubits. To address this, we propose leveraging bosonic modes and performing Fock basis measurements, enabling the extraction of an exponential number of features relative to the number of modes. Unlike qubits, bosons can be coupled through multiple parametric drives, with amplitudes, phases, and frequency detunings serving dual purposes: data encoding and trainable parameters. We demonstrate that these parameters, despite their differing physical dimensions, can be trained cohesively using backpropagation to solve benchmark tasks of increasing complexity. Notably, we show that the network can be trained even though the number of trainable parameters scales only linearly with the number of modes, whereas the number of neurons grows exponentially. Furthermore, we show that training not only reduces the number of measurements required for feature extraction compared to untrained quantum neural networks, such as quantum reservoir computing, but also significantly enhances the expressivity of the network, enabling it to solve tasks that are out of reach for quantum reservoir computing.
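The mechanism described in the abstract can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's actual model or scale: two modes, a Fock cutoff of 4, and a single beamsplitter-like parametric coupling whose amplitude `g` and phase `phi` stand in for the trainable drive parameters. It shows the key scaling claim: the Fock-basis readout yields `cutoff**modes` features (exponential in the number of modes) from a handful of coupling parameters.

```python
import numpy as np
from scipy.linalg import expm

def annihilation(cutoff):
    """Truncated annihilation operator in the Fock basis."""
    return np.diag(np.sqrt(np.arange(1, cutoff)), k=1)

cutoff, modes = 4, 2           # toy sizes chosen for illustration
a = annihilation(cutoff)
I = np.eye(cutoff)

# Mode operators on the joint Hilbert space; its dimension cutoff**modes
# (here 16) grows exponentially with the number of modes.
a1 = np.kron(a, I)
a2 = np.kron(I, a)

# One beamsplitter-like parametric coupling. Its amplitude g and phase phi
# play the role of trainable parameters (the values here are arbitrary).
g, phi = 0.3, 0.7
H = g * (np.exp(1j * phi) * a1.conj().T @ a2
         + np.exp(-1j * phi) * a2.conj().T @ a1)

# Evolve the Fock state |n1=1, n2=0> and read out Fock-basis probabilities.
psi0 = np.zeros(cutoff**modes, dtype=complex)
psi0[1 * cutoff + 0] = 1.0     # flat index of |1, 0>
psi = expm(-1j * H) @ psi0

# One feature per Fock state: cutoff**modes features from O(modes) parameters.
features = np.abs(psi) ** 2
```

In a training loop, `features` would feed a readout layer and the gradients with respect to `g` and `phi` would be obtained by backpropagating through the evolution, as the paper does for its full parameter set.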
Related papers
- Quantum Convolutional Neural Network with Flexible Stride [7.362858964229726]
We propose a novel quantum convolutional neural network algorithm.
It can flexibly adjust the stride to accommodate different tasks.
It can achieve exponential acceleration of data scale in less memory compared with its classical counterpart.
arXiv Detail & Related papers (2024-12-01T02:37:06Z) - Simulating Quantum Many-Body States with Neural-Network Exponential Ansatz [0.0]
We develop a surrogate neural network solver that generates the exponential ansatz parameters using the Hamiltonian parameters as inputs.
We illustrate the effectiveness of this approach by training neural networks of several quantum many-body systems.
arXiv Detail & Related papers (2024-11-12T15:48:23Z) - Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - Training-efficient density quantum machine learning [2.918930150557355]
Quantum machine learning requires powerful, flexible and efficiently trainable models.
We present density quantum neural networks, a learning model incorporating randomisation over a set of trainable unitaries.
arXiv Detail & Related papers (2024-05-30T16:40:28Z) - A Quantum-Classical Collaborative Training Architecture Based on Quantum State Fidelity [50.387179833629254]
We introduce a collaborative classical-quantum architecture called co-TenQu.
Co-TenQu enhances a classical deep neural network by up to 41.72% in a fair setting.
It outperforms other quantum-based methods by up to 1.9 times and achieves similar accuracy while utilizing 70.59% fewer qubits.
arXiv Detail & Related papers (2024-02-23T14:09:41Z) - Enhancing the expressivity of quantum neural networks with residual connections [0.0]
We propose a quantum circuit-based algorithm to implement quantum residual neural networks (QResNets).
Our work lays the foundation for a complete quantum implementation of the classical residual neural networks.
arXiv Detail & Related papers (2024-01-29T04:00:51Z) - Multimodal deep representation learning for quantum cross-platform verification [60.01590250213637]
Cross-platform verification, a critical undertaking in the realm of early-stage quantum computing, endeavors to characterize the similarity of two imperfect quantum devices executing identical algorithms.
We introduce an innovative multimodal learning approach, recognizing that the formalism of data in this task embodies two distinct modalities.
We devise a multimodal neural network to independently extract knowledge from these modalities, followed by a fusion operation to create a comprehensive data representation.
arXiv Detail & Related papers (2023-11-07T04:35:03Z) - Determining the ability for universal quantum computing: Testing controllability via dimensional expressivity [39.58317527488534]
Controllability tests can be used in the design of quantum devices to reduce the number of external controls.
We devise a hybrid quantum-classical algorithm based on a parametrized quantum circuit.
arXiv Detail & Related papers (2023-08-01T15:33:41Z) - Neural networks for Bayesian quantum many-body magnetometry [0.0]
Entangled quantum many-body systems can be used as sensors that enable the estimation of parameters with a precision larger than that achievable with ensembles of individual quantum detectors.
This entails a complexity that can hinder the applicability of Bayesian inference techniques.
We show how to circumvent these issues by using neural networks that faithfully reproduce the dynamics of quantum many-body sensors.
arXiv Detail & Related papers (2022-12-22T22:13:49Z) - Quantum reservoir neural network implementation on coherently coupled quantum oscillators [1.7086737326992172]
We propose a quantum reservoir implementation that provides a large number of densely connected neurons.
We analyse a specific hardware implementation based on superconducting circuits.
We obtain state-of-the-art accuracy of 99% on benchmark tasks.
arXiv Detail & Related papers (2022-09-07T15:24:51Z) - An Amplitude-Based Implementation of the Unit Step Function on a Quantum Computer [0.0]
We introduce an amplitude-based implementation for approximating non-linearity in the form of the unit step function on a quantum computer.
We describe two distinct circuit types which receive their input either directly from a classical computer, or as a quantum state when embedded in a more advanced quantum algorithm.
arXiv Detail & Related papers (2022-06-07T07:14:12Z) - Quantum Annealing Formulation for Binary Neural Networks [40.99969857118534]
In this work, we explore binary neural networks, which are lightweight yet powerful models typically intended for resource constrained devices.
We devise a quadratic unconstrained binary optimization formulation for the training problem.
While the problem is intractable, i.e., the cost to estimate the binary weights scales exponentially with network size, we show how the problem can be optimized directly on a quantum annealer.
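The quadratic unconstrained binary optimization (QUBO) form referred to above can be illustrated with a toy example. This is a hedged sketch: the matrix `Q` below is arbitrary and not derived from any network's training objective, and exhaustive enumeration stands in for the annealer at this tiny size; it is exactly this enumeration whose cost scales exponentially with network size.

```python
import itertools
import numpy as np

# Toy QUBO: minimise x^T Q x over binary vectors x in {0, 1}^n.
# Q is an illustrative symmetric matrix, not a real training objective.
Q = np.array([[ 1.0, -2.0,  0.5],
              [-2.0,  1.0, -1.0],
              [ 0.5, -1.0,  2.0]])

def qubo_energy(x, Q):
    """Energy of binary assignment x under QUBO matrix Q."""
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

# Brute-force search over all 2**n assignments; an annealer samples
# low-energy assignments of the same objective without enumeration.
best = min(itertools.product([0, 1], repeat=3),
           key=lambda x: qubo_energy(x, Q))
```

In the paper's setting, the binary variables would encode the network's binary weights and `Q` would encode the training loss, so the annealer's low-energy samples correspond to trained weight configurations.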
arXiv Detail & Related papers (2021-07-05T03:20:54Z) - Searching for Low-Bit Weights in Quantized Neural Networks [129.8319019563356]
Quantized neural networks with low-bit weights and activations are attractive for developing AI accelerators.
We propose to regard the discrete weights in an arbitrary quantized neural network as searchable variables, and utilize a differentiable method to search for them accurately.
arXiv Detail & Related papers (2020-09-18T09:13:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.