Gaussian boson sampling and multi-particle event optimization by machine
learning in the quantum phase space
- URL: http://arxiv.org/abs/2102.12142v1
- Date: Wed, 24 Feb 2021 09:08:15 GMT
- Title: Gaussian boson sampling and multi-particle event optimization by machine
learning in the quantum phase space
- Authors: Claudio Conti
- Abstract summary: We use neural networks to represent the characteristic function of many-body Gaussian states in the quantum phase space.
We compute boson pattern probabilities by automatic differentiation.
The results are potentially useful for the creation of new sources and complex circuits for quantum technologies.
- Score: 0.11421942894219898
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We use neural networks to represent the characteristic function of many-body
Gaussian states in the quantum phase space. By a pullback mechanism, we model
transformations due to unitary operators as linear layers that can be cascaded
to simulate complex multi-particle processes. We use the layered neural
networks for non-classical light propagation in random interferometers, and
compute boson pattern probabilities by automatic differentiation. We also
demonstrate that multi-particle events in Gaussian boson sampling can be
optimized by a proper design and training of the neural network weights. The
results are potentially useful for the creation of new sources and complex
circuits for quantum technologies.
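The pullback mechanism in the abstract can be illustrated numerically: a Gaussian state's characteristic function is a Gaussian in phase space, and a unitary with symplectic matrix S acts on it as the linear layer ξ → Sᵀξ. The sketch below (not the paper's code; the single-mode state, rotation angle, and covariance convention are illustrative assumptions) checks that pulling back the argument equals transforming the covariance and mean directly:

```python
import numpy as np

def chi_gaussian(xi, cov, mean):
    # characteristic function of a Gaussian state in phase space
    return np.exp(-0.5 * xi @ cov @ xi + 1j * mean @ xi)

def pullback(xi, S):
    # a unitary with symplectic matrix S acts on chi as the linear layer xi -> S.T @ xi
    return S.T @ xi

# single-mode vacuum-like Gaussian state (covariance convention is illustrative)
cov = 0.5 * np.eye(2)
mean = np.zeros(2)

# a phase-space rotation as one symplectic "linear layer" (e.g. a phase shifter)
theta = 0.3
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

xi = np.array([0.7, -0.2])
# evaluating the transformed state = evaluating the old chi at S.T @ xi ...
val_pullback = chi_gaussian(pullback(xi, S), cov, mean)
# ... which agrees with transforming covariance and mean directly
val_direct = chi_gaussian(xi, S @ cov @ S.T, S @ mean)
assert np.isclose(val_pullback, val_direct)
```

Cascading several such layers composes the symplectic matrices, which is how a multi-element interferometer would be modeled; the paper additionally obtains boson pattern probabilities from derivatives of the characteristic function via automatic differentiation, which this sketch omits.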
Related papers
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
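The core Fourier-layer idea behind FNOs (transform to frequency space, apply learnable per-mode weights on a truncated set of low modes, transform back) can be sketched minimally; the grid size, mode count, and random weights below are illustrative, not the authors' implementation:

```python
import numpy as np

def fourier_layer(u, weights, n_modes):
    # u: (n_grid,) real signal; weights: (n_modes,) complex per-mode multipliers
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]   # keep only the low modes
    return np.fft.irfft(out_hat, n=len(u))

rng = np.random.default_rng(0)
u = rng.standard_normal(64)
w = rng.standard_normal(8) + 1j * rng.standard_normal(8)
v = fourier_layer(u, w, 8)
assert v.shape == u.shape
```

A full FNO interleaves such spectral layers with pointwise nonlinearities; the layer above is linear, as the check `fourier_layer(2*u, w, 8) == 2*v` would confirm.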
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
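The flavor of such a kernel-based learner can be conveyed with generic kernel ridge regression on the d tunable angles; everything here (the RBF kernel, the smooth toy target standing in for a linear property, and all hyperparameters) is a hypothetical stand-in, not the paper's construction:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row sets X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, lam=1e-3):
    # solve (K + lam*I) alpha = y for the dual coefficients
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_test):
    return rbf_kernel(X_test, X_train) @ alpha

# toy stand-in: features are d tunable RZ angles, target a smooth observable
rng = np.random.default_rng(1)
d = 3
X = rng.uniform(-np.pi, np.pi, size=(200, d))
y = np.cos(X).sum(axis=1)          # hypothetical "linear property"
alpha = kernel_ridge_fit(X, y)
pred = kernel_ridge_predict(X, alpha, X[:5])
```

The trade-off in the summary corresponds to how aggressively the kernel is approximated: an exact kernel costs more per prediction, while truncated or random-feature approximations cut the computation at some loss in accuracy.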
arXiv Detail & Related papers (2024-08-22T08:21:28Z)
- Towards Efficient Quantum Hybrid Diffusion Models [68.43405413443175]
We propose a new methodology to design quantum hybrid diffusion models.
We propose two possible hybridization schemes combining quantum computing's superior generalization with classical networks' modularity.
arXiv Detail & Related papers (2024-02-25T16:57:51Z)
- Enhancing the expressivity of quantum neural networks with residual connections [0.0]
We propose a quantum circuit-based algorithm to implement quantum residual neural networks (QResNets)
Our work lays the foundation for a complete quantum implementation of the classical residual neural networks.
arXiv Detail & Related papers (2024-01-29T04:00:51Z)
- Complexity of Gaussian quantum optics with a limited number of non-linearities [4.532517021515834]
We show that computing transition amplitudes of Gaussian processes with a single-layer of non-linearities is hard for classical computers.
We show how an efficient algorithm to solve this problem could be used to efficiently approximate outcome probabilities of a Gaussian boson sampling experiment.
arXiv Detail & Related papers (2023-10-09T18:00:04Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductivity.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
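The vector-quantization ingredient can be sketched in isolation: latents are snapped to their nearest codebook entry, so repeated latents share a code and their associated computations can be reused. Codebook size and latent dimension below are arbitrary illustrative choices, not the VQ-NQS configuration:

```python
import numpy as np

def vector_quantize(z, codebook):
    # nearest-codebook-entry assignment: identical latents map to the same
    # code index, which is what allows repeated local-energy terms to be cached
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]

rng = np.random.default_rng(2)
codebook = rng.standard_normal((16, 4))   # 16 codes, each of dimension 4
z = rng.standard_normal((10, 4))
idx, z_q = vector_quantize(z, codebook)
assert z_q.shape == z.shape
```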
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z)
- Quantum Markov Chain Monte Carlo with Digital Dissipative Dynamics on Quantum Computers [52.77024349608834]
We develop a digital quantum algorithm that simulates interaction with an environment using a small number of ancilla qubits.
We evaluate the algorithm by simulating thermal states of the transverse Ising model.
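As a classical cross-check of what such a simulation targets, the thermal (Gibbs) state of a tiny transverse-field Ising model can be computed exactly by diagonalization. This brute-force two-site sketch is not the paper's quantum algorithm; the couplings and temperature are arbitrary:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# 2-site transverse Ising model: H = -J Z1 Z2 - h (X1 + X2)
J, h, beta = 1.0, 0.5, 2.0
H = -J * kron(Z, Z) - h * (kron(X, I2) + kron(I2, X))

# Gibbs state via eigendecomposition: rho = exp(-beta H) / Tr exp(-beta H)
evals, evecs = np.linalg.eigh(H)
w = np.exp(-beta * evals)
rho = (evecs * w) @ evecs.T / w.sum()

energy = np.trace(rho @ H)   # thermal expectation value of H
```

For larger systems this exact construction scales exponentially, which is precisely why digital dissipative algorithms with a few ancilla qubits are interesting.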
arXiv Detail & Related papers (2021-03-04T18:21:00Z)
- Learnability and Complexity of Quantum Samples [26.425493366198207]
Given a quantum circuit, a quantum computer can sample the output distribution exponentially faster in the number of bits than classical computers.
Can we learn the underlying quantum distribution using models with training parameters that scale polynomially in n under a fixed training time?
We study four kinds of generative models: Deep Boltzmann Machines (DBMs), Generative Adversarial Networks (GANs), Long Short-Term Memory (LSTM) and Autoregressive GANs, on learning quantum data sets generated by deep random circuits.
arXiv Detail & Related papers (2020-10-22T18:45:25Z)
- Autoregressive Transformer Neural Network for Simulating Open Quantum Systems via a Probabilistic Formulation [5.668795025564699]
We present an approach for tackling open quantum system dynamics.
We compactly represent quantum states with autoregressive transformer neural networks.
Efficient algorithms have been developed to simulate the dynamics of the Liouvillian superoperator.
arXiv Detail & Related papers (2020-09-11T18:00:00Z)
- Recurrent Quantum Neural Networks [7.6146285961466]
Recurrent neural networks are the foundation of many sequence-to-sequence models in machine learning.
We construct a quantum recurrent neural network (QRNN) with demonstrable performance on non-trivial tasks.
We evaluate the QRNN on MNIST classification, both by feeding the QRNN each image pixel-by-pixel and by utilising modern data augmentation as a preprocessing step.
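The pixel-by-pixel feeding scheme can be sketched with a minimal classical recurrent cell (the quantum circuit itself is not reproduced here; hidden size, weight scale, and the random image are illustrative assumptions):

```python
import numpy as np

def rnn_step(h, x, W_h, W_x, b):
    # one recurrent update: the hidden state absorbs one pixel at a time
    return np.tanh(W_h @ h + W_x @ x + b)

rng = np.random.default_rng(3)
hidden = 8
W_h = rng.standard_normal((hidden, hidden)) * 0.1
W_x = rng.standard_normal((hidden, 1)) * 0.1
b = np.zeros(hidden)

image = rng.random((28, 28))          # MNIST-sized toy image
h = np.zeros(hidden)
for px in image.reshape(-1):          # sequence of length 784, one pixel each
    h = rnn_step(h, np.array([px]), W_h, W_x, b)
# h now summarizes the whole image and would feed a classifier head
```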
arXiv Detail & Related papers (2020-06-25T17:59:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences of its use.