Parallel Hybrid Networks: an interplay between quantum and classical neural networks
- URL: http://arxiv.org/abs/2303.03227v2
- Date: Wed, 1 Nov 2023 14:07:10 GMT
- Authors: Mo Kordzanganeh, Daria Kosichkina, Alexey Melnikov
- Abstract summary: We introduce a new, interpretable class of hybrid quantum neural networks that pass the inputs of the dataset in parallel.
We demonstrate this claim on two synthetic datasets sampled from periodic distributions with added protrusions as noise.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum neural networks represent a new machine learning paradigm that has
recently attracted much attention due to its promise. Under certain
conditions, these models approximate the distribution of their dataset with a
truncated Fourier series. The trigonometric nature of this fit could result in
angle-embedded quantum neural networks struggling to fit the non-harmonic
features in a given dataset. Moreover, the interpretability of neural networks
remains a challenge. In this work, we introduce a new, interpretable class of
hybrid quantum neural networks that pass the inputs of the dataset in parallel
to 1) a classical multi-layered perceptron and 2) a variational quantum
circuit, and then the outputs of the two are linearly combined. We observe that
the quantum neural network creates a smooth sinusoidal foundation based on the
training set, and the classical perceptron then fills the non-harmonic gaps in
the landscape. We demonstrate this claim on two synthetic datasets sampled from
periodic distributions with added protrusions as noise. The training results
indicate that the parallel hybrid network architecture could improve the
solution optimality on periodic datasets with additional noise.
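The parallel architecture described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: the variational quantum circuit is emulated classically as a truncated Fourier series (the function class the abstract attributes to angle-embedded quantum neural networks), the classical branch is a small multi-layer perceptron, and the two outputs are linearly combined. The truncation order `K`, the hidden width, and the mixing weights `alpha`/`beta` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3        # Fourier truncation order of the emulated quantum branch (assumption)
HIDDEN = 8   # MLP hidden width (assumption)

def quantum_branch(x, a, b, a0):
    """Truncated Fourier series: the smooth, periodic 'foundation'
    attributed to the angle-embedded quantum circuit."""
    k = np.arange(1, K + 1)
    return a0 + np.cos(np.outer(x, k)) @ a + np.sin(np.outer(x, k)) @ b

def classical_branch(x, W1, b1, w2, b2):
    """One-hidden-layer perceptron that can fill non-harmonic gaps."""
    h = np.tanh(x[:, None] * W1 + b1)   # shape (n, HIDDEN)
    return h @ w2 + b2                  # shape (n,)

def parallel_hybrid(x, q_params, c_params, alpha, beta):
    """Both branches see the same inputs in parallel; their outputs
    are linearly combined, as in the architecture described above."""
    return alpha * quantum_branch(x, *q_params) + beta * classical_branch(x, *c_params)

# Illustrative (untrained) parameters
q_params = (rng.normal(size=K), rng.normal(size=K), 0.1)
c_params = (rng.normal(size=HIDDEN), rng.normal(size=HIDDEN),
            rng.normal(size=HIDDEN), 0.0)

x = np.linspace(0.0, 2.0 * np.pi, 50)
y = parallel_hybrid(x, q_params, c_params, alpha=0.7, beta=0.3)
```

By construction the emulated quantum branch is 2π-periodic while the tanh perceptron is not, so only their weighted sum can capture periodic structure plus localized protrusions. In the paper the branch parameters and the combination weights are trained jointly; here they are left untrained for brevity.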
Related papers
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems (2024-09-05)
  We use FNOs to model the evolution of random quantum spin systems.
  We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
- Dissipation-driven quantum generative adversarial networks (2024-08-28)
  We introduce a novel dissipation-driven quantum generative adversarial network (DQGAN) architecture tailored specifically for generating classical data.
  The classical data is encoded into the input qubits of the input layer via strong, tailored dissipation processes.
  We extract both the generated data and the classification results by measuring observables of the steady state of the output qubits.
- Training-efficient density quantum machine learning (2024-05-30)
  Quantum machine learning requires powerful, flexible and efficiently trainable models.
  We present density quantum neural networks, a learning model that incorporates randomisation over a set of trainable unitaries.
- Enhancing the expressivity of quantum neural networks with residual connections (2024-01-29)
  We propose a quantum circuit-based algorithm to implement quantum residual neural networks (QResNets).
  Our work lays the foundation for a complete quantum implementation of classical residual neural networks.
- A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree Spectral Bias of Neural Networks (2023-05-16)
  Despite the capacity of neural networks to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards "simpler" functions.
  We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.
  We propose a new scalable functional regularization scheme that helps the neural network learn higher-degree frequencies.
- Quantum HyperNetworks: Training Binary Neural Networks in Quantum Superposition (2023-01-19)
  We introduce quantum hypernetworks as a mechanism to train binary neural networks on quantum computers.
  We show that our approach effectively finds optimal parameters, hyperparameters and architectural choices with high probability on classification problems.
  Our unified approach provides immense scope for other applications in machine learning.
- A Classical-Quantum Convolutional Neural Network for Detecting Pneumonia from Chest Radiographs (2022-02-19)
  We show how a variational quantum circuit can be integrated into a classical neural network for the problem of detecting pneumonia from chest radiographs.
  We train both networks on an image dataset containing chest radiographs and benchmark their performance.
  We show that the hybrid network outperforms the classical network on different performance measures, and that these improvements are statistically significant.
- Data-driven emergence of convolutional structure in neural networks (2022-02-01)
  We show how fully connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
  By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
- The Separation Capacity of Random Neural Networks (2021-07-31)
  We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
  We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
- A quantum algorithm for training wide and deep classical neural networks (2021-07-19)
  We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
  We numerically demonstrate that the MNIST image dataset satisfies such conditions.
  We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
- LocalDrop: A Hybrid Regularization for Deep Neural Networks (2021-03-01)
  We propose a new approach to the regularization of neural networks by the local Rademacher complexity, called LocalDrop.
  A new regularization function for both fully connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound on the local Rademacher complexity.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.