Anomaly detection with variational quantum generative adversarial
networks
- URL: http://arxiv.org/abs/2010.10492v2
- Date: Wed, 21 Jul 2021 08:45:53 GMT
- Title: Anomaly detection with variational quantum generative adversarial
networks
- Authors: Daniel Herr, Benjamin Obert, Matthias Rosenkranz
- Abstract summary: Generative adversarial networks (GANs) are a machine learning framework comprising a generative model for sampling from a target distribution and a discriminative model for evaluating the proximity of a sample to that distribution.
We introduce variational quantum-classical Wasserstein GANs to address these issues and embed this model in a classical machine learning framework for anomaly detection.
Our model replaces the generator of Wasserstein GANs with a hybrid quantum-classical neural net and leaves the classical discriminative model unchanged.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) are a machine learning framework
comprising a generative model for sampling from a target distribution and a
discriminative model for evaluating the proximity of a sample to the target
distribution. GANs exhibit strong performance in tasks such as imaging and anomaly detection.
However, they suffer from training instabilities, and sampling efficiency may
be limited by the classical sampling procedure. We introduce variational
quantum-classical Wasserstein GANs to address these issues and embed this model
in a classical machine learning framework for anomaly detection. Classical
Wasserstein GANs improve training stability by using a cost function better
suited for gradient descent. Our model replaces the generator of Wasserstein
GANs with a hybrid quantum-classical neural net and leaves the classical
discriminative model unchanged. This way, high-dimensional classical data only
enters the classical model and need not be prepared in a quantum circuit. We
demonstrate the effectiveness of this method on a credit card fraud dataset.
For this dataset our method shows performance on par with classical methods in
terms of the $F_1$ score. We analyze the influence of the circuit ansatz, layer
width and depth, neural net architecture, parameter initialization strategy, and
sampling noise on convergence and performance.
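
The central architectural idea described above is to keep the classical Wasserstein critic, which is trained to maximize $\mathbb{E}_{x\sim p_{\mathrm{data}}}[D(x)] - \mathbb{E}_{z\sim p_z}[D(G(z))]$ over (approximately) 1-Lipschitz critics $D$, and to replace only the generator $G$ with a hybrid quantum-classical network acting on a low-dimensional latent vector. The sketch below illustrates this structure; it is not the authors' implementation, and the library choice (PennyLane with PyTorch) as well as the qubit count, circuit depth, latent dimension, and data dimension are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): hybrid quantum-classical WGAN
# generator with a purely classical critic. Assumes PennyLane + PyTorch;
# n_qubits, q_depth, latent_dim, and data_dim are illustrative placeholders.
import pennylane as qml
import torch
from torch import nn

n_qubits, q_depth = 4, 3      # width and depth of the variational circuit
latent_dim, data_dim = 4, 8   # latent prior and (classical) data dimensions

dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def circuit(inputs, weights):
    # Encode the classically pre-processed latent vector as rotation angles,
    # apply a generic entangling ansatz, and read out Pauli-Z expectations.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weight_shapes = {"weights": (q_depth, n_qubits, 3)}
quantum_layer = qml.qnn.TorchLayer(circuit, weight_shapes)

# Hybrid generator: classical layers sandwich the variational circuit.
generator = nn.Sequential(
    nn.Linear(latent_dim, n_qubits),
    quantum_layer,
    nn.Linear(n_qubits, data_dim),
)

# Classical Wasserstein critic; the high-dimensional data never enters a circuit.
critic = nn.Sequential(
    nn.Linear(data_dim, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1),
)

def critic_loss(real, z):
    # WGAN critic objective: maximize E[D(x)] - E[D(G(z))] (minimize the
    # negative); weight clipping or a gradient penalty would normally be
    # added to enforce the Lipschitz constraint.
    return -(critic(real).mean() - critic(generator(z)).mean())

def generator_loss(z):
    # Generator objective: maximize E[D(G(z))].
    return -critic(generator(z)).mean()

# Example forward pass with a batch of latent samples.
z = torch.randn(16, latent_dim)
fake = generator(z)           # shape: (16, data_dim)
```

In this layout the variational circuit only ever sees the low-dimensional latent representation, while the high-dimensional data (e.g. transaction features) enters the purely classical critic, matching the abstract's point that classical data need not be prepared in a quantum circuit. How the trained model is then used to score anomalies is part of the surrounding classical framework described in the paper.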
Related papers
- Model-Based Qubit Noise Spectroscopy [0.0]
We derive model-based QNS approaches using inspiration from classical signal processing.
We show, through both simulation and experimental data, how these model-based QNS approaches maintain the statistical and computational benefits of their classical counterparts.
arXiv Detail & Related papers (2024-05-20T09:30:38Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - A Comparative Analysis of Adversarial Robustness for Quantum and Classical Machine Learning Models [0.0]
We show how to investigate the similarities and differences in adversarial robustness of classical and quantum models.
Our findings show that a classical approximation of QML circuits can be seen as a "middle ground" on the quantum-classical boundary.
arXiv Detail & Related papers (2024-04-24T19:20:15Z) - Unsupervised textile defect detection using convolutional neural
networks [0.0]
We propose a novel motif-based approach for unsupervised textile anomaly detection.
It combines the benefits of traditional convolutional neural networks with those of an unsupervised learning paradigm.
We demonstrate the effectiveness of our approach on the Patterned Fabrics benchmark dataset.
arXiv Detail & Related papers (2023-11-30T22:08:06Z) - Domain Generalization Guided by Gradient Signal to Noise Ratio of
Parameters [69.24377241408851]
Overfitting to the source domain is a common issue in gradient-based training of deep neural networks.
We propose to base the selection on the gradient signal-to-noise ratio (GSNR) of the network's parameters.
arXiv Detail & Related papers (2023-10-11T10:21:34Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z) - Quantum Machine Learning with SQUID [64.53556573827525]
We present the Scaled QUantum IDentifier (SQUID), an open-source framework for exploring hybrid Quantum-Classical algorithms for classification problems.
We provide examples of using SQUID in a standard binary classification problem from the popular MNIST dataset.
arXiv Detail & Related papers (2021-04-30T21:34:11Z) - A Distributed Optimisation Framework Combining Natural Gradient with
Hessian-Free for Discriminative Sequence Training [16.83036203524611]
This paper presents a novel natural gradient and Hessian-free (NGHF) optimisation framework for neural network training.
It relies on the linear conjugate gradient (CG) algorithm to combine the natural gradient (NG) method with local curvature information from Hessian-free (HF) or other second-order methods.
Experiments are reported on the multi-genre broadcast data set for a range of different acoustic model types.
arXiv Detail & Related papers (2021-03-12T22:18:34Z) - Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.