Chaotic Variational Auto Encoder-based Adversarial Machine Learning
- URL: http://arxiv.org/abs/2302.12959v1
- Date: Sat, 25 Feb 2023 02:06:15 GMT
- Title: Chaotic Variational Auto Encoder-based Adversarial Machine Learning
- Authors: Pavan Venkata Sainadh Reddy, Yelleti Vivek, Gopi Pranay, Vadlamani
Ravi
- Abstract summary: We propose a novel attack mechanism based on adversarial sample generation by a Variational Auto Encoder (VAE).
We performed both Evasion and Data-Poison attacks on Logistic Regression (LR) and Decision Tree (DT) models.
Results indicated that VAE-Deep-WNN outperformed the rest in the majority of the datasets and models.
- Score: 3.773653335175799
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine Learning (ML) has become pervasive in almost every field.
This makes ML models a target of fraudsters, who mount various adversarial
attacks that hinder model performance. Evasion and Data-Poison-based attacks
are especially prevalent in domains such as finance and healthcare. This
motivated us to propose a novel, computationally inexpensive attack mechanism
based on adversarial sample generation by a Variational Auto Encoder (VAE).
The Wavelet Neural Network (WNN) is known to be computationally efficient in
image and audio processing, speech recognition, and time-series forecasting.
This paper proposes the VAE-Deep-Wavelet Neural Network (VAE-Deep-WNN), in
which both the encoder and decoder employ WNNs. Further, we propose chaotic
variants of the VAE with a Multi-Layer Perceptron (MLP) and with the Deep-WNN,
named C-VAE-MLP and C-VAE-Deep-WNN, respectively; here, a logistic map
generates the noise injected into the latent space. We performed VAE-based
adversarial sample generation and applied it to problems in the finance and
cybersecurity domains, such as loan default, credit card fraud, and churn
modelling. We mounted both Evasion and Data-Poison attacks on Logistic
Regression (LR) and Decision Tree (DT) models. The results indicate that
VAE-Deep-WNN outperformed the rest on the majority of datasets and models,
while its chaotic variant C-VAE-Deep-WNN performed almost on par with
VAE-Deep-WNN on most datasets.
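The abstract's central architectural claim is that both the encoder and decoder of VAE-Deep-WNN are built from wavelet neurons. Below is a minimal sketch of one wavelet layer using the standard WNN formulation y_j = psi((w_j . x - b_j) / a_j); the choice of a Morlet-style mother wavelet and all parameter values are assumptions for illustration, since the abstract does not specify them.

```python
import numpy as np

def morlet(t):
    # Morlet-style mother wavelet; an illustrative assumption, as the
    # abstract does not name the wavelet used in VAE-Deep-WNN.
    return np.cos(1.75 * t) * np.exp(-t ** 2 / 2.0)

class WaveletLayer:
    """One hidden layer of wavelet neurons: each unit applies a dilated and
    translated mother wavelet to a linear projection of the input."""
    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(in_dim, out_dim))  # weights w_j
        self.b = rng.normal(size=out_dim)                       # translations b_j
        self.a = np.ones(out_dim)                               # dilations a_j

    def forward(self, x):
        return morlet((x @ self.W - self.b) / self.a)

# Stacking such layers in place of the usual MLP layers inside the VAE
# encoder and decoder is the architectural change the model name refers to.
layer = WaveletLayer(in_dim=20, out_dim=8)
print(layer.forward(np.random.default_rng(1).normal(size=(4, 20))).shape)  # (4, 8)
```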
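The chaotic variants replace stochastic latent-space noise with deterministic logistic-map noise, and the generated samples drive both attack modes: evasion (perturb test inputs) and data poisoning (contaminate training data). The sketch below illustrates both on a Logistic Regression victim with synthetic tabular data; as a simplification, the chaotic noise perturbs the features directly rather than a VAE latent code, and the dataset, perturbation strength, and label-flipping poisoning rule are all assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def logistic_map_noise(n, dim, r=4.0, x0=0.3):
    """Chaotic noise from the logistic map x_{t+1} = r * x_t * (1 - x_t);
    r = 4.0 puts the map in its fully chaotic regime."""
    out = np.empty(n * dim)
    x = x0
    for i in range(n * dim):
        x = r * x * (1.0 - x)
        out[i] = x
    return (out - 0.5).reshape(n, dim)  # centre around zero

# Synthetic tabular data standing in for, e.g., a loan-default dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean test accuracy:", clf.score(X_test, y_test))

# Evasion attack (sketch): perturb test inputs with chaotic noise. In the
# paper, the noise perturbs the VAE latent code before decoding.
eps = 0.5  # perturbation strength; an assumed value
X_adv = X_test + eps * logistic_map_noise(*X_test.shape)
print("accuracy under evasion:", clf.score(X_adv, y_test))

# Data-poison attack (sketch): inject chaotically perturbed, label-flipped
# copies into the training set and retrain the victim model.
n_poison = int(0.1 * len(X_train))  # poison 10% of the training set
X_poison = X_train[:n_poison] + eps * logistic_map_noise(n_poison, X_train.shape[1])
y_poison = 1 - y_train[:n_poison]   # flipped labels; an assumed poisoning rule
clf_poisoned = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_train, X_poison]), np.concatenate([y_train, y_poison]))
print("accuracy after poisoning:", clf_poisoned.score(X_test, y_test))
```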
Related papers
- VQUNet: Vector Quantization U-Net for Defending Adversarial Attacks by Regularizing Unwanted Noise [0.5755004576310334]
We introduce a novel noise-reduction procedure, Vector Quantization U-Net (VQUNet), to reduce adversarial noise and reconstruct data with high fidelity.
VQUNet features a discrete latent representation learning through a multi-scale hierarchical structure for both noise reduction and data reconstruction.
It outperforms other state-of-the-art noise-reduction-based defense methods under various adversarial attacks for both Fashion-MNIST and CIFAR10 datasets.
arXiv Detail & Related papers (2024-06-05T10:10:03Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Hardware-aware training for large-scale and diverse deep learning
inference workloads using in-memory computing-based accelerators [7.152059921639833]
We show that many large-scale deep neural networks can be successfully retrained to show iso-accuracy on AIMC.
Our results suggest that AIMC nonidealities that add noise to the inputs or outputs, not the weights, have the largest impact on DNN accuracy.
arXiv Detail & Related papers (2023-02-16T18:25:06Z) - Robust and Lossless Fingerprinting of Deep Neural Networks via Pooled
Membership Inference [17.881686153284267]
Deep neural networks (DNNs) have already achieved great success in a lot of application areas and brought profound changes to our society.
How to protect the intellectual property (IP) of DNNs against infringement is one of the most important yet very challenging topics.
This paper proposes a novel technique called pooled membership inference (PMI) to protect the IP of DNN models.
arXiv Detail & Related papers (2022-09-09T04:06:29Z) - RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency
IoT systems [41.1371349978643]
We present an approach that targets the security of collaborative deep inference by re-thinking the distribution strategy.
We formulate this methodology as an optimization that establishes a trade-off between the latency of co-inference and the privacy level of the data.
arXiv Detail & Related papers (2022-08-27T14:50:00Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Defending Variational Autoencoders from Adversarial Attacks with MCMC [74.36233246536459]
Variational autoencoders (VAEs) are deep generative models used in various domains.
As previous work has shown, one can easily fool VAEs into producing unexpected latent representations and reconstructions for an input that has been only slightly modified visually.
Here, we examine several objective functions for constructing adversarial attacks, suggest metrics to assess model robustness, and propose a solution.
arXiv Detail & Related papers (2022-03-18T13:25:18Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - GraN: An Efficient Gradient-Norm Based Detector for Adversarial and
Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z) - Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects
of Discrete Input Encoding and Non-Linear Activations [9.092733355328251]
Spiking Neural Network (SNN) is a potential candidate for inherent robustness against adversarial attacks.
In this work, we demonstrate that adversarial accuracy of SNNs under gradient-based attacks is higher than their non-spiking counterparts.
arXiv Detail & Related papers (2020-03-23T17:20:24Z)