CorrGAN: Input Transformation Technique Against Natural Corruptions
- URL: http://arxiv.org/abs/2204.08623v1
- Date: Tue, 19 Apr 2022 02:56:46 GMT
- Title: CorrGAN: Input Transformation Technique Against Natural Corruptions
- Authors: Mirazul Haque, Christof J. Budnik, and Wei Yang
- Abstract summary: In this work, we propose the CorrGAN approach, which can generate a benign input when a corrupted input is provided.
In this framework, we train a Generative Adversarial Network (GAN) with a novel intermediate-output-based loss function.
The GAN denoises the corrupted input and generates a benign input.
- Score: 4.479638789566316
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Because of the increasing accuracy of Deep Neural Networks (DNNs) on
different tasks, many real-time systems utilize DNNs. These DNNs are
vulnerable to adversarial perturbations and corruptions. In particular, natural
corruptions such as fog, blur, and contrast can affect the predictions of a DNN in an
autonomous vehicle. These corruptions need to be detected in real time, and the
corrupted inputs need to be denoised so that they can be predicted
correctly. In this work, we propose the CorrGAN approach, which can generate a benign
input when a corrupted input is provided. In this framework, we train a
Generative Adversarial Network (GAN) with a novel intermediate-output-based loss
function. The GAN denoises the corrupted input and generates a benign input.
Through experimentation, we show that up to 75.2% of the corrupted,
misclassified inputs can be classified correctly by the DNN when using CorrGAN.
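The abstract's key ingredient is the intermediate-output-based loss: the restored image is pushed to match the clean image not only in pixel space but also at an intermediate layer of the target DNN. Below is a minimal sketch of what such a loss could look like; the toy generator and backbone architectures, the layer choice, and the weight `lam` are illustrative assumptions, since the paper's exact formulation is not reproduced here.

```python
import torch
import torch.nn as nn

# Toy stand-ins (assumptions): the real system would use a trained generator
# and the deployed classifier whose intermediate activations define the loss.
generator = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 3, 3, padding=1))
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 8, 3, padding=1))
for p in backbone.parameters():          # the classifier stays frozen
    p.requires_grad_(False)

def corrgan_loss(corrupted, clean, lam=1.0):
    """Pixel reconstruction term plus a feature-matching term on the
    classifier's intermediate activations (hedged sketch, not the exact loss)."""
    restored = generator(corrupted)
    pixel = nn.functional.l1_loss(restored, clean)
    feat = nn.functional.mse_loss(backbone(restored), backbone(clean))
    return pixel + lam * feat

corrupted = torch.rand(4, 3, 32, 32)       # e.g. a fog/blur-corrupted batch
clean = torch.rand(4, 3, 32, 32)           # the paired benign batch
corrgan_loss(corrupted, clean).backward()  # gradients flow into the generator
```

In the full framework, a term like this would be combined with the usual adversarial loss from the GAN's discriminator.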
Related papers
- DARDA: Domain-Aware Real-Time Dynamic Neural Network Adaptation [8.339630468077713]
Test Time Adaptation (TTA) has emerged as a practical solution to mitigate the performance degradation of Deep Neural Networks (DNNs) in the presence of corruption/noise affecting inputs.
We propose Domain-Aware Real-Time Dynamic Adaptation (DARDA) to address such issues.
arXiv Detail & Related papers (2024-09-15T14:49:30Z)
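For orientation, here is a minimal test-time-adaptation step in the style of entropy minimization (TENT); DARDA's domain-aware, real-time scheme is more involved, so treat this purely as an illustration of the TTA setting, with the tiny model and learning rate as placeholder assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
# TTA methods usually adapt only a small parameter subset; all params here for brevity.
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

def tta_step(batch):
    """One adaptation step: minimize prediction entropy on an unlabeled
    corrupted batch (TENT-style sketch, not DARDA's actual mechanism)."""
    logits = model(batch)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    opt.zero_grad()
    entropy.backward()
    opt.step()
    return entropy.item()

print(tta_step(torch.rand(8, 3, 32, 32)))
```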
- Model Copyright Protection in Buyer-seller Environment [35.2914055333853]
We propose a novel copyright protection scheme for a deep neural network (DNN) using an input-sensitive neural network (ISNN).
During the training phase, we add a specific perturbation to the clean images and mark them as legal inputs, while all other inputs are treated as illegal.
Experimental results demonstrate that the proposed scheme is effective, valid, and secure.
arXiv Detail & Related papers (2023-12-05T07:15:10Z)
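A rough sketch of the marking idea: legal inputs carry a secret additive perturbation that an input-sensitive network can then be trained to recognize. The trigger shape, magnitude, and clamping below are assumptions for illustration only, not the paper's construction.

```python
import torch

trigger = 0.05 * torch.randn(3, 32, 32)   # secret perturbation (assumed form)

def mark_legal(x):
    """Stamp the secret perturbation onto a batch of clean images; unmarked
    inputs would be treated as illegal at inference time (sketch only)."""
    return (x + trigger).clamp(0.0, 1.0)

clean = torch.rand(16, 3, 32, 32)
legal = mark_legal(clean)    # labeled 'legal' when training the ISNN
illegal = clean              # unmarked inputs are labeled 'illegal'
```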
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
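The counting problem is easiest to see on a finite toy domain. The sketch below brute-force counts grid inputs that violate a made-up safety property (output must stay at or below 0.5); the paper computes exact counts without exhaustive enumeration, so the network, property, and grid here are all illustrative assumptions.

```python
import itertools
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 4), nn.ReLU(), nn.Linear(4, 1))

def count_violations(grid_steps=10):
    """Exact violation count over a finite input grid -- a toy stand-in for
    #DNN-Verification, which counts over the full input space."""
    axis = torch.linspace(0.0, 1.0, grid_steps).tolist()
    unsafe = 0
    for x1, x2 in itertools.product(axis, axis):
        out = net(torch.tensor([x1, x2]))
        if out.item() > 0.5:        # safety property: output <= 0.5
            unsafe += 1
    return unsafe

print(count_violations(), "of", 10 * 10, "grid inputs violate the property")
```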
- Verification-Aided Deep Ensemble Selection [4.290931412096984]
Deep neural networks (DNNs) have become the technology of choice for realizing a variety of complex tasks.
Even an imperceptible perturbation to a correctly classified input can lead to misclassification by a DNN.
This paper devises a methodology for identifying ensemble compositions that are less prone to simultaneous errors.
arXiv Detail & Related papers (2022-02-08T14:36:29Z)
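The core quantity is the rate of simultaneous errors among ensemble members. Here is a toy sketch of selecting the pair of models that fail together least often, estimated from a synthetic error matrix rather than via formal verification as in the paper; the model count and error rates are made-up assumptions.

```python
import itertools
import numpy as np

# err[i, j] is True if model i misclassifies validation sample j (toy data).
rng = np.random.default_rng(0)
err = rng.random((5, 1000)) < 0.1

def best_pair():
    """Return the two models least prone to *simultaneous* errors -- the
    quantity targeted by verification-aided ensemble selection."""
    scored = {
        (i, j): np.mean(err[i] & err[j])
        for i, j in itertools.combinations(range(len(err)), 2)
    }
    return min(scored, key=scored.get), scored

pair, scores = best_pair()
print("least jointly-failing pair:", pair, "joint error rate:", scores[pair])
```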
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
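As a generic illustration of variational Bayes in a single layer (not the paper's BNN-DenseNet), here is a minimal mean-field Bayesian linear layer using the reparameterization trick; the initialization is arbitrary and the KL regularizer of a full variational objective is omitted for brevity.

```python
import torch
import torch.nn as nn

class BayesLinear(nn.Module):
    """Mean-field variational linear layer: weights are sampled from a
    learned Gaussian on every forward pass (reparameterization trick)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(d_out, d_in))
        self.log_sigma = nn.Parameter(torch.full((d_out, d_in), -3.0))
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        w = self.mu + self.log_sigma.exp() * torch.randn_like(self.mu)
        return nn.functional.linear(x, w, self.bias)

layer = BayesLinear(16, 4)
x = torch.rand(2, 16)
print(layer(x))    # stochastic: outputs differ across calls
print(layer(x))
```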
- Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged Fraudsters [78.53851936180348]
We introduce two types of camouflage based on recent empirical studies, i.e., feature camouflage and relation camouflage.
Existing GNNs have not addressed these two camouflages, which results in their poor performance in fraud detection problems.
We propose a new model named CAmouflage-REsistant GNN (CARE-GNN) to enhance the GNN aggregation process with three unique modules against camouflages.
arXiv Detail & Related papers (2020-08-19T22:33:12Z)
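One of the anti-camouflage ideas is to filter neighbors before aggregation. A toy sketch using cosine similarity and top-k selection follows; CARE-GNN itself learns the similarity measure and picks thresholds with reinforcement learning, so this is only a gesture at the mechanism, with the embeddings and k made up.

```python
import torch

def filtered_mean_aggregate(h, neighbors, node, top_k=2):
    """Aggregate only the top-k neighbors most similar to the center node --
    a sketch of similarity-aware neighbor selection (not CARE-GNN's modules)."""
    sims = torch.stack([
        torch.cosine_similarity(h[node], h[n], dim=0) for n in neighbors
    ])
    keep = torch.topk(sims, k=min(top_k, len(neighbors))).indices
    chosen = torch.stack([h[neighbors[i]] for i in keep.tolist()])
    return chosen.mean(dim=0)

h = torch.rand(5, 8)    # toy node embeddings
print(filtered_mean_aggregate(h, [1, 2, 3, 4], node=0))
```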
- CodNN -- Robust Neural Networks From Coded Classification [27.38642191854458]
Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution.
DNNs are highly sensitive to noise, whether adversarial or random.
This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving.
In our approach, either the data or the internal layers of the DNN are coded with error-correcting codes, and successful computation under noise is guaranteed.
arXiv Detail & Related papers (2020-04-22T17:07:15Z)
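To give a flavor of coded computation, the sketch below uses simple modular redundancy with a majority vote over noisy copies of a layer; CodNN employs proper error-correcting codes with formal guarantees, so the noise model, binarization, and voting scheme here are illustrative assumptions only.

```python
import torch

def majority_vote_forward(layer, x, copies=3, noise_std=0.1):
    """Run several noisy copies of a layer and majority-vote the binarized
    outputs -- a triple-modular-redundancy sketch, not CodNN's coding scheme."""
    outs = []
    for _ in range(copies):
        noisy = x + noise_std * torch.randn_like(x)   # crude hardware-noise model
        outs.append(torch.sign(layer(noisy)))         # binarized output
    return torch.sign(torch.stack(outs).sum(dim=0))   # majority vote

layer = torch.nn.Linear(8, 4)
print(majority_vote_forward(layer, torch.rand(2, 8)))
```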
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
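A simplified version of a gradient-norm detector: score each input by the norm of the loss gradient taken with respect to the input, then flag high-norm inputs. GraN itself works with parameter gradients of selected layers, so this input-gradient variant, the toy model, and the threshold are assumptions for illustration.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def gradient_norm_score(x):
    """Per-sample norm of the loss gradient w.r.t. the input, using the
    model's own prediction as the label (simplified GraN-style score)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=1)
    loss = nn.functional.cross_entropy(logits, pred)
    (grad,) = torch.autograd.grad(loss, x)
    return grad.flatten(1).norm(dim=1)

scores = gradient_norm_score(torch.rand(4, 3, 32, 32))
flagged = scores > scores.mean()   # real thresholds would be calibrated on a val set
print(scores, flagged)
```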
- Bayesian x-vector: Bayesian Neural Network based x-vector System for Speaker Verification [71.45033077934723]
We incorporate Bayesian neural networks (BNNs) into the deep neural network (DNN) x-vector speaker verification system.
With the weight uncertainty modeling provided by BNNs, we expect the system to generalize better on the evaluation data.
Results show that the system could benefit from BNNs by a relative EER decrease of 2.66% and 2.32% respectively for short- and long-utterance in-domain evaluations.
arXiv Detail & Related papers (2020-04-08T14:35:12Z)
- A Low-cost Fault Corrector for Deep Neural Networks through Range Restriction [1.8907108368038215]
Deep neural networks (DNNs) in safety-critical domains have engendered serious reliability concerns.
This work proposes Ranger, a low-cost fault corrector, which directly rectifies the faulty output due to transient faults without re-computation.
arXiv Detail & Related papers (2020-03-30T23:53:55Z)
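Range restriction is straightforward to sketch with a forward hook that clamps a layer's activations to bounds profiled on clean data, so a transient fault that inflates an activation is silently truncated. The bounds and toy model below are made up for illustration; Ranger derives its bounds from profiling the deployed network.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

def add_range_restriction(module, low, high):
    """Clamp a layer's output to a profiled value range (Ranger-style sketch)."""
    def hook(_mod, _inp, out):
        return out.clamp(low, high)   # returned value replaces the output
    return module.register_forward_hook(hook)

handle = add_range_restriction(model[1], low=0.0, high=4.0)  # assumed bounds
print(model(torch.rand(2, 8)))   # activations above 4.0 are rectified in-flight
handle.remove()                  # the hook can be detached when not needed
```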
This list is automatically generated from the titles and abstracts of the papers on this site.