FACM: Intermediate Layer Still Retain Effective Features against
Adversarial Examples
- URL: http://arxiv.org/abs/2206.00924v2
- Date: Sun, 2 Apr 2023 09:59:34 GMT
- Title: FACM: Intermediate Layer Still Retain Effective Features against
Adversarial Examples
- Authors: Xiangyuan Yang, Jie Lin, Hanlin Zhang, Xinyu Yang, Peng Zhao
- Abstract summary: In strong adversarial attacks against deep neural networks (DNN), the generated adversarial example will mislead the DNN-implemented classifier.
We propose a Feature Analysis (FA) correction module, a Conditional Matching Prediction Distribution (CMPD) correction module, and a decision module.
Our model can be obtained by fine-tuning and can be combined with other model-specific defenses.
- Score: 18.880398046794138
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In strong adversarial attacks against deep neural networks
(DNNs), the generated adversarial example misleads the DNN-implemented
classifier by destroying the output features of the last layer. To enhance
the robustness of the classifier, our paper proposes a Feature Analysis and
Conditional Matching prediction distribution (FACM) model that utilizes the
features of intermediate layers to correct the classification. Specifically,
we first prove that the intermediate layers of the classifier can still
retain effective features for the original category, which we define as the
correction property. Accordingly, we propose the FACM model, consisting of a
Feature Analysis (FA) correction module, a Conditional Matching Prediction
Distribution (CMPD) correction module, and a decision module. The FA
correction module consists of fully connected layers that take the outputs
of the intermediate layers as input to correct the classification of the
classifier. The CMPD correction module is a conditional auto-encoder that
not only uses the outputs of intermediate layers as the condition to
accelerate convergence but also mitigates the negative effect of
adversarial-example training via a Kullback-Leibler loss that matches
prediction distributions. Through the empirically verified diversity
property, the correction modules can be implemented synergistically to
reduce the adversarial subspace. Hence, the decision module is proposed to
integrate the correction modules and enhance the robustness of the DNN
classifier. Notably, our model can be obtained by fine-tuning and can be
combined with other model-specific defenses.
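The abstract's three components can be illustrated with a toy NumPy sketch
(all shapes, weights, and function names here are assumptions for
illustration, not the authors' released implementation): the FA module as a
small fully connected network over an intermediate feature, the CMPD
training signal as a KL loss matching prediction distributions, and the
decision module as an average over the predictive distributions of the
classifier head and the correction modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def fa_correction(feat, W1, b1, W2, b2):
    """FA correction module (sketch): fully connected layers that map an
    intermediate-layer feature vector to class logits."""
    h = np.maximum(0.0, feat @ W1 + b1)  # hidden layer with ReLU
    return h @ W2 + b2                   # class logits

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def kl_matching_loss(p_clean, p_adv):
    """Kullback-Leibler loss (sketch): matches the prediction distribution
    on an adversarial example to that of its clean counterpart."""
    eps = 1e-12
    return float(np.sum(p_clean * (np.log(p_clean + eps) - np.log(p_adv + eps))))

def decision_module(logit_list):
    """Decision module (sketch): average the predictive distributions of the
    classifier head and the correction modules, then take the argmax."""
    probs = np.mean([softmax(l) for l in logit_list], axis=0)
    return int(np.argmax(probs))

# Toy usage: one intermediate feature vector, 10 classes (assumed sizes).
feat_dim, hidden, num_classes = 64, 32, 10
W1 = rng.normal(size=(feat_dim, hidden)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, num_classes)) * 0.1
b2 = np.zeros(num_classes)

feat = rng.normal(size=feat_dim)
head_logits = rng.normal(size=num_classes)  # stand-in for the last-layer head
fa_logits = fa_correction(feat, W1, b1, W2, b2)
pred = decision_module([head_logits, fa_logits])
```

In this sketch the decision module simply averages distributions; the paper's
actual decision rule integrates the correction modules using the diversity
property discussed in the abstract.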
Related papers
- Rectified Diffusion Guidance for Conditional Generation [62.00207951161297]
We revisit the theory behind CFG and rigorously confirm that the improper configuration of the combination coefficients (i.e., the widely used summing-to-one version) brings about expectation shift of the generative distribution.
We propose ReCFG with a relaxation on the guidance coefficients such that denoising with ReCFG strictly aligns with the diffusion theory.
That way the rectified coefficients can be readily pre-computed via traversing the observed data, leaving the sampling speed barely affected.
arXiv Detail & Related papers (2024-10-24T13:41:32Z)
- Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction [88.65168366064061]
We introduce Discrete Denoising Posterior Prediction (DDPP), a novel framework that casts the task of steering pre-trained MDMs as a problem of probabilistic inference.
Our framework leads to a family of three novel objectives that are all simulation-free, and thus scalable.
We substantiate our designs via wet-lab validation, where we observe transient expression of reward-optimized protein sequences.
arXiv Detail & Related papers (2024-10-10T17:18:30Z)
- Memory-guided Network with Uncertainty-based Feature Augmentation for Few-shot Semantic Segmentation [12.653336728447654]
We propose a class-shared memory (CSM) module consisting of a set of learnable memory vectors.
These memory vectors learn elemental object patterns from base classes during training whilst re-encoding query features during both training and inference.
We integrate CSM and UFA into representative FSS works, with experimental results on the widely used PASCAL-5^i and COCO-20^i datasets.
arXiv Detail & Related papers (2024-06-01T19:53:25Z)
- Adaptive Computation Modules: Granular Conditional Computation For Efficient Inference [12.371152982808914]
We introduce the Adaptive Computation Module (ACM), a generic module that dynamically adapts its computational load to match the estimated difficulty of the input on a per-token basis.
An ACM consists of a sequence of learners that progressively refine the output of their preceding counterparts. An additional gating mechanism determines the optimal number of learners to execute for each token.
Our evaluation of transformer models in computer vision and speech recognition demonstrates that substituting layers with ACMs significantly reduces inference costs without degrading the downstream accuracy for a wide interval of user-defined budgets.
arXiv Detail & Related papers (2023-12-15T20:39:43Z)
- Robust Class-Conditional Distribution Alignment for Partial Domain Adaptation [0.7892577704654171]
Unwanted samples from private source categories in the learning objective of a partial domain adaptation setup can lead to negative transfer and reduce classification performance.
Existing methods, such as re-weighting or aggregating target predictions, are vulnerable to this issue.
Our proposed approach seeks to overcome these limitations by delving deeper than just the first-order moments to derive distinct and compact categorical distributions.
arXiv Detail & Related papers (2023-10-18T15:49:46Z)
- Semi-Supervised Domain Adaptation with Auto-Encoder via Simultaneous Learning [18.601226898819476]
We present a new semi-supervised domain adaptation framework that combines a novel auto-encoder-based domain adaptation model with a simultaneous learning scheme.
Our framework holds strong distribution matching property by training both source and target auto-encoders.
arXiv Detail & Related papers (2022-10-18T00:10:11Z)
- Meta-Causal Feature Learning for Out-of-Distribution Generalization [71.38239243414091]
This paper presents a balanced meta-causal learner (BMCL), which includes a balanced task generation module (BTG) and a meta-causal feature learning module (MCFL).
BMCL effectively identifies the class-invariant visual regions for classification and may serve as a general framework to improve the performance of the state-of-the-art methods.
arXiv Detail & Related papers (2022-08-22T09:07:02Z)
- Semi-supervised Domain Adaptive Structure Learning [72.01544419893628]
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product Belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs)
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
- Contradictory Structure Learning for Semi-supervised Domain Adaptation [67.89665267469053]
Current adversarial adaptation methods attempt to align the cross-domain features.
Two challenges remain unsolved: 1) the conditional distribution mismatch and 2) the bias of the decision boundary towards the source domain.
We propose a novel framework for semi-supervised domain adaptation by unifying the learning of opposite structures.
arXiv Detail & Related papers (2020-02-06T22:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.