Countering Adversarial Examples: Combining Input Transformation and Noisy Training
- URL: http://arxiv.org/abs/2106.13394v1
- Date: Fri, 25 Jun 2021 02:46:52 GMT
- Title: Countering Adversarial Examples: Combining Input Transformation and Noisy Training
- Authors: Cheng Zhang, Pan Gao
- Abstract summary: Adversarial examples pose a threat to security-sensitive image recognition tasks.
Traditional JPEG compression is insufficient to defend against those attacks and can cause an abrupt accuracy decline on benign images.
We modify the traditional JPEG compression algorithm to make it more favorable for NNs.
- Score: 15.561916630351947
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent studies have shown that neural network (NN) based image classifiers
are highly vulnerable to adversarial examples, which poses a threat to
security-sensitive image recognition tasks. Prior work has shown that JPEG
compression can combat the drop in classification accuracy on adversarial
examples to some extent. However, as the compression ratio increases, traditional
JPEG compression is insufficient to defend against those attacks and causes an
abrupt accuracy decline on benign images. In this paper, with the aim of
fully filtering out the adversarial perturbations, we first modify the
traditional JPEG compression algorithm to make it more favorable for NNs.
Specifically, based on an analysis of the frequency coefficients, we design an
NN-favored quantization table for compression. Treating compression as a
data augmentation strategy, we then combine our model-agnostic preprocessing with
noisy training: we fine-tune the pre-trained model on images
encoded at different compression levels, thus generating multiple classifiers.
Finally, since a lower (higher) compression ratio removes both perturbations
and original features slightly (aggressively), we use these trained
models as an ensemble, adopting their majority vote as the final prediction.
Experimental results show that our method improves
defense efficiency while maintaining the original accuracy.
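To make the pipeline concrete, the following is a minimal Python sketch of the three steps described in the abstract (JPEG re-encoding, per-level classifiers, majority vote), assuming Pillow for JPEG encoding and decoding. The paper's NN-favored quantization table and fine-tuned weights are not reproduced here, so standard JPEG quality levels and a generic classifiers mapping stand in as hypothetical placeholders.

import io
from collections import Counter
from PIL import Image

def jpeg_round_trip(img, quality):
    # Encode and decode img as JPEG at the given quality level;
    # a higher quality means a lower compression ratio.
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def jpeg_with_custom_qtables(img, qtables):
    # Pillow also accepts caller-supplied quantization tables
    # (lists of 64 integers); an NN-favored table derived from a
    # frequency-coefficient analysis would plug in here.
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", qtables=qtables)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def ensemble_predict(img, classifiers):
    # classifiers maps a JPEG quality level to a model fine-tuned on
    # images re-encoded at that level (the noisy-training step); each
    # model is a callable that returns a class label.
    votes = [model(jpeg_round_trip(img, q)) for q, model in classifiers.items()]
    # The majority vote over the per-level predictions is the final output.
    return Counter(votes).most_common(1)[0][0]

For example, with three fine-tuned copies of the pre-trained classifier one would call ensemble_predict(img, {30: model_q30, 60: model_q60, 90: model_q90}), where model_q30 (a hypothetical name) denotes the copy fine-tuned on images re-encoded at quality 30, and so on.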
Related papers
- A Training-Free Defense Framework for Robust Learned Image Compression [48.41990144764295]
We study the robustness of learned image compression models against adversarial attacks.
We present a training-free defense technique based on simple image transform functions.
arXiv Detail & Related papers (2024-01-22T12:50:21Z)
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt image classification systems, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger [106.10954454667757]
We present a novel backdoor attack with multiple triggers against learned image compression models.
Motivated by the widely used discrete cosine transform (DCT) in existing compression systems and standards, we propose a frequency-based trigger injection model.
arXiv Detail & Related papers (2023-02-28T15:39:31Z)
- Crowd Counting on Heavily Compressed Images with Curriculum Pre-Training [90.76576712433595]
Applying lossy compression to images before they are processed by deep neural networks can lead to significant accuracy degradation.
Inspired by the curriculum learning paradigm, we present a novel training approach called curriculum pre-training (CPT) for crowd counting on compressed images.
arXiv Detail & Related papers (2022-08-15T08:43:21Z)
- Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.
arXiv Detail & Related papers (2022-04-26T01:35:02Z)
- Post-Training Quantization for Cross-Platform Learned Image Compression [15.67527732099067]
Learned image compression has been shown to outperform conventional image coding techniques.
One of the most critical issues to be considered is non-deterministic calculation across platforms.
We propose to solve this problem by introducing well-developed post-training quantization.
arXiv Detail & Related papers (2022-02-15T15:41:12Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)