Towards Robust Neural Image Compression: Adversarial Attack and Model Finetuning
- URL: http://arxiv.org/abs/2112.08691v3
- Date: Thu, 8 Jun 2023 09:04:56 GMT
- Title: Towards Robust Neural Image Compression: Adversarial Attack and Model Finetuning
- Authors: Tong Chen and Zhan Ma
- Abstract summary: Deep neural network-based image compression has been extensively studied.
We propose to examine the robustness of prevailing learned image compression models by injecting negligible adversarial perturbation into the original source image.
A variety of defense strategies, including geometric self-ensemble-based pre-processing and adversarial training, are investigated against the adversarial attack to improve the model's robustness.
- Score: 30.36695754075178
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural network-based image compression has been extensively studied.
However, the model robustness which is crucial to practical application is
largely overlooked. We propose to examine the robustness of prevailing learned
image compression models by injecting negligible adversarial perturbation into
the original source image. Severe distortion in decoded reconstruction reveals
the general vulnerability in existing methods regardless of their settings
(e.g., network architecture, loss function, quality scale). A variety of
defense strategies, including geometric self-ensemble-based pre-processing and
adversarial training, are investigated against the adversarial attack to
improve the model's robustness. Later, the defense efficiency is further
exemplified in real-life image recompression case studies. Overall, our
methodology is simple, effective, and generalizable, making it attractive for
developing robust learned image compression solutions. All materials are made
publicly accessible at https://njuvision.github.io/RobustNIC for reproducible
research.
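The attack itself is conceptually simple and can be sketched as a PGD-style perturbation search that maximizes the codec's reconstruction error while keeping the perturbation inside a small L-infinity ball. The sketch below assumes a hypothetical differentiable `codec` module mapping an image batch in [0, 1] to its decoded reconstruction; it illustrates the general idea rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def attack_codec(codec, x, eps=2/255, alpha=0.5/255, steps=50):
    """PGD-style attack: search for a small perturbation (||delta||_inf <= eps)
    that maximizes the distortion of the codec's decoded reconstruction.

    `codec` is assumed to be a differentiable module mapping an image batch
    in [0, 1] to its reconstruction (a hypothetical interface for this sketch).
    """
    x = x.clone().detach()
    delta = torch.zeros_like(x, requires_grad=True)

    for _ in range(steps):
        x_adv = (x + delta).clamp(0.0, 1.0)
        x_rec = codec(x_adv)
        # Gradient ascent on the reconstruction error w.r.t. the clean source.
        loss = F.mse_loss(x_rec, x)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # signed-gradient step
            delta.clamp_(-eps, eps)              # stay inside the epsilon ball
        delta.grad.zero_()

    return (x + delta).clamp(0.0, 1.0).detach()
```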
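Geometric self-ensemble pre-processing, one of the defenses studied, can be sketched in a similarly compact way: randomly apply an invertible flip or 90-degree rotation before encoding and undo it after decoding, so a perturbation crafted against the untransformed input is less likely to line up with the codec's learned sensitivities. The `codec` interface is again a placeholder assumption.

```python
import random
import torch

# Invertible flips and 90-degree rotations (forward, inverse) on NCHW tensors.
_TRANSFORMS = [
    (lambda t: t,                                lambda t: t),
    (lambda t: torch.flip(t, dims=[-1]),         lambda t: torch.flip(t, dims=[-1])),
    (lambda t: torch.flip(t, dims=[-2]),         lambda t: torch.flip(t, dims=[-2])),
    (lambda t: torch.rot90(t, 1, dims=[-2, -1]), lambda t: torch.rot90(t, -1, dims=[-2, -1])),
    (lambda t: torch.rot90(t, 2, dims=[-2, -1]), lambda t: torch.rot90(t, -2, dims=[-2, -1])),
    (lambda t: torch.rot90(t, 3, dims=[-2, -1]), lambda t: torch.rot90(t, -3, dims=[-2, -1])),
]

def compress_with_self_ensemble(codec, x):
    """Apply a random geometric transform before compression and invert it
    after decoding. `codec` maps an image batch to its reconstruction
    (hypothetical interface used only for this sketch)."""
    fwd, inv = random.choice(_TRANSFORMS)
    x_rec = codec(fwd(x))   # encode/decode the transformed image
    return inv(x_rec)       # map the reconstruction back to the original orientation
```

At inference time one could also average reconstructions over several transforms instead of sampling a single one, trading extra encoder/decoder passes for additional robustness.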
Related papers
- A Training-Free Defense Framework for Robust Learned Image Compression [48.41990144764295]
We study the robustness of learned image compression models against adversarial attacks.
We present a training-free defense technique based on simple image transform functions.
arXiv Detail & Related papers (2024-01-22T12:50:21Z)
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt image classification systems, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- Machine Perception-Driven Image Compression: A Layered Generative Approach [32.23554195427311]
A layered generative image compression model is proposed to achieve high human-vision-oriented reconstruction quality.
A task-agnostic, learning-based compression model is proposed that effectively supports various compressed-domain analytical tasks.
A joint optimization schedule is adopted to find the best balance among compression ratio, reconstructed image quality, and downstream perception performance.
arXiv Detail & Related papers (2023-04-14T02:12:38Z)
- Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger [106.10954454667757]
We present a novel backdoor attack with multiple triggers against learned image compression models.
Motivated by the widely used discrete cosine transform (DCT) in existing compression systems and standards, we propose a frequency-based trigger injection model.
arXiv Detail & Related papers (2023-02-28T15:39:31Z)
- Estimating the Resize Parameter in End-to-end Learned Image Compression [50.20567320015102]
We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models.
Our results show that our new resizing parameter estimation framework can provide Bjontegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines.
arXiv Detail & Related papers (2022-04-26T01:35:02Z)
- The Devil Is in the Details: Window-based Attention for Image Compression [58.1577742463617]
Most existing learned image compression models are based on Convolutional Neural Networks (CNNs).
In this paper, we study the effects of multiple kinds of attention mechanisms for local feature learning, and then introduce a more straightforward yet effective window-based local attention block.
The proposed window-based attention is very flexible and can work as a plug-and-play component to enhance CNN and Transformer models (a rough sketch of the idea appears after this list).
arXiv Detail & Related papers (2022-03-16T07:55:49Z)
- Applying Tensor Decomposition to image for Robustness against Adversarial Attack [3.347059384111439]
Adversarial examples can easily fool deep learning models by adding small perturbations.
In this paper, we suggest applying tensor decomposition to defend the model against adversarial examples.
arXiv Detail & Related papers (2020-02-28T18:30:22Z)
- Learning End-to-End Lossy Image Compression: A Benchmark [90.35363142246806]
We first conduct a comprehensive literature survey of learned image compression methods.
We describe milestones in cutting-edge learned image-compression methods, review a broad range of existing works, and provide insights into their historical development routes.
By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, we achieve improved rate-distortion performance.
arXiv Detail & Related papers (2020-02-10T13:13:43Z)
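As a rough illustration of the window-based local attention idea summarized in the "The Devil Is in the Details" entry above, the sketch below partitions a feature map into non-overlapping windows and runs standard multi-head self-attention inside each window. The class name, window size, and the use of `nn.MultiheadAttention` are choices made for this sketch, not the paper's exact block.

```python
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Multi-head self-attention applied independently inside non-overlapping
    windows of a feature map -- a sketch of window-based local attention."""

    def __init__(self, channels: int, window_size: int = 8, num_heads: int = 4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        s = self.window_size
        assert h % s == 0 and w % s == 0, "H and W must be multiples of the window size"
        # Partition the feature map into (b * num_windows, s*s, c) token sequences.
        x = x.view(b, c, h // s, s, w // s, s)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, s * s, c)
        out, _ = self.attn(x, x, x)          # self-attention within each window
        # Merge the windows back into the original feature-map layout.
        out = out.reshape(b, h // s, w // s, s, s, c)
        out = out.permute(0, 5, 1, 3, 2, 4).reshape(b, c, h, w)
        return out
```

A block of this kind can be inserted between convolutional stages of a learned codec's encoder or decoder, which is what makes it attractive as a plug-and-play component.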