Human Aligned Compression for Robust Models
- URL: http://arxiv.org/abs/2504.12255v1
- Date: Wed, 16 Apr 2025 17:05:58 GMT
- Title: Human Aligned Compression for Robust Models
- Authors: Samuel Räber, Andreas Plesner, Till Aczel, Roger Wattenhofer
- Abstract summary: Adversarial attacks on image models threaten system robustness by introducing imperceptible perturbations that cause incorrect predictions. We investigate human-aligned learned lossy compression as a defense mechanism, comparing two learned models (HiFiC and ELIC) against traditional JPEG across various quality levels.
- Score: 18.95453617434051
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial attacks on image models threaten system robustness by introducing imperceptible perturbations that cause incorrect predictions. We investigate human-aligned learned lossy compression as a defense mechanism, comparing two learned models (HiFiC and ELIC) against traditional JPEG across various quality levels. Our experiments on ImageNet subsets demonstrate that learned compression methods outperform JPEG, particularly for Vision Transformer architectures, by preserving semantically meaningful content while removing adversarial noise. Even in white-box settings where attackers can access the defense, these methods maintain substantial effectiveness. We also show that sequential compression--applying rounds of compression/decompression--significantly enhances defense efficacy while maintaining classification performance. Our findings reveal that human-aligned compression provides an effective, computationally efficient defense that protects the image features most relevant to human and machine understanding. It offers a practical approach to improving model robustness against adversarial threats.
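The sequential compression defense described in the abstract can be illustrated with a minimal sketch. The paper uses learned codecs (HiFiC, ELIC); as a stand-in, the sketch below applies repeated JPEG compress/decompress rounds with Pillow, since the learned models are not generally available as a one-line API. The function name `sequential_compression_defense` and the default quality/round values are illustrative assumptions, not the authors' implementation.

```python
from io import BytesIO

from PIL import Image  # Pillow; stand-in for the learned codecs used in the paper


def sequential_compression_defense(image: Image.Image,
                                   quality: int = 75,
                                   rounds: int = 3) -> Image.Image:
    """Purify an input image by repeated lossy compression/decompression.

    Each round re-encodes the image, discarding high-frequency detail
    that adversarial perturbations tend to occupy, while keeping the
    semantically meaningful content the classifier relies on.
    """
    for _ in range(rounds):
        buffer = BytesIO()
        image.convert("RGB").save(buffer, format="JPEG", quality=quality)
        buffer.seek(0)
        image = Image.open(buffer)
        image.load()  # force decode before the buffer is reused
    return image
```

In use, the purified image would be passed to the classifier in place of the raw input, e.g. `logits = model(preprocess(sequential_compression_defense(img)))`. With a learned human-aligned codec, the compress/decompress call inside the loop would be replaced by the codec's encode/decode pair.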
Related papers
- Pathology Image Compression with Pre-trained Autoencoders [52.208181380986524]
Whole Slide Images in digital histopathology pose significant storage, transmission, and computational efficiency challenges. Standard compression methods, such as JPEG, reduce file sizes but fail to preserve fine-grained phenotypic details critical for downstream tasks. In this work, we repurpose autoencoders (AEs) designed for Latent Diffusion Models as an efficient learned compression framework for pathology images.
arXiv Detail & Related papers (2025-03-14T17:01:17Z) - Compression-Aware One-Step Diffusion Model for JPEG Artifact Removal [56.307484956135355]
CODiff is a compression-aware one-step diffusion model for JPEG artifact removal.
We propose a dual learning strategy that combines explicit and implicit learning.
Results demonstrate that CODiff surpasses recent leading methods in both quantitative and visual quality metrics.
arXiv Detail & Related papers (2025-02-14T02:46:27Z) - A Training-Free Defense Framework for Robust Learned Image Compression [48.41990144764295]
We study the robustness of learned image compression models against adversarial attacks.
We present a training-free defense technique based on simple image transform functions.
arXiv Detail & Related papers (2024-01-22T12:50:21Z) - Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt the image classification system, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z) - Backdoor Attacks Against Deep Image Compression via Adaptive Frequency Trigger [106.10954454667757]
We present a novel backdoor attack with multiple triggers against learned image compression models.
Motivated by the widely used discrete cosine transform (DCT) in existing compression systems and standards, we propose a frequency-based trigger injection model.
arXiv Detail & Related papers (2023-02-28T15:39:31Z) - Cross Modal Compression: Towards Human-comprehensible Semantic Compression [73.89616626853913]
Cross modal compression is a semantic compression framework for visual data.
We show that our proposed CMC can achieve encouraging reconstructed results with an ultrahigh compression ratio.
arXiv Detail & Related papers (2022-09-06T15:31:11Z) - Denial-of-Service Attacks on Learned Image Compression [18.84898685880023]
We investigate the robustness of image compression systems where imperceptible perturbations of input images can precipitate a significant increase in the perturbation of their compressed latent.
We propose a novel model which incorporates attention modules and a basic factorized entropy model, resulting in a promising trade-off between the PSNR/bpp ratio and robustness to adversarial attacks.
arXiv Detail & Related papers (2022-05-26T09:46:07Z) - Towards Robust Neural Image Compression: Adversarial Attack and Model Finetuning [30.36695754075178]
Deep neural network-based image compression has been extensively studied.
We propose to examine the robustness of prevailing learned image compression models by injecting negligible adversarial perturbation into the original source image.
A variety of defense strategies including geometric self-ensemble based pre-processing, and adversarial training, are investigated against the adversarial attack to improve the model's robustness.
arXiv Detail & Related papers (2021-12-16T08:28:26Z) - Countering Adversarial Examples: Combining Input Transformation and Noisy Training [15.561916630351947]
Adversarial examples pose a threat to security-sensitive image recognition tasks.
Traditional JPEG compression is insufficient to defend against those attacks and can cause an abrupt accuracy decline on benign images.
We modify the traditional JPEG compression algorithm to make it more favorable for neural networks.
arXiv Detail & Related papers (2021-06-25T02:46:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.