Keep It Real: Challenges in Attacking Compression-Based Adversarial Purification
- URL: http://arxiv.org/abs/2508.05489v1
- Date: Thu, 07 Aug 2025 15:24:26 GMT
- Title: Keep It Real: Challenges in Attacking Compression-Based Adversarial Purification
- Authors: Samuel Räber, Till Aczel, Andreas Plesner, Roger Wattenhofer
- Abstract summary: We identify a critical challenge for attackers: high realism in reconstructed images significantly increases attack difficulty. We demonstrate that compression models capable of producing realistic, high-fidelity reconstructions are substantially more resistant to our attacks. This work highlights a significant obstacle for future adversarial attacks and suggests that developing more effective techniques to overcome realism is an essential challenge for comprehensive security evaluation.
- Score: 18.95453617434051
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Previous work has suggested that preprocessing images through lossy compression can defend against adversarial perturbations, but comprehensive attack evaluations have been lacking. In this paper, we construct strong white-box and adaptive attacks against various compression models and identify a critical challenge for attackers: high realism in reconstructed images significantly increases attack difficulty. Through rigorous evaluation across multiple attack scenarios, we demonstrate that compression models capable of producing realistic, high-fidelity reconstructions are substantially more resistant to our attacks. In contrast, low-realism compression models can be broken. Our analysis reveals that this is not due to gradient masking. Rather, realistic reconstructions maintaining distributional alignment with natural images seem to offer inherent robustness. This work highlights a significant obstacle for future adversarial attacks and suggests that developing more effective techniques to overcome realism represents an essential challenge for comprehensive security evaluation.
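The white-box setting the abstract describes can be made concrete with a short sketch: an L-inf PGD attack whose gradient flows through the purification step, so the attacker optimizes against the full purify-then-classify pipeline. The `compressor` and `classifier` objects below are hypothetical placeholders for a differentiable learned codec and a downstream model; the paper's exact attack configuration is not reproduced here.

```python
# Minimal sketch of a white-box PGD attack that differentiates through a
# compression-based purifier. `compressor` and `classifier` are hypothetical
# stand-ins for a learned codec (image -> reconstruction) and a downstream
# classifier; the paper's actual attack setup may differ.
import torch
import torch.nn.functional as F

def pgd_through_codec(compressor, classifier, x, y,
                      eps=8 / 255, alpha=2 / 255, steps=40):
    """L-inf PGD whose gradient flows through the codec's reconstruction."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # The defense reconstructs the image before classification, so the
        # attack targets the composed purify-then-classify pipeline.
        logits = classifier(compressor(x_adv))
        loss = F.cross_entropy(logits, y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Since the abstract reports that the resistance of realistic codecs is not explained by gradient masking, a faithful gradient path like this is the natural starting point for an adaptive evaluation.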
Related papers
- Diffusion Attack: Leveraging Stable Diffusion for Naturalistic Image Attacking [6.761535322353205]
In Virtual Reality (VR), adversarial attacks remain a significant security threat.
Most deep learning-based methods for physical and digital adversarial attacks focus on enhancing attack performance.
We propose a framework that incorporates style transfer to craft adversarial inputs with natural styles, exhibiting minimal detectability and a maximally natural appearance.
arXiv Detail & Related papers (2024-03-21T18:49:20Z)
- Towards Robust Image Stitching: An Adaptive Resistance Learning against Compatible Attacks [66.98297584796391]
Image stitching seamlessly integrates images captured from varying perspectives into a single wide field-of-view image.
Given a pair of captured images, subtle perturbations and distortions that go unnoticed by the human visual system can disrupt correspondence matching.
This paper presents the first attempt to improve the robustness of image stitching against adversarial attacks.
arXiv Detail & Related papers (2024-02-25T02:36:33Z)
- A Training-Free Defense Framework for Robust Learned Image Compression [48.41990144764295]
We study the robustness of learned image compression models against adversarial attacks.
We present a training-free defense technique based on simple image transform functions (see the sketch below).
arXiv Detail & Related papers (2024-01-22T12:50:21Z)
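The training-free idea above can be illustrated with two simple, parameter-free transforms. The specific transform functions used in the paper may differ; bit-depth reduction and a random rescale stand in here purely as examples.

```python
# Hedged sketch of a training-free, transform-based defense in the spirit of
# the paper above; the actual transform functions it uses may differ.
import torch
import torch.nn.functional as F

def transform_defense(x, bits=5):
    """Apply simple, parameter-free transforms to an NCHW image batch in
    [0, 1], aiming to disrupt carefully tuned adversarial noise."""
    # Bit-depth reduction: quantize pixels to 2**bits levels.
    levels = 2 ** bits - 1
    x = torch.round(x * levels) / levels
    # Random rescale and resize back: a mild geometric perturbation.
    h, w = x.shape[-2:]
    scale = torch.empty(1).uniform_(0.9, 1.1).item()
    x = F.interpolate(x, scale_factor=scale, mode="bilinear",
                      align_corners=False)
    x = F.interpolate(x, size=(h, w), mode="bilinear", align_corners=False)
    return x.clamp(0, 1)
```

The transform is applied to the input before it reaches the compression model, so no retraining is required.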
- IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks [16.577595936609665]
We introduce a novel approach to counter adversarial attacks, namely, image resampling.
Image resampling transforms a discrete image into a new one, simulating scene recapture or re-rendering as specified by a geometric transformation (see the resampling sketch below).
We show that our method significantly enhances the adversarial robustness of diverse deep models against various attacks while maintaining high accuracy on clean images.
arXiv Detail & Related papers (2023-10-18T11:19:32Z)
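A bare-bones version of the resampling idea follows: re-render the image at slightly jittered sampling coordinates. IRAD learns an implicit representation to choose those coordinates; the random jitter below is only an illustrative stand-in for that learned component.

```python
# Illustrative sketch of an image-resampling defense; the random jitter
# replaces IRAD's learned, implicit-representation-driven coordinates.
import torch
import torch.nn.functional as F

def resample(x, jitter=0.01):
    """Resample an NCHW batch at perturbed grid locations."""
    n, _, h, w = x.shape
    ys = torch.linspace(-1, 1, h)
    xs = torch.linspace(-1, 1, w)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    grid = torch.stack((gx, gy), dim=-1).expand(n, h, w, 2)
    grid = grid + jitter * torch.randn_like(grid)  # perturb sample points
    return F.grid_sample(x, grid, mode="bilinear", align_corners=True)
```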
- CARNet: Collaborative Adversarial Resilience for Robust Underwater Image Enhancement and Perception [16.135354859458758]
We introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks. In this work, we first introduce an invertible network with strong perceptual abilities to isolate attacks from underwater images. We also propose a bilevel attack optimization strategy to heighten the robustness of the network against different types of attacks.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Reconstruction Distortion of Learned Image Compression with Imperceptible Perturbations [69.25683256447044]
We introduce an attack approach designed to effectively degrade the reconstruction quality of Learned Image Compression (LIC).
We generate adversarial examples by introducing a Frobenius norm-based loss function to maximize the discrepancy between original images and reconstructed adversarial examples (sketched below).
Experiments conducted on the Kodak dataset using various LIC models demonstrate the effectiveness of this approach.
arXiv Detail & Related papers (2023-06-01T20:21:05Z)
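A hedged sketch of the distortion attack as described: ascend a Frobenius-norm loss between the clean image and the codec's reconstruction of the perturbed input, under a small L-inf budget. `codec` is a hypothetical placeholder for a differentiable LIC model.

```python
# Sketch of a Frobenius-norm distortion attack on learned image compression.
# `codec` is a hypothetical differentiable LIC model mapping an image batch
# to its reconstruction; budgets and step counts are illustrative.
import torch

def distortion_attack(codec, x, eps=2 / 255, alpha=0.5 / 255, steps=50):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        recon = codec((x + delta).clamp(0, 1))
        # Frobenius norm of the residual between source and reconstruction,
        # summed over the batch and channel dimensions.
        loss = torch.linalg.matrix_norm(recon - x, ord="fro").sum()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient *ascent*
            delta.clamp_(-eps, eps)             # keep the change imperceptible
            delta.grad.zero_()
    return (x + delta).detach().clamp(0, 1)
```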
- Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction (see the latent-space sketch below).
arXiv Detail & Related papers (2023-05-18T02:57:43Z)
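A speculative sketch of the manifold idea: optimize in the latent space of a generative model rather than in pixel space, so the output stays a natural-looking image instead of a noise-bounded one. `encoder`, `decoder`, and `classifier` are hypothetical placeholders; the paper's manifold construction and adversarial direction differ in detail.

```python
# Speculative sketch of an unrestricted, manifold-based attack; all model
# objects are hypothetical stand-ins, not the paper's components.
import torch
import torch.nn.functional as F

def latent_attack(encoder, decoder, classifier, x, y, lr=0.05, steps=30):
    z = encoder(x).detach().requires_grad_(True)  # project onto the manifold
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_gen = decoder(z).clamp(0, 1)
        # Untargeted objective: push the decoded image off its true class.
        loss = -F.cross_entropy(classifier(x_gen), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z).detach().clamp(0, 1)
```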
- CARBEN: Composite Adversarial Robustness Benchmark [70.05004034081377]
This paper demonstrates how composite adversarial attacks (CAA) affect the resulting image.
It provides real-time inference for different models, helping users configure the parameters of each attack level.
A leaderboard to benchmark adversarial robustness against CAA is also introduced.
arXiv Detail & Related papers (2022-07-16T01:08:44Z)
- Towards Robust Neural Image Compression: Adversarial Attack and Model Finetuning [30.36695754075178]
Deep neural network-based image compression has been extensively studied.
We propose to examine the robustness of prevailing learned image compression models by injecting negligible adversarial perturbation into the original source image.
A variety of defense strategies, including geometric self-ensemble based pre-processing and adversarial training, are investigated to improve the model's robustness against the attack (see the self-ensemble sketch below).
arXiv Detail & Related papers (2021-12-16T08:28:26Z)
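Geometric self-ensemble, one of the investigated defenses, can be sketched as follows: run the codec on the eight flip/rotation variants of the input and average the inverse-transformed reconstructions, so a perturbation tuned to one orientation is diluted. `codec` is a hypothetical learned compression model over NCHW batches.

```python
# Sketch of geometric self-ensemble pre-processing around a hypothetical
# learned codec; the paper's exact ensemble configuration may differ.
import torch

def self_ensemble(codec, x):
    outs = []
    for k in range(4):                      # 0/90/180/270 degree rotations
        for flip in (False, True):
            t = torch.rot90(x, k, dims=(2, 3))
            if flip:
                t = torch.flip(t, dims=(3,))
            y = codec(t)
            if flip:                        # undo transforms in reverse order
                y = torch.flip(y, dims=(3,))
            y = torch.rot90(y, -k, dims=(2, 3))
            outs.append(y)
    return torch.stack(outs).mean(dim=0)
```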
- Adversarial Purification through Representation Disentanglement [21.862799765511976]
Deep learning models are vulnerable to adversarial examples and make incomprehensible mistakes.
Current defense methods, especially purification, tend to "remove noise" by learning and recovering the natural images.
In this work, we propose a novel adversarial purification scheme by presenting disentanglement of natural images and adversarial perturbations as a preprocessing defense.
arXiv Detail & Related papers (2021-10-15T01:45:31Z)
- Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks [104.8737334237993]
We present comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks.
For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints.
We show that, unlike in image classification tasks, the performance degradation on image-to-image tasks can vary widely depending on various factors.
arXiv Detail & Related papers (2021-04-30T14:20:33Z)
- On Success and Simplicity: A Second Look at Transferable Targeted Attacks [6.276791657895803]
We show that transferable targeted attacks converge slowly to optimal transferability and improve considerably when given more iterations.
An attack that simply maximizes the target logit performs surprisingly well, surpassing more complex losses and even achieving performance comparable to the state of the art (see the sketch below).
arXiv Detail & Related papers (2020-12-21T09:41:29Z)
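The simple logit-maximization recipe lends itself to a short sketch: identical to targeted PGD except that the loss is the raw target logit, with a generous iteration budget. `model` stands in for a white-box surrogate used to craft transferable examples.

```python
# Sketch of a logit-maximizing targeted attack; `model` is a hypothetical
# surrogate network, and the budgets are illustrative defaults.
import torch

def logit_attack(model, x, target, eps=16 / 255, alpha=2 / 255, steps=300):
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        # Maximize the target class's logit directly (no softmax, no CE).
        loss = logits.gather(1, target.unsqueeze(1)).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # L-inf projection
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

The large default step count reflects the paper's finding that targeted transferability keeps improving when given more iterations.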
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.