T-MLA: A Targeted Multiscale Log-Exponential Attack Framework for Neural Image Compression
- URL: http://arxiv.org/abs/2511.01079v1
- Date: Sun, 02 Nov 2025 21:06:33 GMT
- Title: T-MLA: A Targeted Multiscale Log-Exponential Attack Framework for Neural Image Compression
- Authors: Nikolay I. Kalmykov, Razan Dibo, Kaiyu Shen, Xu Zhonghan, Anh-Huy Phan, Yipeng Liu, Ivan Oseledets
- Abstract summary: We expose a more advanced class of vulnerabilities by introducing T-MLA, the first targeted multiscale log-exponential attack framework.
Our approach crafts adversarial perturbations in the wavelet domain by directly targeting the quality of the attacked and reconstructed images.
Our findings reveal a critical security flaw at the core of generative and content delivery pipelines.
- Score: 6.189705043887372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural image compression (NIC) has become the state-of-the-art for rate-distortion performance, yet its security vulnerabilities remain significantly less understood than those of classifiers. Existing adversarial attacks on NICs are often naive adaptations of pixel-space methods, overlooking the unique, structured nature of the compression pipeline. In this work, we expose a more advanced class of vulnerabilities by introducing T-MLA, the first targeted multiscale log-exponential attack framework. Our approach crafts adversarial perturbations in the wavelet domain by directly targeting the quality of the attacked and reconstructed images. This allows for a principled, offline attack where perturbations are strategically confined to specific wavelet subbands, maximizing distortion while ensuring perceptual stealth. Extensive evaluation across multiple state-of-the-art NIC architectures on standard image compression benchmarks reveals a large drop in reconstruction quality while the perturbations remain visually imperceptible. Our findings reveal a critical security flaw at the core of generative and content delivery pipelines.
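No code accompanies the abstract, but the core mechanism described there (optimizing a perturbation confined to wavelet detail subbands so that the codec's reconstruction degrades while the input stays visually close to the original) can be sketched as follows. This is a minimal sketch, not the authors' implementation: the single-level Haar transform, the plain-MSE objective (standing in for the paper's log-exponential loss), the stealth weight, and all hyperparameters are assumptions, and `codec` stands for any differentiable NIC model (e.g. the `x_hat` output of a pretrained CompressAI model).

```python
# Minimal sketch of a wavelet-subband attack on a neural codec.
# NOTE: the Haar transform, the MSE objective (stand-in for the paper's
# log-exponential loss), and all hyperparameters are assumptions.
import torch
import torch.nn.functional as F

# Orthonormal 2x2 Haar analysis filters: LL, LH, HL, HH.
_HAAR = 0.5 * torch.tensor([
    [[1.,  1.], [ 1.,  1.]],   # LL: approximation
    [[1.,  1.], [-1., -1.]],   # LH: horizontal detail
    [[1., -1.], [ 1., -1.]],   # HL: vertical detail
    [[1., -1.], [-1.,  1.]],   # HH: diagonal detail
]).unsqueeze(1)                # shape (4, 1, 2, 2)

def dwt(x):   # (N, 1, H, W) -> (N, 4, H/2, W/2); assumes even H, W
    return F.conv2d(x, _HAAR, stride=2)

def idwt(c):  # exact inverse: the Haar filters are orthonormal
    return F.conv_transpose2d(c, _HAAR, stride=2)

def wavelet_attack(codec, x, steps=50, lr=1e-2, eps=0.05, stealth=10.0):
    """Perturb only the detail subbands so that codec(x_adv) degrades
    while x_adv stays close to x in the pixel domain."""
    n, ch, h, w = x.shape
    coeffs = dwt(x.reshape(n * ch, 1, h, w))
    with torch.no_grad():
        ref = codec(x)                          # clean reconstruction
    delta = torch.zeros_like(coeffs[:, 1:]).requires_grad_(True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        c_adv = torch.cat([coeffs[:, :1], coeffs[:, 1:] + delta], dim=1)
        x_adv = idwt(c_adv).reshape(n, ch, h, w).clamp(0, 1)
        # Maximise reconstruction error; penalise visible deviation.
        loss = -F.mse_loss(codec(x_adv), ref) + stealth * F.mse_loss(x_adv, x)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)             # bound coefficient changes
    return x_adv.detach()
```

Confining `delta` to the LH/HL/HH subbands keeps the perturbation in high-frequency content where it is hardest to see; the LL (approximation) band is left untouched.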
Related papers
- Less Is More -- Until It Breaks: Security Pitfalls of Vision Token Compression in Large Vision-Language Models [69.84867664371826]
We show that visual token compression substantially degrades the robustness of Large Vision-Language Models (LVLMs).
Small and imperceptible perturbations can significantly alter token importance ranking, leading the compression mechanism to mistakenly discard task-critical information.
We propose a Compression-Aware Attack to systematically study and exploit this vulnerability.
arXiv Detail & Related papers (2026-01-17T13:02:41Z)
- Trans-defense: Transformer-based Denoiser for Adversarial Defense with Spatial-Frequency Domain Representation [11.290034765506816]
Deep neural networks (DNNs) are vulnerable to adversarial attacks, restricting their use in security-critical systems.
We propose a novel denoising strategy that integrates both spatial and frequency domain representations to defend against adversarial attacks on images (a rough sketch follows this entry).
Training proceeds in two phases: first the denoising network is trained, then the deep classifier.
arXiv Detail & Related papers (2025-10-31T07:29:50Z)
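The entry above describes the defense only at a high level. As a rough illustration of a denoiser that processes both spatial and frequency representations, here is a sketch; the paper's transformer blocks are replaced by plain convolutions, and the architecture, widths, and fusion scheme are all assumptions made here.

```python
# Rough sketch of a spatial-plus-frequency denoiser (assumed architecture;
# the paper uses transformer blocks, simplified here to convolutions).
import torch
import torch.nn as nn

class SpatialFreqDenoiser(nn.Module):
    def __init__(self, ch=3, width=16):
        super().__init__()
        self.spatial = nn.Sequential(            # spatial-domain branch
            nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, ch, 3, padding=1))
        self.freq = nn.Sequential(               # frequency-domain branch,
            nn.Conv2d(2 * ch, width, 1), nn.ReLU(),  # on real/imag stacks
            nn.Conv2d(width, 2 * ch, 1))
        self.fuse = nn.Conv2d(2 * ch, ch, 1)     # merge the two branches

    def forward(self, x):
        s = self.spatial(x)
        spec = torch.fft.rfft2(x, norm="ortho")  # to the frequency domain
        z = self.freq(torch.cat([spec.real, spec.imag], dim=1))
        re, im = z.chunk(2, dim=1)
        f = torch.fft.irfft2(torch.complex(re, im),
                             s=x.shape[-2:], norm="ortho")
        return self.fuse(torch.cat([s, f], dim=1))

# Two-phase training as described: phase 1 fits the denoiser, e.g.
#   loss = nn.functional.mse_loss(denoiser(x_adv), x_clean)
# phase 2 freezes it and trains the classifier on denoised inputs.
```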
- Deeply-Conditioned Image Compression via Self-Generated Priors [75.29511865838812]
We introduce a framework predicated on functional decomposition, which we term Deeply-Conditioned Image Compression via self-generated priors (DCIC-sgp).
Our framework achieves significant BD-rate reductions of 14.4%, 15.7%, and 15.1% against the VVC test model VTM-12.1 on the Kodak, CLIC, and Tecnick datasets.
arXiv Detail & Related papers (2025-10-28T14:04:19Z)
- Active Adversarial Noise Suppression for Image Forgery Localization [56.98050814363447]
We introduce an Adversarial Noise Suppression Module (ANSM) that generates a defensive perturbation to suppress the effect of adversarial noise.
To the best of our knowledge, this is the first report of adversarial defense in image forgery localization tasks.
arXiv Detail & Related papers (2025-06-15T14:53:27Z)
- Bitstream Collisions in Neural Image Compression via Adversarial Perturbations [2.0960189135529212]
This study reveals an unexpected vulnerability in NIC: bitstream collisions, where distinct images compress to the same bitstream.
The collision vulnerability poses a threat to the practical usability of NIC, particularly in security-critical applications.
A simple yet effective mitigation method is presented.
arXiv Detail & Related papers (2025-03-25T16:29:17Z)
- Semantic Ensemble Loss and Latent Refinement for High-Fidelity Neural Image Compression [58.618625678054826]
This study presents an enhanced neural compression method designed for optimal visual fidelity.
We train the model with a semantic ensemble loss that integrates Charbonnier loss, perceptual loss, style loss, and a non-binary adversarial loss (a rough sketch of such a composite objective follows this entry).
Our empirical findings demonstrate that this approach significantly improves the statistical fidelity of neural image compression.
arXiv Detail & Related papers (2024-01-25T08:11:27Z)
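For illustration, a composite objective of this kind might be assembled as below. This is a sketch under assumptions: the VGG-16 feature depth, the loss weights, and the omission of the paper's non-binary adversarial term (which would require a discriminator) are choices made here, not taken from the paper.

```python
# Sketch of a semantic ensemble loss (weights and VGG layer are assumed;
# the paper's non-binary adversarial term is omitted for brevity).
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

_vgg = vgg16(weights="DEFAULT").features[:16].eval()  # frozen feature net
for p in _vgg.parameters():
    p.requires_grad_(False)

def charbonnier(x, y, eps=1e-3):
    # Smooth L1-like pixel penalty, robust to outliers.
    return torch.sqrt((x - y) ** 2 + eps ** 2).mean()

def gram(f):
    # Channel-by-channel feature correlations, used for the style term.
    n, c, h, w = f.shape
    f = f.reshape(n, c, h * w)
    return (f @ f.transpose(1, 2)) / (c * h * w)

def ensemble_loss(recon, target, w=(1.0, 0.1, 0.05)):
    fr, ft = _vgg(recon), _vgg(target)
    return (w[0] * charbonnier(recon, target)         # pixel fidelity
            + w[1] * F.mse_loss(fr, ft)               # perceptual term
            + w[2] * F.mse_loss(gram(fr), gram(ft)))  # style term
```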
- Transferable Learned Image Compression-Resistant Adversarial Perturbations [66.46470251521947]
Adversarial attacks can readily disrupt image classification systems, revealing the vulnerability of DNN-based recognition tasks.
We introduce a new pipeline that targets image classification models that utilize learned image compressors as pre-processing modules.
arXiv Detail & Related papers (2024-01-06T03:03:28Z)
- Reconstruction Distortion of Learned Image Compression with Imperceptible Perturbations [69.25683256447044]
We introduce an attack approach designed to effectively degrade the reconstruction quality of Learned Image Compression (LIC).
We generate adversarial examples by introducing a Frobenius norm-based loss function to maximize the discrepancy between original images and reconstructed adversarial examples (sketched after this entry).
Experiments conducted on the Kodak dataset using various LIC models demonstrate the attack's effectiveness.
arXiv Detail & Related papers (2023-06-01T20:21:05Z)
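A minimal sketch of this style of attack: projected gradient ascent on the Frobenius norm of the gap between the original image and the reconstruction of the perturbed input, under an L-infinity budget. The budget, step size, and iteration count are assumptions, and `codec` is any differentiable LIC model.

```python
# PGD-style sketch of a Frobenius-norm reconstruction attack
# (budget, step size, and iteration count are assumed values).
import torch

def frobenius_attack(codec, x, eps=2/255, alpha=0.5/255, steps=40):
    """Maximise the Frobenius norm of (original - reconstruction of the
    perturbed input), keeping the perturbation within an L-inf budget."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        recon = codec((x + delta).clamp(0, 1))
        # Frobenius norm over each channel's HxW matrix, averaged.
        loss = torch.linalg.matrix_norm(x - recon).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # gradient ascent step
            delta.clamp_(-eps, eps)             # project onto the budget
            delta.grad.zero_()
    return (x + delta).detach().clamp(0, 1)
```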
- Defending Adversarial Examples via DNN Bottleneck Reinforcement [20.08619981108837]
This paper presents a reinforcement scheme to alleviate the vulnerability of Deep Neural Networks (DNN) against adversarial attacks.
By reinforcing the network's information bottleneck while preserving class-relevant information, any redundant information, be it adversarial or not, should be removed from the latent representation.
In order to reinforce the information bottleneck, we introduce the multi-scale low-pass objective and multi-scale high-frequency communication for better frequency steering in the network.
arXiv Detail & Related papers (2020-08-12T11:02:01Z)
- TensorShield: Tensor-based Defense Against Adversarial Attacks on Images [7.080154188969453]
Recent studies have demonstrated that machine learning approaches like deep neural networks (DNNs) are easily fooled by adversarial attacks.
In this paper, we utilize tensor decomposition techniques as a preprocessing step to find a low-rank approximation of images, which discards high-frequency perturbations (a simplified sketch follows the list).
arXiv Detail & Related papers (2020-02-18T00:39:49Z)
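TensorShield proper uses tensor decompositions (e.g. Tucker); as a simplified stand-in that conveys the same idea, the sketch below truncates a per-channel SVD, discarding the small singular components that carry most of the high-frequency perturbation energy. The rank is an assumed hyperparameter.

```python
# Simplified stand-in for a tensor-decomposition defense: per-channel
# truncated SVD (the paper itself uses tensor decompositions).
import torch

def low_rank_defense(x, rank=30):
    # x: (N, C, H, W) image batch; SVD is batched over N and C.
    u, s, vh = torch.linalg.svd(x, full_matrices=False)
    s = s.clone()
    s[..., rank:] = 0                    # truncate the singular spectrum
    return (u * s.unsqueeze(-2)) @ vh    # low-rank reconstruction

# Usage (hypothetical): denoise before classification.
# x_defended = low_rank_defense(x_adv, rank=30)
```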