Exploring adversarial robustness of JPEG AI: methodology, comparison and new methods
- URL: http://arxiv.org/abs/2411.11795v1
- Date: Mon, 18 Nov 2024 18:08:52 GMT
- Title: Exploring adversarial robustness of JPEG AI: methodology, comparison and new methods
- Authors: Egor Kovalev, Georgii Bychkov, Khaled Abud, Aleksandr Gushchin, Anna Chistyakova, Sergey Lavrushkin, Dmitriy Vatolin, Anastasia Antsiferova
- Abstract summary: JPEG AI is the first standard for end-to-end neural image compression methods.
This paper presents the first large-scale evaluation of JPEG AI's robustness.
- Score: 37.328379777238304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial robustness of neural networks is an increasingly important area of research, combining studies on computer vision models, large language models (LLMs), and others. With the release of JPEG AI - the first standard for end-to-end neural image compression (NIC) methods - the question of its robustness has become critically significant. JPEG AI is among the first international, real-world applications of neural-network-based models to be embedded in consumer devices. However, research on NIC robustness has been limited to open-source codecs and a narrow range of attacks. This paper proposes a new methodology for measuring NIC robustness to adversarial attacks. We present the first large-scale evaluation of JPEG AI's robustness, comparing it with other NIC models. Our evaluation results and code are publicly available online (link is hidden for a blind review).
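As a toy illustration of the kind of robustness measurement the abstract describes (not the authors' actual methodology), the sketch below perturbs an input with the gradient sign of a differentiable stand-in "codec", pushing the reconstruction error up; the codec, names, and dimensions are all hypothetical:

```python
import numpy as np

def toy_codec(x, w):
    """Stand-in for a differentiable lossy codec: project onto a
    low-dimensional basis and reconstruct (a linear autoencoder)."""
    code = w @ x          # "encode" to 4 coefficients
    return w.T @ code     # "decode" back to 16 samples

def recon_mse(x, w):
    """Reconstruction error of the toy codec on input x."""
    r = toy_codec(x, w)
    return float(np.mean((r - x) ** 2))

def fgsm_on_codec(x, w, eps=0.03):
    """One FGSM-style step: move x in the direction that increases
    reconstruction error. For this linear codec the gradient of
    ||W^T W x - x||^2 w.r.t. x has the closed form below."""
    a = w.T @ w
    resid = a @ x - x
    grad = (a - np.eye(len(x))).T @ resid
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=16)      # a tiny "image"
w = rng.normal(size=(4, 16)) / 4.0      # random low-rank codec
x_adv = fgsm_on_codec(x, w)             # adversarial input with larger error
```

For a real NIC model the gradient would come from automatic differentiation through the encoder/decoder rather than a closed form, but the attack structure is the same.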
Related papers
- NIC-RobustBench: A Comprehensive Open-Source Toolkit for Neural Image Compression and Robustness Analysis [39.50289486200944]
JPEG AI is the first standard for end-to-end neural image compression (NIC) methods.
Previous research has been limited to a narrow range of codecs and attacks.
We present NIC-RobustBench, the first open-source framework for evaluating NIC robustness and the efficiency of adversarial defenses.
arXiv Detail & Related papers (2025-06-23T19:11:15Z)
- CO-SPY: Combining Semantic and Pixel Features to Detect Synthetic Images by AI [58.35348718345307]
Current efforts to distinguish between real and AI-generated images may lack generalization.
We propose a novel framework, Co-Spy, that first enhances existing semantic features.
We also create Co-Spy-Bench, a comprehensive dataset comprising 5 real image datasets and 22 state-of-the-art generative models.
arXiv Detail & Related papers (2025-03-24T01:59:29Z)
- Is JPEG AI going to change image forensics? [50.92778618091496]
We investigate the counter-forensic effects of the new JPEG AI standard based on neural image compression.
Our results demonstrate a reduction in the performance of leading forensic detectors when analyzing content processed through JPEG AI.
arXiv Detail & Related papers (2024-12-04T12:07:20Z)
- Exploring Cross-model Neuronal Correlations in the Context of Predicting Model Performance and Generalizability [2.6708879445664584]
This paper introduces a novel approach for assessing a newly trained model's performance based on another known model.
The proposed method evaluates correlations by determining if, for each neuron in one network, there exists a neuron in the other network that produces similar output.
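The matching criterion described above can be sketched in a few lines; the activation matrices here are synthetic stand-ins, not the paper's implementation. For every neuron (column) in network A we find the best-correlated neuron in network B:

```python
import numpy as np

def best_matches(acts_a, acts_b):
    """For each neuron (column) of acts_a, return the index and Pearson
    correlation of the most similar neuron in acts_b.
    acts_*: (n_samples, n_neurons) activations on a shared input set."""
    za = (acts_a - acts_a.mean(0)) / acts_a.std(0)
    zb = (acts_b - acts_b.mean(0)) / acts_b.std(0)
    corr = za.T @ zb / len(acts_a)       # (n_a, n_b) correlation matrix
    idx = np.abs(corr).argmax(axis=1)    # best partner per neuron in A
    return idx, corr[np.arange(len(idx)), idx]

rng = np.random.default_rng(1)
shared = rng.normal(size=(200, 8))                  # common "stimuli"
acts_a = shared + 0.1 * rng.normal(size=(200, 8))   # two noisy views of the
acts_b = shared + 0.1 * rng.normal(size=(200, 8))   # same underlying signal
idx, corr = best_matches(acts_a, acts_b)            # idx[i] == i here
```

Because the two synthetic networks see the same underlying signal, each neuron's best partner is its own counterpart, with correlation near 1.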
arXiv Detail & Related papers (2024-08-15T22:57:39Z)
- NNsight and NDIF: Democratizing Access to Open-Weight Foundation Model Internals [58.83169560132308]
We introduce NNsight and NDIF, technologies that work in tandem to enable scientific study of the representations and computations learned by very large neural networks.
arXiv Detail & Related papers (2024-07-18T17:59:01Z)
- Unsupervised Denoising of Optical Coherence Tomography Images with Dual_Merged CycleWGAN [3.3909577600092122]
We propose a new cycle-consistent generative adversarial network, Dual-Merged Cycle-WGAN, for retinal OCT image denoising.
Our model consists of two Cycle-GAN networks with an improved generator, discriminator, and Wasserstein loss to achieve good training stability and better performance.
arXiv Detail & Related papers (2022-05-02T07:38:19Z)
- An Empirical Analysis of Recurrent Learning Algorithms In Neural Lossy Image Compression Systems [73.48927855855219]
Recent advances in deep learning have resulted in image compression algorithms that outperform JPEG and JPEG 2000 on the standard Kodak benchmark.
In this paper, we perform the first large-scale comparison of recent state-of-the-art hybrid neural compression algorithms.
arXiv Detail & Related papers (2022-01-27T19:47:51Z)
- Towards Robust Neural Image Compression: Adversarial Attack and Model Finetuning [30.36695754075178]
Deep neural network-based image compression has been extensively studied.
We propose to examine the robustness of prevailing learned image compression models by injecting negligible adversarial perturbation into the original source image.
A variety of defense strategies including geometric self-ensemble based pre-processing, and adversarial training, are investigated against the adversarial attack to improve the model's robustness.
arXiv Detail & Related papers (2021-12-16T08:28:26Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
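As a generic sketch of the patch pipeline described above (not the paper's code): tile the slide into patches, classify each, and combine the patch-level predictions. The classifier here is a hypothetical stub, and the naive mean aggregation merely stands in for the paper's Wide & Deep aggregation model.

```python
import numpy as np

def extract_patches(image, size):
    """Tile a 2-D image into non-overlapping size x size patches,
    dropping any ragged border."""
    h, w = image.shape
    patches = []
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            patches.append(image[r:r + size, c:c + size])
    return np.stack(patches)

def classify_patch(patch):
    """Hypothetical stand-in for a CNN patch classifier:
    'positive' if mean intensity exceeds a threshold."""
    return float(patch.mean() > 0.5)

image = np.zeros((64, 64))
image[:32, :] = 1.0                                # toy half-bright "slide"
patches = extract_patches(image, 16)               # 16 patches of 16x16
patch_preds = [classify_patch(p) for p in patches]
slide_score = float(np.mean(patch_preds))          # naive slide-level score
```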
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- On the Adversarial Robustness of Quantized Neural Networks [2.0625936401496237]
It is unclear how model compression techniques may affect the robustness of AI algorithms against adversarial attacks.
This paper explores the effect of quantization, one of the most common compression techniques, on the adversarial robustness of neural networks.
arXiv Detail & Related papers (2021-05-01T11:46:35Z)
- An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method by improving the dice score by 1% for the pancreas and 2% for spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z)
- Perceptually Optimizing Deep Image Compression [53.705543593594285]
Mean squared error (MSE) and $\ell_p$ norms have largely dominated the measurement of loss in neural networks.
We propose a different proxy approach to optimize image analysis networks against quantitative perceptual models.
arXiv Detail & Related papers (2020-07-03T14:33:28Z)
- A cognitive based Intrusion detection system [0.0]
Intrusion detection is one of the important mechanisms that provide computer networks security.
This paper proposes a new approach based on a deep neural network and a support vector machine classifier.
The proposed model predicts attacks with better accuracy than similar intrusion detection methods.
arXiv Detail & Related papers (2020-05-19T13:30:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.