Adversarial Purification and Fine-tuning for Robust UDC Image
Restoration
- URL: http://arxiv.org/abs/2402.13629v1
- Date: Wed, 21 Feb 2024 09:06:04 GMT
- Title: Adversarial Purification and Fine-tuning for Robust UDC Image
Restoration
- Authors: Zhenbo Song, Zhenyuan Zhang, Kaihao Zhang, Wenhan Luo, Zhaoxin Fan,
Jianfeng Lu
- Abstract summary: Under-Display Camera (UDC) technology faces unique image degradation challenges, exacerbated by its susceptibility to adversarial perturbations. This study focuses on enhancing UDC image restoration models, in particular their robustness against adversarial attacks.
- Score: 41.64534231708787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study delves into the enhancement of Under-Display Camera (UDC) image
restoration models, focusing on their robustness against adversarial attacks.
Despite its innovative approach to seamless display integration, UDC technology
faces unique image degradation challenges exacerbated by the susceptibility to
adversarial perturbations. Our research initially conducts an in-depth
robustness evaluation of deep-learning-based UDC image restoration models by
employing several white-box and black-box attacking methods. This evaluation is
pivotal in understanding the vulnerabilities of current UDC image restoration
techniques. Following the assessment, we introduce a defense framework
integrating adversarial purification with subsequent fine-tuning processes.
First, our approach employs diffusion-based adversarial purification,
effectively neutralizing adversarial perturbations. Then, we apply the
fine-tuning methodologies to refine the image restoration models further,
ensuring that the quality and fidelity of the restored images are maintained.
The effectiveness of our proposed approach is validated through extensive
experiments, showing marked improvements in resilience against typical
adversarial attacks.
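The two-stage defense described above (diffusion-based purification of the attacked input, followed by fine-tuning the restoration model) can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration: a flat 1-D "image", an alternating high-frequency perturbation, and a moving-average "denoiser" standing in for a trained diffusion model; the fine-tuning stage is omitted.

```python
import random

def purify(x, steps=10, sigma=0.1, seed=0):
    """Toy diffusion-style purification: diffuse the input with Gaussian
    noise, then reverse with repeated smoothing. A real purifier would run
    the reverse process of a trained score/diffusion model instead."""
    rng = random.Random(seed)
    # forward step: added noise drowns out the adversarial perturbation
    z = [v + sigma * rng.gauss(0, 1) for v in x]
    # reverse steps: 3-tap moving average as a crude stand-in for denoising
    for _ in range(steps):
        z = [(z[max(i - 1, 0)] + z[i] + z[min(i + 1, len(z) - 1)]) / 3
             for i in range(len(z))]
    return z

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

clean = [0.5] * 16                                          # flat "image" row
adv = [v + ((-1) ** i) * 0.3 for i, v in enumerate(clean)]  # high-freq attack
pure = purify(adv)                                          # much closer to clean
```

The point of the sketch is that a high-frequency adversarial perturbation is destroyed by the noise-then-denoise round trip, while low-frequency image content survives; the restoration model is then fine-tuned on such purified inputs.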
Related papers
- Analysis of Deep Image Prior and Exploiting Self-Guidance for Image
Reconstruction [13.277067849874756]
We study how DIP recovers information from undersampled imaging measurements.
We introduce a self-driven reconstruction process that concurrently optimizes both the network weights and the input.
Our method incorporates a novel denoiser regularization term which enables robust and stable joint estimation of both the network input and reconstructed image.
arXiv Detail & Related papers (2024-02-06T15:52:23Z) - Dual Adversarial Resilience for Collaborating Robust Underwater Image
Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method produces visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z) - Diffusion Models for Image Restoration and Enhancement -- A
Comprehensive Survey [96.99328714941657]
We present a comprehensive review of recent diffusion model-based methods on image restoration.
We classify and emphasize the innovative designs using diffusion models for both IR and blind/real-world IR.
We propose five potential and challenging directions for the future research of diffusion model-based IR.
arXiv Detail & Related papers (2023-08-18T08:40:38Z) - Reconstruction Distortion of Learned Image Compression with
Imperceptible Perturbations [69.25683256447044]
We introduce an attack approach designed to effectively degrade the reconstruction quality of Learned Image Compression (LIC).
We generate adversarial examples by introducing a Frobenius norm-based loss function to maximize the discrepancy between original images and reconstructed adversarial examples.
Experiments conducted on the Kodak dataset using various LIC models demonstrate the effectiveness of the attack.
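The attack idea in this summary, ascending on a Frobenius-norm loss between the original image and its reconstruction, can be sketched on a toy codec. The "codec" here (2:1 average pooling and upsampling) and the finite-difference gradient are illustrative assumptions; the actual attack backpropagates through a trained LIC model.

```python
import math

def lic(x):
    """Toy stand-in for a learned codec: 2:1 average pooling then
    upsampling, i.e. a smooth lossy reconstruction map."""
    out = []
    for i in range(0, len(x), 2):
        m = (x[i] + x[i + 1]) / 2
        out += [m, m]
    return out

def fro(a, b):
    """Frobenius norm of the difference between two signals."""
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def attack(x, alpha=0.01, steps=20, h=1e-4):
    """Sign ascent on the loss ||lic(x_adv) - x||_F, with a
    finite-difference gradient instead of autodiff."""
    adv = list(x)
    for _ in range(steps):
        loss0 = fro(lic(adv), x)
        grad = []
        for i in range(len(adv)):
            bumped = list(adv)
            bumped[i] += h
            grad.append((fro(lic(bumped), x) - loss0) / h)
        adv = [a + alpha * (1 if g > 0 else -1) for a, g in zip(adv, grad)]
    return adv

x = [0.1, 0.9, 0.4, 0.6, 0.2, 0.8, 0.3, 0.7]
x_adv = attack(x)  # reconstruction distortion grows versus the clean input
```

Maximizing the discrepancy between the original image and the reconstructed adversarial example is exactly what makes the perturbation imperceptible on the input yet damaging on the output.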
arXiv Detail & Related papers (2023-06-01T20:21:05Z) - DiracDiffusion: Denoising and Incremental Reconstruction with Assured
Data-Consistency [32.2120650813129]
Diffusion models have established a new state of the art in a multitude of computer vision tasks, including image restoration.
We propose a novel framework for inverse problem solving: we assume that the observation comes from a degradation process that gradually degrades and adds noise to the original clean image.
Our technique maintains consistency with the original measurement throughout the reverse process, and allows for great flexibility in trading off perceptual quality for improved distortion metrics and sampling speedup via early-stopping.
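The measurement-consistency idea in this summary can be sketched with a toy inverse problem. The forward operator (2:1 average pooling) and the smoothing "denoiser" are illustrative assumptions, not the paper's construction; the point is that a consistency projection is applied at every reverse step, so the estimate always agrees with the measurement and the loop can be stopped early.

```python
import random

def degrade(y):
    """Assumed forward operator A: 2:1 average pooling (illustrative)."""
    return [(y[i] + y[i + 1]) / 2 for i in range(0, len(y), 2)]

def data_consistency(x, y):
    """Project the estimate so that degrade(x) matches the measurement y."""
    out = list(x)
    for j, target in enumerate(y):
        shift = target - (out[2 * j] + out[2 * j + 1]) / 2
        out[2 * j] += shift
        out[2 * j + 1] += shift
    return out

def restore(y, steps=20, seed=0):
    rng = random.Random(seed)
    # start the reverse process from noise, as in diffusion sampling
    x = [rng.gauss(0.5, 1.0) for _ in range(2 * len(y))]
    for _ in range(steps):
        # crude "denoising" step: 3-tap smoothing stands in for a trained model
        x = [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, len(x) - 1)]) / 3
             for i in range(len(x))]
        x = data_consistency(x, y)  # consistency enforced at every step
    return x

y = degrade([0.1, 0.3, 0.5, 0.7, 0.9, 0.7, 0.5, 0.3])
x_hat = restore(y)  # degrade(x_hat) reproduces y exactly
```

Because every intermediate iterate is measurement-consistent, running more steps only trades distortion against perceptual quality, which is what makes early stopping safe.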
arXiv Detail & Related papers (2023-03-25T04:37:20Z) - Robust Single Image Dehazing Based on Consistent and Contrast-Assisted
Reconstruction [95.5735805072852]
We propose a novel density-variational learning framework to improve the robustness of the image dehazing model.
Specifically, the dehazing network is optimized under the consistency-regularized framework.
Our method significantly surpasses the state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-29T08:11:04Z) - Delving into Deep Image Prior for Adversarial Defense: A Novel
Reconstruction-based Defense Framework [34.75025893777763]
This work proposes a novel and effective reconstruction-based defense framework by delving into deep image prior.
The proposed method analyzes and explicitly incorporates the model decision process into our defense.
Experiments demonstrate that the proposed method outperforms existing state-of-the-art reconstruction-based methods both in defending white-box attacks and defense-aware attacks.
arXiv Detail & Related papers (2021-07-31T08:49:17Z) - Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training [106.34722726264522]
A range of adversarial defense techniques have been proposed to mitigate the interference of adversarial noise.
Pre-processing methods may suffer from the robustness degradation effect.
A potential cause of this negative effect is that adversarial training examples are static and independent of the pre-processing model.
We propose a method called Joint Adversarial Training based Pre-processing (JATP) defense.
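The joint-training idea in this summary, regenerating adversarial examples against the current pre-processing model instead of keeping them static, can be sketched with a deliberately tiny toy. The scalar "classifier", the gain-style preprocessor, and all the numbers are assumptions for illustration; they are not the JATP formulation.

```python
def preprocess(x, theta):
    # learnable pre-processing defense: a simple gain correction (toy stand-in)
    return theta * x

def score(x, theta):
    # fixed downstream classifier: decide +1 iff the score is positive
    return preprocess(x, theta) - 0.5

def attack(x, theta, eps=0.3):
    # white-box FGSM against the CURRENT pipeline: d(score)/dx = theta
    g = theta
    return x - eps * (1 if g > 0 else -1 if g < 0 else 0)

def jatp_train(x=1.0, theta=0.6, lr=0.1, rounds=50):
    for _ in range(rounds):
        x_adv = attack(x, theta)      # regenerated every round, never static
        if score(x_adv, theta) <= 0:  # pipeline fooled: update the defense
            theta += lr * x_adv       # gradient of score w.r.t. theta
    return theta

theta = jatp_train()  # initially fooled; robust after joint training
```

Because the attack is recomputed against the evolving preprocessor, the defense is trained on perturbations that track its own parameters, which is the mechanism the summary contrasts with static adversarial examples.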
arXiv Detail & Related papers (2021-06-10T01:45:32Z) - SAD: Saliency-based Defenses Against Adversarial Examples [0.9786690381850356]
Adversarial examples drift model predictions away from the original intent of the network.
In this work, we propose a visual saliency based approach to cleaning data affected by an adversarial attack.
arXiv Detail & Related papers (2020-03-10T15:55:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.