BadRefSR: Backdoor Attacks Against Reference-based Image Super Resolution
- URL: http://arxiv.org/abs/2502.20943v1
- Date: Fri, 28 Feb 2025 10:53:39 GMT
- Title: BadRefSR: Backdoor Attacks Against Reference-based Image Super Resolution
- Authors: Xue Yang, Tao Chen, Lei Guo, Wenbo Jiang, Ji Guo, Yongming Li, Jiaming He,
- Abstract summary: RefSR leverages an additional reference image to help recover high-frequency details. BadRefSR embeds backdoors in the RefSR model by adding triggers to the reference images and training with a mixed loss function. Our study aims to alert researchers to the potential backdoor risks in RefSR.
- Score: 14.605562676764636
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reference-based image super-resolution (RefSR) represents a promising advancement in super-resolution (SR). In contrast to single-image super-resolution (SISR), RefSR leverages an additional reference image to help recover high-frequency details, yet its vulnerability to backdoor attacks has not been explored. To fill this research gap, we propose a novel attack framework called BadRefSR, which embeds backdoors in the RefSR model by adding triggers to the reference images and training with a mixed loss function. Extensive experiments across various backdoor attack settings demonstrate the effectiveness of BadRefSR. The compromised RefSR network performs normally on clean input images, while outputting attacker-specified target images on triggered input images. Our study aims to alert researchers to the potential backdoor risks in RefSR. Codes are available at https://github.com/xuefusiji/BadRefSR.
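Since the abstract only names the attack's ingredients (a trigger applied to the reference image and a mixed training loss), a minimal PyTorch sketch of how such an objective could be wired is given below; the trigger stamping, the L1 terms, and the weight `lam` are illustrative assumptions rather than the paper's exact formulation.

```python
import torch.nn.functional as F

def add_trigger(ref, trigger):
    """Stamp a small trigger patch into the bottom-right corner of the reference
    image. Pattern, size and placement are illustrative assumptions."""
    ref = ref.clone()
    th, tw = trigger.shape[-2:]
    ref[..., -th:, -tw:] = trigger
    return ref

def mixed_loss(model, lr, ref, hr, trigger, target, lam=0.1):
    """One plausible form of the mixed objective: keep normal SR behaviour on
    clean references while mapping triggered references to the attacker's target."""
    clean_term = F.l1_loss(model(lr, ref), hr)                              # utility on clean inputs
    backdoor_term = F.l1_loss(model(lr, add_trigger(ref, trigger)), target) # attack objective
    return clean_term + lam * backdoor_term
```

Trained this way, the compromised model behaves normally unless the trigger appears in the reference input, matching the behaviour described in the abstract.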
Related papers
- PromptRR: Diffusion Models as Prompt Generators for Single Image Reflection Removal [138.38229287266915]
Existing single image reflection removal (SIRR) methods tend to miss key low-frequency (LF) and high-frequency (HF) differences in images.
This paper proposes a novel prompt-guided reflection removal framework that uses frequency information as new visual prompts for better reflection removal performance.
arXiv Detail & Related papers (2024-02-04T07:11:10Z)
- RBSR: Efficient and Flexible Recurrent Network for Burst Super-Resolution [57.98314517861539]
Burst super-resolution (BurstSR) aims at reconstructing a high-resolution (HR) image from a sequence of low-resolution (LR) and noisy images.
In this paper, we suggest fusing cues frame-by-frame with an efficient and flexible recurrent network.
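The frame-by-frame recurrent fusion idea can be sketched as follows; this is a toy module, not RBSR itself, which additionally handles alignment, implicit frame weighting, and the upsampling head.

```python
import torch
import torch.nn as nn

class RecurrentBurstFusion(nn.Module):
    """Toy frame-by-frame burst fusion with a recurrent hidden state."""
    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.feat = feat
        self.encode = nn.Conv2d(in_ch, feat, 3, padding=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1),
        )

    def forward(self, burst):                      # burst: (B, T, C, H, W)
        B, T, _, H, W = burst.shape
        state = burst.new_zeros(B, self.feat, H, W)
        for t in range(T):                         # fuse cues one frame at a time
            state = self.fuse(torch.cat([state, self.encode(burst[:, t])], dim=1))
        return state                               # fused features for an SR decoder
```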
arXiv Detail & Related papers (2023-06-30T12:14:13Z)
- A Feature Reuse Framework with Texture-adaptive Aggregation for Reference-based Super-Resolution [29.57364804554312]
Reference-based super-resolution (RefSR) has gained considerable success in the field of super-resolution.
We propose a feature reuse framework that guides the step-by-step texture reconstruction process.
We introduce a single image feature embedding module and a texture-adaptive aggregation module.
arXiv Detail & Related papers (2023-06-02T12:49:22Z)
- RRSR: Reciprocal Reference-based Image Super-Resolution with Progressive Feature Alignment and Selection [66.08293086254851]
We propose a reciprocal learning framework to reinforce the learning of a RefSR network.
The newly proposed module aligns reference-input images at multi-scale feature spaces and performs reference-aware feature selection.
We empirically show that multiple recent state-of-the-art RefSR models can be consistently improved with our reciprocal learning paradigm.
arXiv Detail & Related papers (2022-11-08T12:39:35Z)
- RZSR: Reference-based Zero-Shot Super-Resolution with Depth Guided Self-Exemplars [20.771851470271777]
Single image super-resolution (SISR) has demonstrated outstanding performance in generating high-resolution (HR) images from low-resolution (LR) images.
We propose an integrated solution, called reference-based zero-shot SR (RZSR).
arXiv Detail & Related papers (2022-08-24T05:48:17Z)
- Privacy Safe Representation Learning via Frequency Filtering Encoder [7.792424517008007]
Adversarial Representation Learning (ARL) is a common approach to train an encoder that runs on the client-side and obfuscates an image.
It is assumed that the obfuscated image can safely be transmitted and used for the task on the server without privacy concerns.
We introduce a novel ARL method enhanced through low-pass filtering, limiting the amount of information that can be encoded in the frequency domain.
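The low-pass filtering step can be sketched as below; this is a fixed-cutoff stand-in for illustration only, whereas the paper's encoder and its effective frequency budget are learned and tuned.

```python
import torch

def low_pass(img, keep_ratio=0.25):
    """Zero out all but the lowest `keep_ratio` fraction of spatial frequencies,
    capping the information that can be embedded in the output image."""
    B, C, H, W = img.shape
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    mask = torch.zeros(1, 1, H, W, device=img.device)
    h, w = max(1, int(H * keep_ratio / 2)), max(1, int(W * keep_ratio / 2))
    mask[..., H // 2 - h:H // 2 + h, W // 2 - w:W // 2 + w] = 1.0
    spec = spec * mask                       # keep only the central (low) frequencies
    return torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real
```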
arXiv Detail & Related papers (2022-08-04T06:16:13Z)
- Reference-based Image Super-Resolution with Deformable Attention Transformer [62.71769634254654]
RefSR aims to exploit auxiliary reference (Ref) images to super-resolve low-resolution (LR) images.
This paper proposes a deformable attention Transformer, namely DATSR, with multiple scales.
Experiments demonstrate that our DATSR achieves state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2022-07-25T07:07:00Z)
- MASA-SR: Matching Acceleration and Spatial Adaptation for Reference-Based Image Super-Resolution [74.24676600271253]
We propose the MASA network for RefSR, in which two novel modules address the computational cost of correspondence matching and the distribution gap between the LR and Ref images.
The proposed Match & Extraction Module significantly reduces the computational cost by a coarse-to-fine correspondence matching scheme.
The Spatial Adaptation Module learns the difference of distribution between the LR and Ref images, and remaps the distribution of Ref features to that of LR features in a spatially adaptive way.
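The distribution remapping can be illustrated with simple per-channel mean/std matching; note this global version is only a simplification, since the Spatial Adaptation Module learns the remapping and applies it spatially adaptively.

```python
def remap_ref_features(ref_feat, lr_feat, eps=1e-5):
    """Match the per-channel mean/std of reference features to those of LR features
    (a global simplification of spatially adaptive distribution remapping)."""
    ref_mean = ref_feat.mean(dim=(-2, -1), keepdim=True)
    ref_std = ref_feat.std(dim=(-2, -1), keepdim=True) + eps
    lr_mean = lr_feat.mean(dim=(-2, -1), keepdim=True)
    lr_std = lr_feat.std(dim=(-2, -1), keepdim=True) + eps
    return (ref_feat - ref_mean) / ref_std * lr_std + lr_mean
```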
arXiv Detail & Related papers (2021-06-04T07:15:32Z)
- Cross-Scale Internal Graph Neural Network for Image Super-Resolution [147.77050877373674]
Non-local self-similarity in natural images has been well studied as an effective prior in image restoration.
For single image super-resolution (SISR), most existing deep non-local methods only exploit similar patches within the same scale of the low-resolution (LR) input image.
Similar patches are instead searched across scales; this is achieved using a novel cross-scale internal graph neural network (IGNN).
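The cross-scale self-similarity idea can be sketched as a single-nearest-neighbour, pixel-space search; IGNN itself builds a k-NN graph over learned features and aggregates them, so this is only a simplified illustration (it also assumes H and W divisible by `scale`).

```python
import torch
import torch.nn.functional as F

def cross_scale_exemplars(lr, scale=2, patch=4, stride=4):
    """For each LR query patch, find its most similar patch in a downsampled copy
    of the same image; the matched region in the original LR image is `scale`x
    larger and serves as a higher-resolution exemplar."""
    B, C, H, W = lr.shape
    small = F.interpolate(lr, scale_factor=1.0 / scale, mode="bicubic", align_corners=False)
    Hs, Ws = small.shape[-2:]

    q = F.unfold(lr, kernel_size=patch, stride=stride)   # queries from LR    (B, C*p*p, Nq)
    k = F.unfold(small, kernel_size=patch, stride=1)     # keys from small LR (B, C*p*p, Nk)
    sim = torch.einsum("bdq,bdk->bqk", F.normalize(q, dim=1), F.normalize(k, dim=1))
    best = sim.argmax(dim=-1)                            # best key per query (B, Nq)

    cols = Ws - patch + 1                                # keys per row in the small image
    ky = torch.div(best, cols, rounding_mode="floor")
    kx = best % cols
    exemplars = []
    for b in range(B):
        exemplars.append(torch.stack([
            lr[b, :, scale * y:scale * (y + patch), scale * x:scale * (x + patch)]
            for y, x in zip(ky[b].tolist(), kx[b].tolist())
        ]))                                              # (Nq, C, scale*patch, scale*patch)
    return exemplars
```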
arXiv Detail & Related papers (2020-06-30T10:48:40Z)