LeNo: Adversarial Robust Salient Object Detection Networks with
Learnable Noise
- URL: http://arxiv.org/abs/2210.15392v1
- Date: Thu, 27 Oct 2022 12:52:55 GMT
- Title: LeNo: Adversarial Robust Salient Object Detection Networks with
Learnable Noise
- Authors: He Tang and He Wang
- Abstract summary: This paper proposes a lightweight Learnable Noise (LeNo) to defend SOD models against adversarial attacks.
LeNo preserves the accuracy of SOD models on both adversarial and clean images, as well as their inference speed.
Inspired by the center prior of the human visual attention mechanism, we initialize the shallow noise with a cross-shaped Gaussian distribution for better defense against adversarial attacks.
- Score: 7.794351961083746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pixel-wise prediction with deep neural networks has become an effective
paradigm for salient object detection (SOD) and achieved remarkable performance.
However, very few SOD models are robust against adversarial attacks, which are
visually imperceptible to human observers. The previous work on robust
salient object detection against adversarial attacks (ROSA) shuffles the
pre-segmented superpixels and then refines the coarse saliency map with a
densely connected CRF. Different from ROSA, which relies on various pre- and
post-processing steps, this paper proposes a lightweight Learnable Noise (LeNo) to
defend SOD models against adversarial attacks. LeNo preserves the accuracy of SOD
models on both adversarial and clean images, as well as their inference speed. In
general, LeNo consists of a simple shallow noise and a noise estimation, which are
embedded in the encoder and decoder of an arbitrary SOD network respectively.
Inspired by the center prior of the human visual attention mechanism, we initialize
the shallow noise with a cross-shaped Gaussian distribution for better defense
against adversarial attacks. Instead of adding additional network components
for post-processing, the proposed noise estimation modifies only one channel of
the decoder. With deeply-supervised noise-decoupled training on
state-of-the-art RGB and RGB-D SOD networks, LeNo outperforms previous works
not only on adversarial images but also on clean images, contributing
stronger robustness for SOD.
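As a rough illustration, the cross-shaped Gaussian initialization of the shallow noise could look like the PyTorch sketch below. This is a minimal sketch under assumptions: the band thickness (band_ratio), standard deviation (sigma), feature-map size, and the exact point where the noise is added to the encoder are illustrative choices, not details taken from the paper.

```python
import torch


def cross_shaped_gaussian_noise(channels: int, height: int, width: int,
                                sigma: float = 0.1,
                                band_ratio: float = 0.25) -> torch.nn.Parameter:
    """Build a learnable shallow-noise tensor whose non-zero entries form a
    cross (plus sign) centered on the feature map, echoing the center prior
    of human visual attention. Entries inside the cross are drawn from a
    zero-mean Gaussian; everything else starts at zero.
    Note: sigma and band_ratio are assumed hyperparameters for illustration."""
    noise = torch.zeros(1, channels, height, width)
    h_band = max(1, int(height * band_ratio))  # thickness of the horizontal arm
    w_band = max(1, int(width * band_ratio))   # thickness of the vertical arm
    h0 = (height - h_band) // 2
    w0 = (width - w_band) // 2
    # Horizontal arm: a centered band of rows spanning the full width
    noise[:, :, h0:h0 + h_band, :] = sigma * torch.randn(1, channels, h_band, width)
    # Vertical arm: a centered band of columns spanning the full height
    noise[:, :, :, w0:w0 + w_band] = sigma * torch.randn(1, channels, height, w_band)
    # Register as a parameter so the noise is refined jointly with the network
    return torch.nn.Parameter(noise)


# Example: noise added to a shallow encoder feature map of a hypothetical SOD backbone
shallow_noise = cross_shaped_gaussian_noise(channels=64, height=88, width=88)
features = torch.randn(2, 64, 88, 88)   # stand-in for shallow encoder features
perturbed = features + shallow_noise    # broadcast over the batch dimension
```

Registering the tensor as an nn.Parameter keeps the defense lightweight in the spirit of the paper: the noise is learned during training rather than produced by extra pre- or post-processing modules.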
Related papers
- Enhanced Wavelet Scattering Network for image inpainting detection [0.0]
This paper proposes several innovative ideas for detecting inpainting forgeries based on low level noise analysis.
It combines Dual-Tree Complex Wavelet Transform (DT-CWT) for feature extraction with convolutional neural networks (CNN) for forged area detection and localization.
Our approach was benchmarked against state-of-the-art methods and demonstrated superior performance over all cited alternatives.
arXiv Detail & Related papers (2024-09-25T15:27:05Z) - Defending Spiking Neural Networks against Adversarial Attacks through Image Purification [20.492531851480784]
Spiking Neural Networks (SNNs) aim to bridge the gap between neuroscience and machine learning.
SNNs are vulnerable to adversarial attacks like convolutional neural networks.
We propose a biologically inspired methodology to enhance the robustness of SNNs.
arXiv Detail & Related papers (2024-04-26T00:57:06Z) - SIRST-5K: Exploring Massive Negatives Synthesis with Self-supervised
Learning for Robust Infrared Small Target Detection [53.19618419772467]
Single-frame infrared small target (SIRST) detection aims to recognize small targets against cluttered backgrounds.
With the development of Transformer, the scale of SIRST models is constantly increasing.
With a rich diversity of infrared small target data, our algorithm significantly improves the model performance and convergence speed.
arXiv Detail & Related papers (2024-03-08T16:14:54Z) - Evaluating Similitude and Robustness of Deep Image Denoising Models via
Adversarial Attack [60.40356882897116]
Deep neural networks (DNNs) have shown superior performance compared to traditional image denoising algorithms.
In this paper, we propose an adversarial attack method named denoising-PGD which can successfully attack all the current deep denoising models.
arXiv Detail & Related papers (2023-06-28T09:30:59Z) - Robust Real-World Image Super-Resolution against Adversarial Attacks [115.04009271192211]
Adversarial image samples with quasi-imperceptible noises could threaten deep learning SR models.
We propose a robust deep learning framework for real-world SR that randomly erases potential adversarial noises.
Our proposed method is more insensitive to adversarial attacks and presents more stable SR results than existing models and defenses.
arXiv Detail & Related papers (2022-07-31T13:26:33Z) - Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as the guided diffusion model for purification (GDMP).
On our comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations raised by adversarial attacks to a shallow range.
arXiv Detail & Related papers (2022-05-30T10:11:15Z) - Towards Adversarially Robust Deep Image Denoising [199.2458715635285]
This work systematically investigates the adversarial robustness of deep image denoisers (DIDs)
We propose a novel adversarial attack, namely Observation-based Zero-mean Attack (ObsAtk), to craft adversarial zero-mean perturbations on given noisy images.
To robustify DIDs, we propose hybrid adversarial training (HAT) that jointly trains DIDs with adversarial and non-adversarial noisy data.
arXiv Detail & Related papers (2022-01-12T10:23:14Z) - Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth
Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity due to insufficient subject identities and insignificant variance.
We propose the Dual Spoof Disentanglement Generation framework to tackle this challenge by "anti-spoofing via generation".
arXiv Detail & Related papers (2021-12-01T15:36:59Z) - New SAR target recognition based on YOLO and very deep multi-canonical
correlation analysis [0.1503974529275767]
This paper proposes a robust feature extraction method for SAR image target classification by adaptively fusing effective features from different CNN layers.
Experiments on the MSTAR dataset demonstrate that the proposed method outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-10-28T18:10:26Z) - Object Detection based on OcSaFPN in Aerial Images with Noise [9.587619619262716]
A novel octave convolution-based semantic attention feature pyramid network (OcSaFPN) is proposed to get higher accuracy in object detection with noise.
The proposed algorithm tested on three datasets achieves a state-of-the-art detection performance with Gaussian noise or multiplicative noise.
arXiv Detail & Related papers (2020-12-18T01:28:51Z) - Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color
Space [43.49959098842923]
In a white-box attack, adversarial perturbations are generally learned for deep models that operate on RGB images.
In this paper, we show that the adversarial perturbations prevail in the Y-channel of the YCbCr space.
Based on our finding, we propose a defense against adversarial images.
arXiv Detail & Related papers (2020-02-25T02:41:42Z)