Deep Bayesian Image Set Classification: A Defence Approach against
Adversarial Attacks
- URL: http://arxiv.org/abs/2108.10217v1
- Date: Mon, 23 Aug 2021 14:52:44 GMT
- Title: Deep Bayesian Image Set Classification: A Defence Approach against
Adversarial Attacks
- Authors: Nima Mirnateghi, Syed Afaq Ali Shah, Mohammed Bennamoun
- Abstract summary: Deep neural networks (DNNs) are susceptible to being fooled with high confidence by an adversary.
In practice, the vulnerability of deep learning systems to carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications.
We propose robust deep Bayesian image set classification as a defence framework against a broad range of adversarial attacks.
- Score: 32.48820298978333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has become an integral part of various computer vision
systems in recent years due to its outstanding achievements in object
recognition, facial recognition, and scene understanding. However, deep neural
networks (DNNs) are susceptible to being fooled with high confidence by an
adversary. In practice, the vulnerability of deep learning systems to
carefully perturbed images, known as adversarial examples, poses a dire
security threat in physical-world applications. To address this threat, we
present what is, to our knowledge, the first image set based adversarial
defence approach. Image set classification has shown exceptional performance
for object and face recognition, owing to its intrinsic ability to handle
appearance variability. We propose robust deep Bayesian image set
classification as a defence framework against a broad range of adversarial
attacks. We extensively evaluate the proposed technique under several voting
strategies, and further analyse the effects of image size, perturbation
magnitude, and the ratio of perturbed images in each image set. We also
compare our technique against recent state-of-the-art defence methods and
evaluate it on a single-shot recognition task. The empirical results
demonstrate superior performance on CIFAR-10, MNIST, ETH-80, and Tiny
ImageNet.
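The abstract ships no code, so the following is only a minimal sketch of the idea under stated assumptions: Monte Carlo dropout as the Bayesian approximation and majority voting over per-image predictions in the set. The architecture, dropout rate, Monte Carlo sample count, and image size are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCDropoutCNN(nn.Module):
    """Small CNN with dropout kept active at test time (MC dropout),
    a common approximation to a deep Bayesian classifier (assumed here)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2), nn.Dropout(0.3),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2), nn.Dropout(0.3),
        )
        self.head = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def classify_image_set(model, image_set, mc_samples=20):
    """Classify a set of images of one object: average MC-dropout samples
    per image, then aggregate by majority vote over per-image predictions."""
    model.train()  # keep dropout active so each pass is a posterior sample
    with torch.no_grad():
        probs = torch.stack([
            F.softmax(model(image_set), dim=1) for _ in range(mc_samples)
        ]).mean(0)                       # (num_images, num_classes)
    votes = probs.argmax(dim=1)          # one vote per image in the set
    return torch.bincount(votes, minlength=probs.size(1)).argmax().item()

# Usage: a set of 8 RGB 32x32 views of the same object (random dummies).
model = MCDropoutCNN()
image_set = torch.randn(8, 3, 32, 32)
print(classify_image_set(model, image_set))
```

Other voting strategies slot in at the aggregation step, e.g. summing the averaged softmax probabilities across the set instead of counting hard per-image votes.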
Related papers
- Dual Adversarial Resilience for Collaborating Robust Underwater Image
Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks, enabling the network to discern and remove various types of attacks; a rough sketch of this dual-attack training idea follows this entry.
Experiments demonstrate that the proposed method produces visually appealing enhanced images and achieves on average 6.71% higher detection mAP than state-of-the-art methods.
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
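CARNet's actual architecture and losses are not reproduced in this summary; the sketch below is only a rough, hypothetical illustration of a synchronized dual-attack training step: one perturbation is driven by a visual (restoration) loss and one by a perception (detection-proxy) loss, and the enhancement network is trained to undo both. All modules, shapes, and loss choices are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins: a tiny enhancement net and a frozen perception head.
enhancer = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))
perception = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                           nn.Flatten(), nn.Linear(8 * 16 * 16, 10))
for p in perception.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(enhancer.parameters(), lr=1e-4)

def one_step_attack(x, loss_fn, eps=4 / 255):
    """Single-step attack: move x in the direction that increases loss_fn."""
    x_adv = x.clone().requires_grad_(True)
    loss_fn(x_adv).backward()
    return (x + eps * x_adv.grad.sign()).clamp(0, 1).detach()

def train_step(raw, clean, labels):
    # Visual-driven attack: degrade restoration quality.
    x_vis = one_step_attack(raw, lambda x: F.mse_loss(enhancer(x), clean))
    # Perception-driven attack: degrade the downstream perception head.
    x_per = one_step_attack(
        raw, lambda x: F.cross_entropy(perception(enhancer(x)), labels))
    # Train the enhancer to recover from both attack types at once.
    loss = (F.mse_loss(enhancer(x_vis), clean)
            + F.cross_entropy(perception(enhancer(x_per)), labels))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

raw = torch.rand(2, 3, 32, 32)    # dummy degraded underwater images
clean = torch.rand(2, 3, 32, 32)  # dummy clean references
labels = torch.randint(0, 10, (2,))
print(train_step(raw, clean, labels))
```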
- On Adversarial Robustness of Deep Image Deblurring [15.66170693813815]
This paper introduces adversarial attacks against deep learning-based image deblurring methods.
We demonstrate that imperceptible distortions can significantly degrade the performance of state-of-the-art deblurring networks; a generic sketch of such an attack follows this entry.
arXiv Detail & Related papers (2022-10-05T18:31:33Z)
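The paper's concrete attacks are not available in this summary; the sketch below shows the generic recipe under assumptions: a PGD-style perturbation with a small L-infinity budget that maximises the distortion of an image-to-image network's output relative to its output on the clean input. The `deblur` network here is a hypothetical stand-in for a pretrained deblurring model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for a pretrained deblurring network.
deblur = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 3, 3, padding=1))

def attack_image_to_image(model, x, eps=2 / 255, alpha=0.5 / 255, steps=10):
    """PGD-style attack: find an imperceptible delta (L-inf <= eps) that
    maximally changes the model's output relative to the clean output."""
    with torch.no_grad():
        ref = model(x)                   # output on the clean input
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.mse_loss(model((x + delta).clamp(0, 1)), ref)
        loss.backward()                  # ascend: maximise output distortion
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)      # keep the perturbation imperceptible
        delta.grad.zero_()
    return (x + delta.detach()).clamp(0, 1)

blurry = torch.rand(1, 3, 64, 64)        # dummy blurry input
adv = attack_image_to_image(deblur, blurry)
print(F.mse_loss(deblur(adv), deblur(blurry)).item())  # degradation proxy
```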
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- A Black-Box Attack on Optical Character Recognition Systems [0.0]
Adversarial machine learning is an emerging area showing the vulnerability of deep learning models.
In this paper, we propose a simple yet efficient attack method, Efficient Combinatorial Black-box Adversarial Attack, on binary image classifiers.
We validate the attack on two different data sets and three classification networks, demonstrating its effectiveness; a toy greedy variant of a query-only black-box attack on binary images is sketched after this entry.
arXiv Detail & Related papers (2022-08-30T14:36:27Z)
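The paper's Efficient Combinatorial Black-box Adversarial Attack is not spelled out in this summary; as a toy illustration of the setting, the greedy sketch below flips the binary pixels whose flips most reduce the target classifier's score for the true class, using only query access and no gradients. The query oracle, candidate sampling, and budget are all assumptions.

```python
import numpy as np

def query_score(image, label):
    """Black-box oracle stand-in: returns the model's score for `label`.
    In practice this would query the target OCR model; here it is a dummy
    that is deterministic in the image content."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2 ** 32))
    return rng.random()

def greedy_flip_attack(image, label, budget=20, candidates=64):
    """Greedy combinatorial attack on a binary image: at each step, commit
    the candidate pixel flip that lowers the true-class score the most."""
    adv = image.copy()
    rng = np.random.default_rng(0)
    for _ in range(budget):
        best_pos, best_score = None, query_score(adv, label)
        # Sample a subset of positions to keep the query count small.
        xs = rng.integers(0, image.shape[0], candidates)
        ys = rng.integers(0, image.shape[1], candidates)
        for i, j in zip(xs, ys):
            adv[i, j] ^= 1               # trial flip 0 <-> 1
            score = query_score(adv, label)
            if score < best_score:
                best_pos, best_score = (i, j), score
            adv[i, j] ^= 1               # undo the trial flip
        if best_pos is None:
            break                        # no candidate helped; stop early
        adv[best_pos] ^= 1               # commit the best flip
    return adv

binary_img = (np.random.default_rng(1).random((28, 28)) > 0.5).astype(np.uint8)
adv_img = greedy_flip_attack(binary_img, label=3)
print(int((adv_img != binary_img).sum()), "pixels flipped")
```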
- Minimum Noticeable Difference based Adversarial Privacy Preserving Image Generation [44.2692621807947]
We develop a framework to generate adversarial privacy-preserving images that have minimal perceptual difference from the clean ones but are able to attack deep learning models.
To the best of our knowledge, this is the first work exploring quality-preserving adversarial image generation based on the minimum noticeable difference (MND) concept for privacy preservation.
arXiv Detail & Related papers (2022-06-17T09:02:12Z)
- Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep
Image-to-Image Models against Adversarial Attacks [104.8737334237993]
We present comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks.
For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints.
We show that unlike in image classification tasks, the performance degradation on image-to-image tasks can largely differ depending on various factors.
arXiv Detail & Related papers (2021-04-30T14:20:33Z)
- FACESEC: A Fine-grained Robustness Evaluation Framework for Face
Recognition Systems [49.577302852655144]
FACESEC is a framework for fine-grained robustness evaluation of face recognition systems.
We study five face recognition systems in both closed-set and open-set settings.
We find that accurate knowledge of neural architecture is significantly more important than knowledge of the training data in black-box attacks.
arXiv Detail & Related papers (2021-04-08T23:00:25Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that perturbations and prediction confidence are correlated, which guides us to detect few-perturbation attacks from the perspective of prediction confidence.
We propose a method that goes beyond image space via a two-stream architecture, in which the image stream focuses on pixel artifacts and the gradient stream copes with confidence artifacts; a generic two-stream sketch follows this entry.
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
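The paper's detector is not reproduced here; the sketch below only illustrates the two-stream idea under loose assumptions: the image stream sees raw pixels, the gradient stream sees the input-gradient (saliency) map of a frozen classifier as a confidence-related signal, and a small head fuses both into a binary adversarial/clean decision.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Frozen target classifier (hypothetical stand-in).
classifier = nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                           nn.Flatten(), nn.Linear(8 * 16 * 16, 10))
for p in classifier.parameters():
    p.requires_grad_(False)

def input_gradient(x):
    """Saliency of the classifier's max logit w.r.t. the input pixels."""
    x = x.clone().requires_grad_(True)
    classifier(x).max(dim=1).values.sum().backward()
    return x.grad.detach().abs()

class TwoStreamDetector(nn.Module):
    """Image stream for pixel artifacts, gradient stream for confidence
    artifacts; fused features feed a binary adversarial/clean head."""
    def __init__(self):
        super().__init__()
        def stream():
            return nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1),
                                 nn.ReLU(), nn.AdaptiveAvgPool2d(4),
                                 nn.Flatten())
        self.image_stream, self.grad_stream = stream(), stream()
        self.head = nn.Linear(2 * 8 * 4 * 4, 2)  # clean vs. adversarial

    def forward(self, x):
        feats = torch.cat([self.image_stream(x),
                           self.grad_stream(input_gradient(x))], dim=1)
        return self.head(feats)

detector = TwoStreamDetector()
x = torch.rand(4, 3, 32, 32)
print(F.softmax(detector(x), dim=1))  # [P(clean), P(adversarial)] per image
```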
- Face Anti-Spoofing Via Disentangled Representation Learning [90.90512800361742]
Face anti-spoofing is crucial to the security of face recognition systems.
We propose a novel perspective of face anti-spoofing that disentangles the liveness features and content features from images.
arXiv Detail & Related papers (2020-08-19T03:54:23Z)
- Adversarial Attacks on Convolutional Neural Networks in Facial
Recognition Domain [2.4704085162861693]
Adversarial attacks that render Deep Neural Network (DNN) classifiers vulnerable in real life pose a serious threat to autonomous vehicles, malware filters, and biometric authentication systems.
We apply the Fast Gradient Sign Method (FGSM) to introduce perturbations to a facial image dataset and then test the output on a different classifier; a minimal FGSM sketch follows this entry.
We craft a variety of black-box attack algorithms on a facial image dataset assuming minimal adversarial knowledge.
arXiv Detail & Related papers (2020-01-30T00:25:05Z)
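As referenced in the entry above, a minimal FGSM sketch follows; the standard single-step formulation is shown, while the toy model, image size, and epsilon are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Toy stand-in for a facial image classifier (hypothetical, 40 identities).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 40))
faces = torch.rand(2, 3, 64, 64)   # dummy face images in [0, 1]
ids = torch.randint(0, 40, (2,))   # dummy identity labels
adv_faces = fgsm(model, faces, ids)
# A transferability test would feed adv_faces to a *different* classifier.
print((adv_faces - faces).abs().max().item())  # perturbation size <= eps
```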
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.