Minimum Noticeable Difference based Adversarial Privacy Preserving Image
Generation
- URL: http://arxiv.org/abs/2206.08638v1
- Date: Fri, 17 Jun 2022 09:02:12 GMT
- Title: Minimum Noticeable Difference based Adversarial Privacy Preserving Image
Generation
- Authors: Wen Sun, Jian Jin, and Weisi Lin
- Abstract summary: We develop a framework to generate adversarial privacy-preserving images that have minimum perceptual difference from the clean ones but are able to attack deep learning models.
To the best of our knowledge, this is the first work exploring quality-preserving adversarial image generation based on the MND concept for privacy preservation.
- Score: 44.2692621807947
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models are found to be vulnerable to adversarial
examples, as wrong predictions can be caused by small perturbations in the
input. Most existing works on adversarial image generation try to achieve
attacks against most models, while few of them make an effort to guarantee the
perceptual quality of the adversarial examples. High-quality adversarial
examples matter for many applications, especially for privacy preservation. In
this work, we develop a framework based on the Minimum Noticeable Difference
(MND) concept to generate adversarial privacy-preserving images that have
minimum perceptual difference from the clean ones but are still able to attack
deep learning models. To achieve this, an adversarial loss is first proposed so
that the adversarial images can successfully attack the deep learning models.
Then, a perceptual quality-preserving loss is developed by taking the magnitude
of the perturbation and the perturbation-caused structural and gradient changes
into account, aiming to preserve high perceptual quality in adversarial image
generation. To the best of our knowledge, this is the first work exploring
quality-preserving adversarial image generation based on the MND concept for
privacy preservation. To evaluate its performance in terms of perceptual
quality, deep models for image classification and face recognition are tested
with the proposed method and several anchor methods. Extensive experimental
results demonstrate that the proposed MND framework is capable of generating
adversarial images with remarkably better quality metrics (e.g., PSNR, SSIM,
and MOS) than those generated with the anchor methods.
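To make the two-loss design above concrete, here is a minimal sketch in PyTorch of how an adversarial loss can be optimized jointly with a quality-preserving loss that penalizes the perturbation magnitude together with perturbation-caused structural and gradient changes. The pooling-based structural term, the finite-difference gradient term, the weighting factor `lam`, and the optimizer settings are illustrative assumptions, not the exact formulation from the paper.

```python
# Hedged sketch: joint optimization of an adversarial loss and a perceptual
# quality-preserving loss (perturbation magnitude + structural change +
# gradient change). Loss weights and schedules are illustrative assumptions.
import torch
import torch.nn.functional as F

def gradient_map(x):
    """Horizontal/vertical finite differences as a simple image-gradient proxy."""
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return dx, dy

def quality_preserving_loss(x_adv, x_clean):
    """Penalize perturbation magnitude, structural change (local mean/variance
    differences as an SSIM-like proxy), and gradient change."""
    l_mag = torch.mean((x_adv - x_clean) ** 2)
    mu_a, mu_c = F.avg_pool2d(x_adv, 8), F.avg_pool2d(x_clean, 8)
    var_a = F.avg_pool2d(x_adv ** 2, 8) - mu_a ** 2
    var_c = F.avg_pool2d(x_clean ** 2, 8) - mu_c ** 2
    l_struct = torch.mean((mu_a - mu_c) ** 2) + torch.mean((var_a - var_c) ** 2)
    dxa, dya = gradient_map(x_adv)
    dxc, dyc = gradient_map(x_clean)
    l_grad = torch.mean((dxa - dxc) ** 2) + torch.mean((dya - dyc) ** 2)
    return l_mag + l_struct + l_grad

def generate_adversarial(model, x_clean, y_true, steps=200, lr=1e-2, lam=10.0):
    """Optimize a perturbation so the model misclassifies x_clean + delta while
    the quality-preserving loss keeps the image perceptually close to x_clean.
    x_clean: (N, C, H, W) batch of images in [0, 1]; y_true: true labels."""
    delta = torch.zeros_like(x_clean, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = torch.clamp(x_clean + delta, 0.0, 1.0)
        logits = model(x_adv)
        # adversarial loss: push the true-class probability down (untargeted)
        l_adv = -F.cross_entropy(logits, y_true)
        loss = l_adv + lam * quality_preserving_loss(x_adv, x_clean)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(x_clean + delta.detach(), 0.0, 1.0)
```

In practice one would check both that the predicted label actually changes and that PSNR/SSIM of the adversarial image against the clean one remain high, mirroring the evaluation described in the abstract.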
Related papers
- Rethinking and Defending Protective Perturbation in Personalized Diffusion Models [21.30373461975769]
We study the fine-tuning process of personalized diffusion models (PDMs) through the lens of shortcut learning.
PDMs are susceptible to minor adversarial perturbations, leading to significant degradation when fine-tuned on corrupted datasets.
We propose a systematic defense framework that includes data purification and contrastive decoupling learning.
arXiv Detail & Related papers (2024-06-27T07:14:14Z)
- Tailoring Adversarial Attacks on Deep Neural Networks for Targeted Class Manipulation Using DeepFool Algorithm [6.515472477685614]
The susceptibility of deep neural networks (DNNs) to adversarial attacks undermines their reliability across numerous applications.
We introduce the Enhanced Targeted DeepFool (ET DeepFool) algorithm, an evolution of DeepFool.
Our empirical investigations demonstrate the superiority of this refined approach in maintaining the integrity of images.
arXiv Detail & Related papers (2023-10-18T18:50:39Z)
- Counterfactual Image Generation for adversarially robust and interpretable Classifiers [1.3859669037499769]
We propose a unified framework leveraging image-to-image translation Generative Adversarial Networks (GANs) to produce counterfactual samples.
This is achieved by combining the classifier and discriminator into a single model that attributes real images to their respective classes and flags generated images as "fake" (a hedged sketch of such a combined head is given after this list).
We show how the model exhibits improved robustness to adversarial attacks, and we show how the discriminator's "fakeness" value serves as an uncertainty measure of the predictions.
arXiv Detail & Related papers (2023-10-01T18:50:29Z)
- MIRST-DM: Multi-Instance RST with Drop-Max Layer for Robust Classification of Breast Cancer [62.997667081978825]
We propose the Multi-instance RST with a drop-max layer, namely MIRST-DM, to learn smoother decision boundaries on small datasets.
The proposed approach was validated using a small breast ultrasound dataset with 1,190 images.
arXiv Detail & Related papers (2022-05-02T20:25:26Z)
- Deep Bayesian Image Set Classification: A Defence Approach against Adversarial Attacks [32.48820298978333]
Deep neural networks (DNNs) are susceptible to being fooled with high confidence by an adversary.
In practice, the vulnerability of deep learning systems against carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications.
We propose a robust deep Bayesian image set classification as a defence framework against a broad range of adversarial attacks.
arXiv Detail & Related papers (2021-08-23T14:52:44Z)
- Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks [104.8737334237993]
We present comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks.
For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints.
We show that unlike in image classification tasks, the performance degradation on image-to-image tasks can largely differ depending on various factors.
arXiv Detail & Related papers (2021-04-30T14:20:33Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks [154.31827097264264]
Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms.
We propose Dual Manifold Adversarial Training (DMAT) where adversarial perturbations in both latent and image spaces are used in robustifying the model.
Our DMAT improves performance on normal images, and achieves comparable robustness to the standard adversarial training against Lp attacks.
arXiv Detail & Related papers (2020-09-05T06:00:28Z)
- Face Anti-Spoofing Via Disentangled Representation Learning [90.90512800361742]
Face anti-spoofing is crucial to security of face recognition systems.
We propose a novel perspective of face anti-spoofing that disentangles the liveness features and content features from images.
arXiv Detail & Related papers (2020-08-19T03:54:23Z)
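Regarding the counterfactual image generation entry above (see the note there), the following is a hedged sketch of one common way to combine a classifier and a GAN discriminator into a single model: a network with K outputs for the real classes plus one extra output for "fake" images. The architecture and loss below are illustrative assumptions, not that paper's actual implementation.

```python
# Hedged sketch of a combined classifier/discriminator head: K outputs for the
# real classes plus one extra "fake" output, so the same network classifies
# real images and flags generated ones. Architecture and training details are
# illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassifierDiscriminator(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # num_classes real classes + 1 extra "fake" class
        self.head = nn.Linear(64, num_classes + 1)

    def forward(self, x):
        return self.head(self.features(x))

def joint_loss(model, x_real, y_real, x_generated, num_classes):
    """Real images are attributed to their classes; generated images are
    pushed toward the extra 'fake' class (index num_classes)."""
    fake_label = torch.full((x_generated.size(0),), num_classes,
                            dtype=torch.long, device=x_generated.device)
    return (F.cross_entropy(model(x_real), y_real)
            + F.cross_entropy(model(x_generated), fake_label))
```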