A Privacy-Preserving Walk in the Latent Space of Generative Models for
Medical Applications
- URL: http://arxiv.org/abs/2307.02984v1
- Date: Thu, 6 Jul 2023 13:35:48 GMT
- Title: A Privacy-Preserving Walk in the Latent Space of Generative Models for
Medical Applications
- Authors: Matteo Pennisi, Federica Proietto Salanitri, Giovanni Bellitto, Simone
Palazzo, Ulas Bagci, Concetto Spampinato
- Abstract summary: Generative Adversarial Networks (GANs) have demonstrated their ability to generate synthetic samples that match a target distribution.
GANs tend to embed near-duplicates of real samples in the latent space.
We propose a latent space navigation strategy able to generate diverse synthetic samples that may support effective training of deep models.
- Score: 11.39717289910264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) have demonstrated their ability to
generate synthetic samples that match a target distribution. However, from a
privacy perspective, using GANs as a proxy for data sharing is not a safe
solution, as they tend to embed near-duplicates of real samples in the latent
space. Recent works, inspired by k-anonymity principles, address this issue
through sample aggregation in the latent space, with the drawback of reducing
the dataset by a factor of k. Our work aims to mitigate this problem by
proposing a latent space navigation strategy able to generate diverse synthetic
samples that may support effective training of deep models, while addressing
privacy concerns in a principled way. Our approach leverages an auxiliary
identity classifier as a guide to non-linearly walk between points in the
latent space, minimizing the risk of collision with near-duplicates of real
samples. We empirically demonstrate that, given any random pair of points in
the latent space, our walking strategy is safer than linear interpolation. We
then test our path-finding strategy combined with k-same methods and demonstrate,
on two benchmarks for tuberculosis and diabetic retinopathy classification,
that training a model on samples generated by our approach mitigates the drop in
performance while preserving privacy.
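The core mechanism can be illustrated with a short sketch (an assumption-laden reconstruction, not the authors' code): a pretrained generator `G` and a frozen auxiliary identity classifier `id_clf` are assumed, and the walk from latent point `z_a` to `z_b` is repeatedly deflected away from regions where the identity classifier is confident, i.e. away from near-duplicates of real samples.

```python
import torch

def private_walk(G, id_clf, z_a, z_b, n_steps=64, guide_weight=0.5, lr=0.05):
    """Hypothetical classifier-guided walk from z_a to z_b in latent space.

    G      : pretrained generator, image = G(z)        (assumed interface)
    id_clf : auxiliary identity classifier over G(z)   (assumed interface)
    Returns intermediate latent codes that avoid regions where the identity
    classifier is confident, i.e. near-duplicates of real training samples.
    """
    z = z_a.clone()
    path = [z.detach().clone()]
    for t in range(1, n_steps + 1):
        z = z.detach().requires_grad_(True)
        # Confidence that G(z) matches some real identity (max softmax prob).
        probs = torch.softmax(id_clf(G(z)), dim=-1)
        identity_risk = probs.max(dim=-1).values.sum()
        grad = torch.autograd.grad(identity_risk, z)[0]
        with torch.no_grad():
            # Linear pull toward the target endpoint ...
            pull = (z_b - z) / (n_steps - t + 1)
            # ... deflected away from high-identity-confidence regions.
            z = z + pull - guide_weight * lr * grad
        path.append(z.detach().clone())
    return path
```

Decoding the intermediate codes with `G` turns each pair of endpoints into several diverse synthetic images, which is how the approach aims to avoid the k-fold dataset shrinkage of plain k-same aggregation.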
Related papers
- Mitigating Feature Gap for Adversarial Robustness by Feature Disentanglement [61.048842737581865]
Adversarial fine-tuning methods aim to enhance adversarial robustness by fine-tuning a naturally pre-trained model in an adversarial manner.
We propose a disentanglement-based approach to explicitly model and remove the latent features that cause the feature gap.
Empirical evaluations on three benchmark datasets demonstrate that our approach surpasses existing adversarial fine-tuning methods and adversarial training baselines.
arXiv Detail & Related papers (2024-01-26T08:38:57Z)
- Robustness Against Adversarial Attacks via Learning Confined Adversarial Polytopes [0.0]
Deep neural networks (DNNs) can be deceived by human-imperceptible perturbations added to clean samples.
In this paper, we aim to train robust DNNs by limiting the set of outputs reachable via a norm-bounded perturbation added to a clean sample.
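One hedged reading of that idea as a training penalty (illustrative only; the paper's actual formulation may differ): keep the logits reachable under any norm-bounded perturbation close to the clean logits, here approximated with a few random directions.

```python
import torch
import torch.nn.functional as F

def confinement_loss(model, x, y, eps=8 / 255, n_dirs=4, lam=1.0):
    """Illustrative loss, not the paper's exact formulation: classify clean
    inputs correctly while shrinking the set of logits reachable within an
    L-inf ball of radius eps, approximated by random perturbations."""
    clean_logits = model(x)
    loss = F.cross_entropy(clean_logits, y)
    for _ in range(n_dirs):
        delta = torch.empty_like(x).uniform_(-eps, eps)
        pert_logits = model(x + delta)
        # Penalize how far perturbed logits can move from the clean logits,
        # i.e. confine the "adversarial polytope" of reachable outputs.
        loss = loss + lam * F.mse_loss(pert_logits, clean_logits) / n_dirs
    return loss
```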
arXiv Detail & Related papers (2024-01-15T22:31:15Z)
- Diffusion-Based Adversarial Sample Generation for Improved Stealthiness and Controllability [62.105715985563656]
We propose a novel framework dubbed Diffusion-Based Projected Gradient Descent (Diff-PGD) for generating realistic adversarial samples.
Our framework can be easily customized for specific tasks such as digital attacks, physical-world attacks, and style-based attacks.
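A rough sketch of what a diffusion-guided PGD loop could look like (not the official Diff-PGD implementation; `sde_edit` is a placeholder for a pretrained diffusion model's noise-and-denoise purification pass):

```python
import torch
import torch.nn.functional as F

def diff_pgd_attack(classifier, sde_edit, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Rough sketch of a diffusion-guided PGD loop (not the official Diff-PGD code).

    sde_edit : placeholder for an SDEdit-style pass through a pretrained
               diffusion model, which projects the iterate back toward the
               natural image manifold so the adversarial sample stays realistic.
    """
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(classifier(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # untargeted PGD step
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # stay in the eps ball
            x_adv = sde_edit(x_adv).clamp(0, 1)        # diffusion purification
    return x_adv.detach()
```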
arXiv Detail & Related papers (2023-05-25T21:51:23Z)
- Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degrade when training data differ from testing data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z)
- StyleGenes: Discrete and Efficient Latent Distributions for GANs [149.0290830305808]
We propose a discrete latent distribution for Generative Adversarial Networks (GANs).
Instead of drawing latent vectors from a continuous prior, we sample from a finite set of learnable latents.
We take inspiration from the encoding of information in biological organisms.
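A toy illustration of such a discrete latent distribution (names and sizes are made up for this sketch): each latent vector is assembled by picking one learnable "variant" per "gene" position rather than sampling from a continuous Gaussian prior.

```python
import torch
import torch.nn as nn

class DiscreteLatentBank(nn.Module):
    """Toy gene-inspired discrete latent distribution: the latent vector is
    assembled from learnable 'variants' chosen independently per 'gene'
    position. Names and sizes are illustrative, not the paper's."""

    def __init__(self, n_genes=32, n_variants=16, dim_per_gene=16):
        super().__init__()
        # Learnable bank of variants: (genes, variants, dim per gene).
        self.variants = nn.Parameter(torch.randn(n_genes, n_variants, dim_per_gene))

    def sample(self, batch_size):
        n_genes, n_variants, _ = self.variants.shape
        # Independently pick one variant per gene for every sample.
        idx = torch.randint(n_variants, (batch_size, n_genes))
        picked = self.variants[torch.arange(n_genes), idx]   # (batch, genes, dim)
        return picked.flatten(1)                              # (batch, genes * dim)

bank = DiscreteLatentBank()
z = bank.sample(4)   # feed z to the generator in place of a Gaussian sample
```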
arXiv Detail & Related papers (2023-04-30T23:28:46Z)
- Beyond Empirical Risk Minimization: Local Structure Preserving Regularization for Improving Adversarial Robustness [28.853413482357634]
We propose a novel Local Structure Preserving (LSP) regularization, which aims to preserve the local structure of the input space in the learned embedding space.
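One plausible form of such a regularizer, sketched under the assumption that "local structure" means matching in-batch nearest-neighbour distances (the paper's exact loss may differ):

```python
import torch

def lsp_regularizer(x, z, k=5):
    """Illustrative local-structure-preserving penalty (not the paper's exact
    loss): for each sample, match embedding-space distances to input-space
    distances over its k nearest in-batch neighbours."""
    x_flat = x.flatten(1)
    d_in = torch.cdist(x_flat, x_flat)          # pairwise input distances
    d_emb = torch.cdist(z, z)                   # pairwise embedding distances
    # Indices of the k nearest neighbours in input space (excluding self).
    knn = d_in.topk(k + 1, largest=False).indices[:, 1:]
    d_in_nn = d_in.gather(1, knn)
    d_emb_nn = d_emb.gather(1, knn)
    # Penalize distortion of the local neighbourhood geometry.
    return ((d_in_nn - d_emb_nn) ** 2).mean()
```

A real implementation would likely normalize the two distance scales, or compare neighbourhood rankings rather than raw distances.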
arXiv Detail & Related papers (2023-03-29T17:18:58Z)
- On the Privacy Properties of GAN-generated Samples [12.765060550622422]
We show that GAN-generated samples inherently satisfy some (weak) privacy guarantees.
We also study the robustness of GAN-generated samples to membership inference attacks.
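For context, a minimal membership-inference baseline against released GAN samples might look like the following (an illustrative attack, not the one analyzed in the paper):

```python
import numpy as np

def nn_distance_attack(candidates, generated, threshold):
    """Toy membership-inference attack on published GAN samples: a candidate
    whose nearest generated sample is unusually close is predicted to have
    been in the training set. Threshold calibration is left to the caller."""
    c = candidates.reshape(len(candidates), -1)
    g = generated.reshape(len(generated), -1)
    # Distance from every candidate to its nearest generated sample.
    d = np.sqrt(((c[:, None, :] - g[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d < threshold   # True -> predicted member
```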
arXiv Detail & Related papers (2022-06-03T00:29:35Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
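A simplified stand-in for that pipeline (using a diagonal Gaussian where the paper uses a normalizing flow) could be:

```python
import numpy as np

class ActivationDensityMonitor:
    """Simplified stand-in for the DAAIN idea: fit a density model on hidden
    activations of in-distribution data and flag inputs whose activations are
    unlikely. A full implementation would use a normalizing flow; a diagonal
    Gaussian is used here purely for illustration."""

    def fit(self, activations):                 # (n_samples, n_features)
        self.mu = activations.mean(axis=0)
        self.var = activations.var(axis=0) + 1e-6
        return self

    def log_prob(self, activations):
        return -0.5 * (((activations - self.mu) ** 2) / self.var
                       + np.log(2 * np.pi * self.var)).sum(axis=1)

    def is_suspicious(self, activations, threshold):
        # OOD or adversarial inputs tend to fall in low-density regions.
        return self.log_prob(activations) < threshold
```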
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- Generating Out of Distribution Adversarial Attack using Latent Space Poisoning [5.1314136039587925]
We propose a novel mechanism of generating adversarial examples where the actual image is not corrupted.
The latent space representation is used to tamper with the inherent structure of the image.
In contrast to gradient-based attacks, latent space poisoning exploits the classifier's reliance on modeling the training data as independent and identically distributed.
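An illustrative version of such an attack, assuming a pretrained encoder/decoder pair for image inversion (hypothetical names, not the paper's code):

```python
import torch
import torch.nn.functional as F

def latent_poison(encoder, decoder, classifier, x, y, steps=20, alpha=0.01):
    """Illustrative latent-space attack (not the paper's exact procedure):
    instead of adding pixel noise, the image's latent code is nudged so the
    decoded image changes structurally and the classifier is fooled.
    encoder/decoder are assumed to be a pretrained autoencoder or GAN inversion."""
    z = encoder(x).detach()
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        loss = F.cross_entropy(classifier(decoder(z)), y)
        grad = torch.autograd.grad(loss, z)[0]
        with torch.no_grad():
            z = z + alpha * grad.sign()        # move the latent to raise the loss
    return decoder(z).detach()                 # decoded, structurally altered sample
```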
arXiv Detail & Related papers (2020-12-09T13:05:44Z)
- Regularization with Latent Space Virtual Adversarial Training [4.874780144224057]
Virtual Adversarial Training (VAT) has shown impressive results among recently developed regularization methods.
We propose LVAT, which injects perturbation in the latent space instead of the input space.
LVAT can generate adversarial samples flexibly, resulting in more adverse effects and thus more effective regularization.
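A sketch of the latent-space variant of VAT, simplified to a single power-iteration step and with assumed `encoder`/`decoder` modules (not the paper's implementation):

```python
import torch
import torch.nn.functional as F

def lvat_loss(encoder, decoder, classifier, x, xi=1e-2, eps=1.0):
    """Sketch of latent-space virtual adversarial training, simplified to one
    power-iteration step; the perturbation is injected into the latent code
    rather than the pixels."""
    with torch.no_grad():
        z = encoder(x)
        p_clean = F.softmax(classifier(decoder(z)), dim=-1)
    # Estimate the latent direction that most changes the prediction.
    d = torch.randn_like(z)
    d = xi * d / d.flatten(1).norm(dim=1).view(-1, *([1] * (d.dim() - 1)))
    d.requires_grad_(True)
    p_pert = F.log_softmax(classifier(decoder(z + d)), dim=-1)
    adv_dist = F.kl_div(p_pert, p_clean, reduction="batchmean")
    grad = torch.autograd.grad(adv_dist, d)[0]
    norm = grad.flatten(1).norm(dim=1).view(-1, *([1] * (grad.dim() - 1)))
    r_adv = eps * grad / norm.clamp_min(1e-12)
    # Consistency loss: the prediction should not change under the latent perturbation.
    p_adv = F.log_softmax(classifier(decoder(z + r_adv.detach())), dim=-1)
    return F.kl_div(p_adv, p_clean, reduction="batchmean")
```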
arXiv Detail & Related papers (2020-11-26T08:51:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.