Reconstruction-Based Membership Inference Attacks are Easier on
Difficult Problems
- URL: http://arxiv.org/abs/2102.07762v1
- Date: Mon, 15 Feb 2021 18:57:22 GMT
- Title: Reconstruction-Based Membership Inference Attacks are Easier on
Difficult Problems
- Authors: Avital Shafran, Shmuel Peleg, Yedid Hoshen
- Abstract summary: We show that models with higher dimensional input and output are more vulnerable to membership inference attacks.
We propose using a novel predictability score that can be computed for each sample, and its computation does not require a training set.
Our membership error, obtained by subtracting the predictability score from the reconstruction error, is shown to achieve high MIA accuracy on an extensive number of benchmarks.
- Score: 36.13835940345486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Membership inference attacks (MIA) try to detect if data samples were used to
train a neural network model, e.g. to detect copyright abuses. We show that
models with higher dimensional input and output are more vulnerable to MIA, and
address in more detail models for image translation and semantic segmentation.
We show that reconstruction errors can lead to very effective MIAs, as
they are indicative of memorization. Unfortunately, reconstruction error alone
is less effective at discriminating between non-predictable images used in
training and easy-to-predict images that were never seen before. To overcome
this, we propose using a novel predictability score that can be computed for
each sample, and its computation does not require a training set. Our
membership error, obtained by subtracting the predictability score from the
reconstruction error, is shown to achieve high MIA accuracy on an extensive
number of benchmarks.
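The decision rule above reduces to a per-sample subtraction once a reconstruction error and a predictability score are available. Below is a minimal sketch of that rule; `model`, `predictability_score`, and the threshold `tau` are illustrative assumptions rather than the paper's exact implementation.

```python
# Minimal sketch of the membership decision rule described in the abstract.
# `model`, `predictability_score`, and the threshold `tau` are illustrative
# assumptions, not the paper's exact implementation.
import numpy as np

def reconstruction_error(model, x, y):
    """Per-sample reconstruction error, e.g. mean squared error between the
    model output and the target image (translated or segmented)."""
    y_hat = np.asarray(model(x))
    return float(np.mean((y_hat - np.asarray(y)) ** 2))

def membership_error(model, x, y, predictability_score):
    """Membership error = reconstruction error - predictability score.
    Subtracting the predictability score removes the part of the error that is
    explained by the sample simply being hard to predict."""
    return reconstruction_error(model, x, y) - predictability_score(x, y)

def is_member(model, x, y, predictability_score, tau=0.0):
    # A low membership error indicates memorization, i.e. a likely training member.
    return membership_error(model, x, y, predictability_score) < tau
```

Per the abstract, the predictability score is computed per sample and does not require a training set; how it is obtained is left abstract here.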
Related papers
- Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks [16.064233621959538]
We propose a query-efficient and computation-efficient MIA that directly re-leverages the original membership scores to mitigate the errors in difficulty calibration.
arXiv Detail & Related papers (2024-08-31T11:59:42Z)
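For context, difficulty calibration typically subtracts a per-sample difficulty estimate (for example, the average loss of reference models on the same sample) from the target model's membership score. The summary above suggests additionally re-using the raw score to offset calibration errors; the sketch below is only one plausible reading of that idea, and the combination rule and weight `lam` are assumptions.

```python
# Hedged illustration: difficulty calibration plus re-use of the raw
# membership score. The combination rule and the weight `lam` are
# assumptions, not the paper's exact formulation.
import numpy as np

def calibrated_score(target_loss, reference_losses):
    """Difficulty calibration: subtract the mean loss of reference models
    (a proxy for sample difficulty) from the target model's loss."""
    return target_loss - float(np.mean(reference_losses))

def combined_score(target_loss, reference_losses, lam=0.5):
    # Re-use the original (uncalibrated) score so that samples on which the
    # reference models are unreliable do not dominate the decision.
    return calibrated_score(target_loss, reference_losses) + lam * target_loss
```

Under this sign convention, lower scores indicate likely training members.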
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been considered a challenging property to encode in neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- Membership Inference Attacks on Diffusion Models via Quantile Regression [30.30033625685376]
We demonstrate a privacy vulnerability of diffusion models through a membership inference (MI) attack.
Our proposed MI attack learns quantile regression models that predict (a quantile of) the distribution of reconstruction loss on examples not used in training.
We show that our attack outperforms the prior state-of-the-art attack while being substantially less computationally expensive.
arXiv Detail & Related papers (2023-12-08T16:21:24Z)
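A compact way to read the method summarized above: fit a quantile regressor on examples known not to be in the training set, predicting a low quantile of the reconstruction loss from per-sample features, then flag samples whose observed loss falls below their predicted quantile. The feature extraction and the use of scikit-learn's gradient-boosted quantile regressor here are assumptions for illustration.

```python
# Hedged sketch of a quantile-regression membership inference attack.
# The regressor choice and the per-sample features are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_quantile_model(nonmember_features, nonmember_losses, q=0.05):
    """Predict the q-th quantile of the reconstruction loss that a
    non-member with the given features would incur."""
    model = GradientBoostingRegressor(loss="quantile", alpha=q)
    model.fit(nonmember_features, nonmember_losses)
    return model

def predict_membership(model, features, observed_losses):
    # A loss well below the predicted non-member quantile suggests the sample
    # was memorized during training.
    thresholds = model.predict(features)
    return np.asarray(observed_losses) < thresholds
```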
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
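The core mechanism can be pictured as keeping the training loss near an achievable target rather than driving it to zero, which narrows the member/non-member loss gap that MIAs exploit. The sketch below is a heavily simplified reading; the target `alpha` and the plain descent/ascent switch are assumptions, and the published method uses a more elaborate schedule.

```python
# Heavily simplified sketch of a relaxed-loss training step: descend while
# the batch loss is above a target alpha, otherwise gently ascend so the loss
# hovers near alpha instead of collapsing to zero.
import torch
import torch.nn.functional as F

def relaxed_loss_step(model, optimizer, x, y, alpha=0.5):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    if loss.item() > alpha:
        loss.backward()       # normal gradient descent toward the target
    else:
        (-loss).backward()    # gradient ascent to avoid over-fitting the batch
    optimizer.step()
    return loss.item()
```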
- Reconstructing Training Data with Informed Adversaries [30.138217209991826]
Given access to a machine learning model, can an adversary reconstruct the model's training data?
This work studies this question from the lens of a powerful informed adversary who knows all the training data points except one.
We show it is feasible to reconstruct the remaining data point in this stringent threat model.
arXiv Detail & Related papers (2022-01-13T09:19:25Z)
- Divide-and-Assemble: Learning Block-wise Memory for Unsupervised Anomaly Detection [40.778313918994996]
Reconstruction-based methods play an important role in unsupervised anomaly detection in images.
In this work, we interpret the reconstruction of an image as a divide-and-assemble procedure.
We achieve state-of-the-art performance on the challenging MVTec AD dataset.
arXiv Detail & Related papers (2021-07-28T01:14:32Z)
- CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
arXiv Detail & Related papers (2021-04-08T19:04:55Z)
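The two-stage recipe summarized above can be sketched as: extract features with a (self-supervised) encoder trained on normal data only, fit a simple generative one-class model such as a Gaussian on those features, and score test samples by their distance under that model. The encoder is assumed given, and the Gaussian stands in for the generative classifier; the actual cut-and-paste self-supervised objective is not shown.

```python
# Minimal sketch of "learned representations + generative one-class classifier".
# A Gaussian density over normal-only features stands in for the classifier.
import numpy as np

def fit_one_class_gaussian(normal_features):
    mu = normal_features.mean(axis=0)
    cov = np.cov(normal_features, rowvar=False)
    cov += 1e-6 * np.eye(cov.shape[0])        # regularize for invertibility
    return mu, np.linalg.inv(cov)

def anomaly_score(features, mu, cov_inv):
    # Squared Mahalanobis distance to the normal-data Gaussian: a large
    # distance means the sample is unlikely under normal data, i.e. anomalous.
    diff = features - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
```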
- Adversarial Examples for Unsupervised Machine Learning Models [71.81480647638529]
Adversarial examples causing evasive predictions are widely used to evaluate and improve the robustness of machine learning models.
We propose a framework of generating adversarial examples for unsupervised models and demonstrate novel applications to data augmentation.
arXiv Detail & Related papers (2021-03-02T17:47:58Z)
- Salvage Reusable Samples from Noisy Data for Robust Learning [70.48919625304]
We propose a reusable sample selection and correction approach, termed CRSSC, for coping with label noise when training deep fine-grained (FG) models with web images.
Our key idea is to additionally identify and correct reusable samples, and then leverage them together with clean examples to update the networks.
arXiv Detail & Related papers (2020-08-06T02:07:21Z)