DAMIA: Leveraging Domain Adaptation as a Defense against Membership
Inference Attacks
- URL: http://arxiv.org/abs/2005.08016v1
- Date: Sat, 16 May 2020 15:24:28 GMT
- Title: DAMIA: Leveraging Domain Adaptation as a Defense against Membership
Inference Attacks
- Authors: Hongwei Huang, Weiqi Luo, Guoqiang Zeng, Jian Weng, Yue Zhang, Anjia
Yang
- Abstract summary: We propose and implement DAMIA, leveraging Domain Adaptation (DA) as a defense against membership inference attacks.
Our observation is that DA obfuscates the dataset to be protected using another related dataset, and derives a model that internally extracts features from both datasets.
The model trained with DAMIA has a negligible impact on usability.
- Score: 22.10053473193636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Learning (DL) techniques allow one to train models from a dataset to
solve tasks. DL has attracted much interest given its impressive performance and
potential market value, but security issues remain among the most pressing
concerns. In particular, DL models may be prone to membership inference
attacks, where an attacker determines whether a given sample comes from the
training dataset. Efforts have been made to hinder such attacks, but
unfortunately they may incur major overhead or impair usability. In this
paper, we propose and implement DAMIA, leveraging Domain Adaptation (DA) as a
defense against membership inference attacks. Our observation is that during
the training process, DA obfuscates the dataset to be protected using another
related dataset, and derives a model that internally extracts features
from both datasets. Because the model is obfuscated, membership inference
fails, while the extracted features preserve usability. Extensive
experiments have been conducted to validate our intuition. The model trained
with DAMIA has a negligible impact on usability. Our experiments also
rule out factors that may hinder the performance of DAMIA, providing a
potential guideline for vendors and researchers to benefit from our solution in
a timely manner.
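To make the mechanism concrete, below is a minimal, hypothetical sketch of one common domain-adaptation recipe (MMD-style feature alignment in PyTorch). The network, the linear-kernel MMD, the roles assigned to the related and protected datasets, and the weighting factor `lam` are illustrative assumptions rather than the exact DAMIA training procedure; the point is only that the supervised loss is computed on the related dataset while the protected dataset contributes through a feature-alignment term, so the model is not fit directly to individual protected samples.

```python
# Hypothetical sketch: domain-adaptation-style training in which a related
# (labeled) dataset drives the supervised loss and the protected dataset only
# contributes through a feature-distribution alignment term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(              # shared feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, 128), nn.ReLU(),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        z = self.features(x)
        return self.classifier(z), z

def mmd_linear(z_a, z_b):
    # Linear-kernel MMD: squared distance between the batch feature means.
    return (z_a.mean(0) - z_b.mean(0)).pow(2).sum()

def train_step(model, opt, related_batch, protected_batch, lam=0.5):
    (x_rel, y_rel), (x_prot, _) = related_batch, protected_batch
    logits_rel, z_rel = model(x_rel)
    _, z_prot = model(x_prot)
    # Supervised loss on the related data plus alignment to the protected data.
    loss = F.cross_entropy(logits_rel, y_rel) + lam * mmd_linear(z_rel, z_prot)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In a full run this step would be iterated over paired loaders for the two datasets; which alignment loss to use (MMD, an adversarial domain discriminator, etc.) is an orthogonal design choice.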
Related papers
- No Query, No Access [50.18709429731724]
We introduce the Victim Data-based Adversarial Attack (VDBA), which operates using only victim texts.
To avoid requiring access to the victim model, we create a shadow dataset with publicly available pre-trained models and clustering methods.
Experiments on the Emotion and SST5 datasets show that VDBA outperforms state-of-the-art methods, achieving an ASR improvement of 52.08%.
arXiv Detail & Related papers (2025-05-12T06:19:59Z) - Parameter Matching Attack: Enhancing Practical Applicability of Availability Attacks [8.225819874406238]
We propose a novel availability attack termed the Parameter Matching Attack (PMA).
PMA is the first availability attack that works when only a portion of data can be perturbed.
We show that PMA outperforms existing methods, achieving significant model performance degradation when a part of the training data is perturbed.
arXiv Detail & Related papers (2024-07-02T17:15:12Z) - Adaptive Domain Inference Attack with Concept Hierarchy [4.772368796656325]
Most known model-targeted attacks assume attackers have learned the application domain or training data distribution.
Can removing the domain information from model APIs protect models from these attacks?
We show that the proposed adaptive domain inference attack (ADI) can still successfully estimate relevant subsets of training data.
arXiv Detail & Related papers (2023-12-22T22:04:13Z) - DAD++: Improved Data-free Test Time Adversarial Defense [12.606555446261668]
We propose a test-time Data-free Adversarial Defense (DAD) containing detection and correction frameworks.
We conduct a wide range of experiments and ablations on several datasets and network architectures to show the efficacy of our proposed approach.
Our DAD++ gives an impressive performance against various adversarial attacks with a minimal drop in clean accuracy.
arXiv Detail & Related papers (2023-09-10T20:39:53Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our proposed defense, MESAS, is the first that is robust against such strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Cluster-level pseudo-labelling for source-free cross-domain facial
expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - Robust Transferable Feature Extractors: Learning to Defend Pre-Trained
Networks Against White Box Adversaries [69.53730499849023]
We show that adversarial examples can be successfully transferred to another independently trained model to induce prediction errors.
We propose a deep learning-based pre-processing mechanism, which we refer to as a robust transferable feature extractor (RTFE).
arXiv Detail & Related papers (2022-09-14T21:09:34Z) - RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z) - Explaining Adversarial Vulnerability with a Data Sparsity Hypothesis [0.0]
Deep learning models are susceptible to adversarial attacks.
In this paper, we develop a training framework for DL models to learn such decision boundaries.
We measure adversarial robustness of the models trained using this training framework against well-known adversarial attacks.
arXiv Detail & Related papers (2021-03-01T06:04:31Z) - Curse or Redemption? How Data Heterogeneity Affects the Robustness of
Federated Learning [51.15273664903583]
Data heterogeneity has been identified as one of the key features of federated learning, but it is often overlooked when studying robustness to adversarial attacks.
This paper focuses on characterizing and understanding its impact on backdoor attacks in federated learning through comprehensive experiments using synthetic data and the LEAF benchmarks.
arXiv Detail & Related papers (2021-02-01T06:06:21Z) - Mitigating the Impact of Adversarial Attacks in Very Deep Networks [10.555822166916705]
Deep Neural Network (DNN) models have security-related vulnerabilities.
Data poisoning-enabled perturbation attacks are complex adversarial attacks that inject false data into models.
We propose an attack-agnostic defense method for mitigating their influence.
arXiv Detail & Related papers (2020-12-08T21:25:44Z) - Sampling Attacks: Amplification of Membership Inference Attacks by
Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike standard membership adversaries, works under the severe restriction of having no access to the victim model's scores (a label-only sketch of this setting appears after this list).
We show that a victim model that only publishes labels is still susceptible to sampling attacks, and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
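Purely to illustrate the label-only setting described in the sampling-attack entry above, the following hypothetical sketch scores a candidate sample by how often a black-box model's predicted label stays unchanged under small random perturbations, treating high stability as weak evidence of membership. The `query_label` interface, the perturbation scale, the number of queries, and the decision threshold are all assumptions for illustration, not the cited paper's exact procedure.

```python
# Hypothetical label-only membership signal via repeated perturbed queries.
# `query_label(x)` stands in for a black-box API returning only the predicted
# class for input x; it is an assumed interface, not a real library call.
import numpy as np

def membership_score(x, query_label, n_queries=64, sigma=0.05, rng=None):
    """Fraction of perturbed queries whose label matches the clean prediction."""
    rng = np.random.default_rng() if rng is None else rng
    base = query_label(x)
    stable = 0
    for _ in range(n_queries):
        x_noisy = x + sigma * rng.standard_normal(x.shape)
        if query_label(x_noisy) == base:
            stable += 1
    return stable / n_queries

def infer_membership(x, query_label, threshold=0.9):
    # Declare "member" when predictions are unusually stable around x.
    return membership_score(x, query_label) >= threshold
```

Output perturbation or differentially private training, as mentioned in the entry above, would blunt exactly this kind of stability signal.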
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all of the above) and is not responsible for any consequences of its use.