Deconstructing Classifiers: Towards A Data Reconstruction Attack Against
Text Classification Models
- URL: http://arxiv.org/abs/2306.13789v1
- Date: Fri, 23 Jun 2023 21:25:38 GMT
- Title: Deconstructing Classifiers: Towards A Data Reconstruction Attack Against
Text Classification Models
- Authors: Adel Elmahdy, Ahmed Salem
- Abstract summary: We propose a new targeted data reconstruction attack called the Mix And Match attack.
This work highlights the importance of considering the privacy risks associated with data reconstruction attacks in classification models.
- Score: 2.9735729003555345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural language processing (NLP) models have become increasingly popular in
real-world applications, such as text classification. However, they are
vulnerable to privacy attacks, including data reconstruction attacks that aim
to extract the data used to train the model. Most previous studies on data
reconstruction attacks have focused on LLMs, while classification models were
assumed to be more secure. In this work, we propose a new targeted data
reconstruction attack called the Mix And Match attack, which takes advantage of
the fact that most classification models are based on LLMs. The Mix And Match
attack uses the base model of the target model to generate candidate tokens and
then prunes them using the classification head. We extensively demonstrate the
effectiveness of the attack using both random and organic canaries. This work
highlights the importance of considering the privacy risks associated with data
reconstruction attacks in classification models and offers insights into
possible leakages.
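The attack outline above (propose candidate tokens with the base language model, then prune them with the target's classification head) can be sketched roughly as follows. This is a minimal illustration only, assuming a Hugging Face-style setup with a GPT-2 base model and a sequence-classification head standing in for the target classifier; the model names, the top-k/keep parameters, and the pruning criterion (the classification-head logit for an assumed target label) are illustrative assumptions, not the authors' exact procedure.
```python
import torch
from transformers import (AutoModelForCausalLM,
                          AutoModelForSequenceClassification, AutoTokenizer)

tok = AutoTokenizer.from_pretrained("gpt2")                        # assumed base model
base_lm = AutoModelForCausalLM.from_pretrained("gpt2")             # proposes candidate tokens
clf = AutoModelForSequenceClassification.from_pretrained("gpt2")   # stand-in for the target classifier

TARGET_LABEL = 1   # class the attacker believes the canary carries (assumed)
TOP_K = 50         # candidate next tokens proposed by the base LM per step
KEEP = 5           # candidates kept after pruning with the classification head

def extend(prefix_ids: torch.Tensor) -> torch.Tensor:
    """Propose TOP_K next tokens with the base LM, then keep the KEEP tokens
    whose continuation the classification head scores highest for TARGET_LABEL."""
    with torch.no_grad():
        next_logits = base_lm(prefix_ids).logits[0, -1]            # next-token scores
        candidates = torch.topk(next_logits, TOP_K).indices
        scores = []
        for token_id in candidates:
            seq = torch.cat([prefix_ids, token_id.view(1, 1)], dim=1)
            scores.append(clf(seq).logits[0, TARGET_LABEL].item())
    return candidates[torch.tensor(scores).topk(KEEP).indices]

# Usage: start from a known or guessed prefix and iteratively extend the most
# promising continuations (a beam-search-style loop would repeat this step).
prefix_ids = tok("The patient's record number is", return_tensors="pt").input_ids
print(tok.batch_decode(extend(prefix_ids).unsqueeze(1)))
```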
Related papers
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Inference Attacks Against Face Recognition Model without Classification Layers [2.775761045299829]
Face recognition (FR) has been applied to nearly every aspect of daily life, but it is always accompanied by the underlying risk of leaking private information.
In this work, we advocate a novel inference attack composed of two stages for practical FR models without a classification layer.
arXiv Detail & Related papers (2024-01-24T09:51:03Z)
- Model Stealing Attack against Recommender System [85.1927483219819]
Some adversarial attacks have successfully performed model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z)
- Beyond Labeling Oracles: What does it mean to steal ML models? [52.63413852460003]
Model extraction attacks are designed to steal trained models with only query access.
We investigate factors influencing the success of model extraction attacks.
Our findings urge the community to redefine the adversarial goals of ME attacks.
arXiv Detail & Related papers (2023-10-03T11:10:21Z)
- Boosting Model Inversion Attacks with Adversarial Examples [26.904051413441316]
We propose a new training paradigm for a learning-based model inversion attack that can achieve higher attack accuracy in a black-box setting.
First, we regularize the training process of the attack model with an added semantic loss function.
Second, we inject adversarial examples into the training data to increase the diversity of the class-related parts.
arXiv Detail & Related papers (2023-06-24T13:40:58Z)
- Membership Inference Attacks against Language Models via Neighbourhood Comparison [45.086816556309266]
Membership inference attacks (MIAs) aim to predict whether a data sample was present in the training data of a machine learning model or not.
Recent work has demonstrated that reference-based attacks, which compare model scores to those obtained from a reference model trained on similar data, can substantially improve the performance of MIAs; a minimal sketch of this reference-based score comparison appears after this list.
We investigate their performance in more realistic scenarios and find that they are highly fragile in relation to the data distribution used to train reference models.
arXiv Detail & Related papers (2023-05-29T07:06:03Z)
- Local Model Reconstruction Attacks in Federated Learning and their Uses [9.14750410129878]
Local model reconstruction attacks allow the adversary to trigger other classical attacks in a more effective way.
We propose a novel model-based attribute inference attack in federated learning leveraging the local model reconstruction attack.
Our work provides a new angle for designing powerful and explainable attacks to effectively quantify the privacy risk in FL.
arXiv Detail & Related papers (2022-10-28T15:27:03Z)
- Reconstructing Training Data with Informed Adversaries [30.138217209991826]
Given access to a machine learning model, can an adversary reconstruct the model's training data?
This work studies this question from the lens of a powerful informed adversary who knows all the training data points except one.
We show it is feasible to reconstruct the remaining data point in this stringent threat model.
arXiv Detail & Related papers (2022-01-13T09:19:25Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacks aim to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- Adversarial Imitation Attack [63.76805962712481]
A practical adversarial attack should require as little knowledge of the attacked model as possible.
Current substitute attacks need pre-trained models to generate adversarial examples.
In this study, we propose a novel adversarial imitation attack.
arXiv Detail & Related papers (2020-03-28T10:02:49Z)
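The reference-based membership inference scoring discussed in the "Membership Inference Attacks against Language Models via Neighbourhood Comparison" entry above can be sketched as follows. This illustrates only the reference-model baseline that the paper examines, not its neighbourhood-comparison method; the model choices, the averaged negative log-likelihood score, and the threshold are assumptions for illustration.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
target_model = AutoModelForCausalLM.from_pretrained("gpt2")           # model under attack (assumed)
reference_model = AutoModelForCausalLM.from_pretrained("distilgpt2")  # reference trained on similar data (assumed)

def nll(model, text: str) -> float:
    """Average negative log-likelihood of `text` under `model`."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        return model(ids, labels=ids).loss.item()

def membership_score(text: str) -> float:
    """Reference-based score: how much more likely the target model finds the
    sample than the reference model. Larger values suggest membership."""
    return nll(reference_model, text) - nll(target_model, text)

THRESHOLD = 0.5  # assumed; in practice calibrated on known non-member samples
sample = "Example sentence whose membership we want to test."
print("member" if membership_score(sample) > THRESHOLD else "non-member")
```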
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.