Adversary for Social Good: Leveraging Adversarial Attacks to Protect
Personal Attribute Privacy
- URL: http://arxiv.org/abs/2306.02488v1
- Date: Sun, 4 Jun 2023 21:40:23 GMT
- Title: Adversary for Social Good: Leveraging Adversarial Attacks to Protect
Personal Attribute Privacy
- Authors: Xiaoting Li, Lingwei Chen, Dinghao Wu
- Abstract summary: We leverage the inherent vulnerability of machine learning to adversarial attacks, and design a novel text-space Adversarial attack for Social Good, called Adv4SG.
Our method effectively degrades inference accuracy at lower computational cost across different attribute settings, which substantially mitigates the impact of inference attacks and thus provides strong user attribute privacy protection.
- Score: 14.395031313422214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media has drastically reshaped the world, allowing billions of people to engage in interactive environments where they conveniently create and share content with the public. Among these, text data (e.g., tweets, blogs) carries basic yet important social activities and provides a rich source of user-oriented information. While explicitly sensitive user data such as credentials are protected by all available means, the disclosure of personal private attributes (e.g., age, gender, location) through inference attacks is hard to avoid, especially as powerful natural language processing (NLP) techniques are effectively deployed to automate attribute inference from implicit text data. This puts users' attribute privacy at risk. To address this challenge, in this paper we leverage the inherent vulnerability of machine learning to adversarial attacks and design a novel text-space Adversarial attack for Social Good, called Adv4SG. In other words, we cast the problem of protecting personal attribute privacy as an adversarial attack formulation over social media text data to defend against NLP-based attribute inference attacks. More specifically, Adv4SG applies a sequence of word perturbations under given constraints such that the probed attribute can no longer be identified correctly. Different from prior works, we advance Adv4SG by accounting for social media properties and introducing cost-effective mechanisms to expedite attribute obfuscation over text data under the black-box setting. Extensive experiments on real-world social media datasets demonstrate that our method effectively degrades inference accuracy at lower computational cost across different attribute settings, which substantially mitigates the impact of inference attacks and thus achieves strong user attribute privacy protection.
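The abstract describes Adv4SG only at a high level: query a black-box attribute classifier and perturb words until the probed attribute is misclassified. The snippet below is a minimal sketch of that general idea in Python; the greedy substitution loop, the `predict_proba` interface, the synonym table, and the toy model are all illustrative assumptions and do not reproduce the paper's actual constraints or acceleration mechanisms.

```python
# Minimal sketch of black-box, word-level attribute obfuscation in the spirit
# of Adv4SG. Names and logic here are illustrative stand-ins, not the authors'
# implementation.
from typing import Callable, Dict, List


def obfuscate(
    text: str,
    true_attribute: int,
    predict_proba: Callable[[str], List[float]],  # black-box attribute model
    synonyms: Dict[str, List[str]],               # hypothetical substitution table
    max_changes: int = 3,                         # perturbation budget
) -> str:
    """Greedily substitute words so the probed attribute is no longer inferred."""
    words = text.split()
    changes = 0

    for i, word in enumerate(words):
        if changes >= max_changes:
            break
        candidates = synonyms.get(word.lower(), [])
        if not candidates:
            continue

        # Score each candidate by how much it lowers the classifier's
        # confidence in the true (probed) attribute; queries only, no gradients.
        best_word = word
        best_prob = predict_proba(" ".join(words))[true_attribute]
        for cand in candidates:
            trial = words[:i] + [cand] + words[i + 1:]
            prob = predict_proba(" ".join(trial))[true_attribute]
            if prob < best_prob:
                best_word, best_prob = cand, prob

        if best_word != word:
            words[i] = best_word
            changes += 1
            # Stop as soon as the attribute is misclassified.
            probs = predict_proba(" ".join(words))
            if probs.index(max(probs)) != true_attribute:
                break

    return " ".join(words)


if __name__ == "__main__":
    # Toy stand-in for an NLP attribute-inference model: it "infers" class 1
    # whenever the word "awesome" appears in the text.
    def toy_model(text: str) -> List[float]:
        return [0.1, 0.9] if "awesome" in text.lower() else [0.8, 0.2]

    print(obfuscate(
        "brunch downtown was awesome today",
        true_attribute=1,
        predict_proba=toy_model,
        synonyms={"awesome": ["lovely", "great"]},
    ))  # -> "brunch downtown was lovely today"
```

The defining feature of the black-box setting is that the defender only observes the classifier's output probabilities, never its gradients or parameters, so candidate perturbations are scored purely by querying the model.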
Related papers
- Learning Robust and Privacy-Preserving Representations via Information Theory [21.83308540799076]
We take the first step toward mitigating both security and privacy attacks while maintaining task utility.
We propose an information-theoretic framework to achieve the goals through the lens of representation learning.
arXiv Detail & Related papers (2024-12-15T05:51:48Z)
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence.
We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context.
We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Bayes-Nash Generative Privacy Against Membership Inference Attacks [24.330984323956173]
Membership inference attacks (MIAs) expose significant privacy risks by determining whether an individual's data is in a dataset.
We propose a game-theoretic framework that models privacy protection from MIA as a Bayesian game between a defender and an attacker.
We call the resulting defender's data-sharing policy Bayes-Nash Generative Privacy (BNGP).
arXiv Detail & Related papers (2024-10-09T20:29:04Z)
- IncogniText: Privacy-enhancing Conditional Text Anonymization via LLM-based Private Attribute Randomization [8.483679748399037]
We propose IncogniText, a technique that anonymizes the text to mislead a potential adversary into predicting a wrong private attribute value.
Our empirical evaluation shows a reduction of private attribute leakage by more than 90% across 8 different private attributes.
arXiv Detail & Related papers (2024-07-03T09:49:03Z)
- NAP^2: A Benchmark for Naturalness and Privacy-Preserving Text Rewriting by Learning from Human [55.20137833039499]
We suggest sanitizing sensitive text using two common strategies used by humans.
We curate the first corpus, coined NAP2, through both crowdsourcing and the use of large language models.
arXiv Detail & Related papers (2024-06-06T05:07:44Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Human intuition as a defense against attribute inference [4.916067949075847]
Attribute inference has become a major threat to privacy.
One way to tackle this threat is to strategically modify one's publicly available data in order to keep one's private information hidden from attribute inference.
We evaluate people's ability to perform this task, and compare it against algorithms designed for this purpose.
arXiv Detail & Related papers (2023-04-24T06:54:17Z)
- User-Centered Security in Natural Language Processing [0.7106986689736825]
This dissertation proposes a framework of user-centered security in Natural Language Processing (NLP).
It focuses on two security domains within NLP with great public interest.
arXiv Detail & Related papers (2023-01-10T22:34:19Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z)
- Attribute Inference Attack of Speech Emotion Recognition in Federated Learning Settings [56.93025161787725]
Federated learning (FL) is a distributed machine learning paradigm that coordinates clients to train a model collaboratively without sharing local data.
We propose an attribute inference attack framework that infers sensitive attribute information of the clients from shared gradients or model parameters.
We show that the attribute inference attack is achievable for SER systems trained using FL.
arXiv Detail & Related papers (2021-12-26T16:50:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.