Human intuition as a defense against attribute inference
- URL: http://arxiv.org/abs/2304.11853v1
- Date: Mon, 24 Apr 2023 06:54:17 GMT
- Title: Human intuition as a defense against attribute inference
- Authors: Marcin Waniek, Navya Suri, Abdullah Zameek, Bedoor AlShebli, Talal Rahwan
- Abstract summary: Attribute inference has become a major threat to privacy.
One way to tackle this threat is to strategically modify one's publicly available data in order to keep one's private information hidden from attribute inference.
We evaluate people's ability to perform this task, and compare it against algorithms designed for this purpose.
- Score: 4.916067949075847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Attribute inference - the process of analyzing publicly available data in
order to uncover hidden information - has become a major threat to privacy,
given the recent technological leap in machine learning. One way to tackle this
threat is to strategically modify one's publicly available data in order to
keep one's private information hidden from attribute inference. We evaluate
people's ability to perform this task, and compare it against algorithms
designed for this purpose. We focus on three attributes: the gender of the
author of a piece of text, the country in which a set of photos was taken, and
the link missing from a social network. For each of these attributes, we find
that people's effectiveness is inferior to that of AI, especially when it comes
to hiding the attribute in question. Moreover, when people are asked to modify
the publicly available information in order to hide these attributes, they are
less likely to make high-impact modifications compared to AI. This suggests
that people are unable to recognize the aspects of the data that are critical
to an inference algorithm. Taken together, our findings highlight the
limitations of relying on human intuition to protect privacy in the age of AI,
and emphasize the need for algorithmic support to protect private information
from attribute inference.
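The paper itself contains no code; as a purely hypothetical illustration of the setup it describes, the sketch below trains a toy attribute-inference classifier on public text and scores a proposed modification by how much it lowers the classifier's confidence in the true attribute. The data, model choice, and function names are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch, not the authors' pipeline: a toy attribute-inference
# adversary plus a measure of how much a proposed edit hides the attribute.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed toy data: public texts whose private attribute value (0 or 1) is known.
texts = [
    "check out my new gaming setup and mechanical keyboard",
    "morning run along the river before work",
    "patch notes for the latest firmware release are out",
    "tried a new hiking trail today, legs are sore",
]
attribute = [0, 1, 0, 1]

# The attribute-inference adversary: a simple bag-of-words classifier.
adversary = make_pipeline(TfidfVectorizer(), LogisticRegression())
adversary.fit(texts, attribute)

def modification_impact(original: str, modified: str, true_attr: int) -> float:
    """Drop in the adversary's confidence in the true attribute after an edit."""
    p_before = adversary.predict_proba([original])[0][true_attr]
    p_after = adversary.predict_proba([modified])[0][true_attr]
    return p_before - p_after

# A person (or an algorithm) proposes an edit intended to hide the attribute.
print(modification_impact(
    "patch notes for the latest firmware release are out",
    "had a quiet day, nothing much to report",
    true_attr=0,
))
```

An algorithmic defence would search over candidate edits for the one maximising this drop, which is where the abstract reports AI making higher-impact modifications than people.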
Related papers
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence.
We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context.
We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- IDT: Dual-Task Adversarial Attacks for Privacy Protection [8.312362092693377]
Methods to protect privacy can involve using representations inside models that do not detect sensitive attributes.
We propose IDT, a method that analyses predictions made by auxiliary and interpretable models to identify which tokens are important to change.
We evaluate our method on different NLP datasets suitable for different tasks (a minimal sketch of the token-ranking idea follows this entry).
arXiv Detail & Related papers (2024-06-28T04:14:35Z)
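As a hypothetical rendering of the idea summarised in the entry above (not the authors' IDT implementation), the sketch below uses an interpretable auxiliary model to rank the tokens of a text by how strongly they are tied to the sensitive attribute, so those tokens can be prioritised for rewriting; the corpus, labels, and model choice are assumptions.

```python
# Hypothetical sketch, not the IDT implementation: rank tokens by the weight an
# interpretable auxiliary attribute classifier assigns to them.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Assumed toy corpus with known values of the sensitive attribute.
corpus = [
    "weekend hike photos from the alps",
    "new build of the compiler is finally out",
    "tried a pasta recipe from an old cookbook",
    "benchmarking the gpu kernels again tonight",
]
attr = [1, 0, 1, 0]

vec = CountVectorizer()
aux_model = LogisticRegression().fit(vec.fit_transform(corpus), attr)

def tokens_to_change(text: str, k: int = 3) -> list:
    """Tokens of `text` with the largest absolute weight in the auxiliary model."""
    weights = dict(zip(vec.get_feature_names_out(), aux_model.coef_[0]))
    tokens = {t for t in vec.build_analyzer()(text) if t in weights}
    return sorted(tokens, key=lambda t: abs(weights[t]), reverse=True)[:k]

print(tokens_to_change("posting more hike photos from the alps"))
```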
- TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for Video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z)
- Adversary for Social Good: Leveraging Adversarial Attacks to Protect Personal Attribute Privacy [14.395031313422214]
We leverage the inherent vulnerability of machine learning to adversarial attacks, and design a novel text-space Adversarial attack for Social Good, called Adv4SG.
Our method effectively degrades inference accuracy at lower computational cost across different attribute settings, substantially mitigating the impact of inference attacks and thereby protecting user attribute privacy (a minimal sketch of such a substitution attack follows this entry).
arXiv Detail & Related papers (2023-06-04T21:40:23Z)
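As a hedged illustration of the general idea behind a text-space privacy attack such as Adv4SG (this is not the paper's algorithm), the sketch below greedily applies word substitutions that most reduce a surrogate attribute classifier's confidence in the true, private attribute value; the `classifier` and `substitutions` inputs are assumed to be supplied by the caller.

```python
# Hypothetical sketch, not the Adv4SG algorithm: greedy word substitution that
# degrades a surrogate attribute classifier's confidence in the true attribute.
def perturb_for_privacy(text, true_attr, classifier, substitutions, max_edits=3):
    """
    classifier(text, attr) -> probability assigned to `attr` (assumed callable).
    substitutions: dict mapping a word to candidate replacement words.
    """
    words = text.split()
    for _ in range(max_edits):
        base = classifier(" ".join(words), true_attr)
        best_drop, best_edit = 0.0, None
        for i, word in enumerate(words):
            for candidate in substitutions.get(word, []):
                trial = words[:i] + [candidate] + words[i + 1:]
                drop = base - classifier(" ".join(trial), true_attr)
                if drop > best_drop:
                    best_drop, best_edit = drop, (i, candidate)
        if best_edit is None:        # no remaining substitution lowers confidence
            break
        i, candidate = best_edit
        words[i] = candidate         # apply the highest-impact substitution
    return " ".join(words)
```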
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject gradient norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Training privacy-preserving video analytics pipelines by suppressing features that reveal information about private attributes [40.31692020706419]
We consider an adversary with access to the features extracted by a deployed deep neural network, who uses these features to predict private attributes.
We modify the training of the network using a confusion loss that encourages the extraction of features that make it difficult for the adversary to accurately predict private attributes (a minimal sketch of such a loss follows this entry).
Results show that, compared to the original network, the proposed PrivateNet can reduce the leakage of private information from a state-of-the-art emotion recognition network by 2.88% for gender and by 13.06% for age group.
arXiv Detail & Related papers (2022-03-05T01:31:07Z)
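As a generic, hypothetical sketch of a confusion-loss setup (not the PrivateNet training code from the entry above), the fragment below trains a feature extractor and task head while pushing an adversary head's predictions of the private attribute towards the uniform distribution; the layer sizes, loss weight, and the alternating adversary update are illustrative assumptions.

```python
# Hypothetical sketch, not the PrivateNet code: task loss plus a confusion term
# that makes extracted features uninformative about a private attribute.
import torch
import torch.nn as nn
import torch.nn.functional as F

extractor = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # assumed feature extractor
task_head = nn.Linear(32, 7)   # e.g. emotion classes (assumed)
adv_head = nn.Linear(32, 2)    # private attribute, e.g. gender (assumed)
opt = torch.optim.Adam([*extractor.parameters(), *task_head.parameters()], lr=1e-3)
lam = 1.0                      # weight of the confusion term (assumed)

def train_step(x, y_task):
    feats = extractor(x)
    task_loss = F.cross_entropy(task_head(feats), y_task)
    # Confusion loss: cross-entropy between the adversary's prediction of the
    # private attribute and the uniform distribution over attribute values.
    adv_logp = F.log_softmax(adv_head(feats), dim=1)
    confusion = -adv_logp.mean(dim=1).mean()
    loss = task_loss + lam * confusion
    opt.zero_grad()
    loss.backward()
    opt.step()
    # The adversary head itself would be trained on true attribute labels in a
    # separate, alternating step (omitted here).
    return loss.item()
```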
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints (a minimal sketch of the dual update follows this entry).
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
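As a minimal, hypothetical sketch of a Lagrangian dual treatment of a fairness constraint (not the paper's implementation, and with the differential-privacy noise and clipping omitted), the fragment below minimises the task loss plus a multiplier times the constraint violation and updates the multiplier by dual ascent; the constraint form, threshold, and learning rates are assumptions.

```python
# Hypothetical sketch, not the paper's implementation: primal-dual step for a
# fairness-constrained classifier (DP-SGD noise/clipping omitted for brevity).
import torch
import torch.nn.functional as F

def lagrangian_step(model, opt, x, y, group, lam, eps=0.05, lam_lr=0.01):
    """`group` is a 0/1 tensor of group membership; both groups assumed present."""
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Assumed fairness constraint: gap in positive-prediction rates between the
    # two groups should stay below eps (a demographic-parity-style condition).
    p_pos = torch.softmax(logits, dim=1)[:, 1]
    gap = (p_pos[group == 0].mean() - p_pos[group == 1].mean()).abs()
    violation = gap - eps
    # Primal step: minimise task loss plus the Lagrangian penalty.
    opt.zero_grad()
    (task_loss + lam * violation).backward()
    opt.step()
    # Dual step: ascend on the multiplier, keeping it non-negative.
    return max(0.0, lam + lam_lr * violation.item())
```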
- More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.