PEAK: Explainable Privacy Assistant through Automated Knowledge
Extraction
- URL: http://arxiv.org/abs/2301.02079v2
- Date: Wed, 31 May 2023 15:55:58 GMT
- Title: PEAK: Explainable Privacy Assistant through Automated Knowledge
Extraction
- Authors: Gonul Ayci, Arzucan Özgür, Murat Şensoy, Pınar Yolum
- Abstract summary: This paper presents a privacy assistant for generating explanations for privacy decisions.
The generated explanations can be used by users to understand the recommendations of the privacy assistant.
We show how this can be realized by incorporating the generated explanations into a state-of-the-art privacy assistant.
- Score: 1.0609815608017064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the realm of online privacy, privacy assistants play a pivotal role in
empowering users to manage their privacy effectively. Although recent studies
have shown promising progress in tackling tasks such as privacy violation
detection and personalized privacy recommendations, a crucial aspect for
widespread user adoption is the capability of these systems to provide
explanations for their decision-making processes. This paper presents a privacy
assistant for generating explanations for privacy decisions. The privacy
assistant focuses on discovering latent topics, identifying explanation
categories, establishing explanation schemes, and generating automated
explanations. The generated explanations can be used by users to understand the
recommendations of the privacy assistant. Our user study on a real-world privacy
dataset of images shows that users find the generated explanations useful and
easy to understand. Additionally, the generated explanations can be used by
privacy assistants themselves to improve their decision-making. We show how
this can be realized by incorporating the generated explanations into a
state-of-the-art privacy assistant.
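To make the pipeline concrete, below is a minimal sketch of its first step, discovering latent topics, using LDA over image tags; the tag data, topic count, and explanation template are illustrative assumptions, not PEAK's actual explanation categories or schemes.

```python
# Minimal sketch: discover latent topics from image tags with LDA, then
# fill a template-based explanation from the dominant topic's top terms.
# The tags, topic count, and template are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Each "document" is the tag set of one image (hypothetical examples).
image_tags = [
    "beach family children vacation",
    "office desk laptop documents",
    "party friends drinks night",
    "passport ticket airport queue",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(image_tags)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)  # per-image topic distributions

terms = vectorizer.get_feature_names_out()
for tags, dist in zip(image_tags, doc_topics):
    top = dist.argmax()
    top_terms = [terms[i] for i in lda.components_[top].argsort()[::-1][:3]]
    # Hypothetical template-based explanation for a privacy label:
    print(f"'{tags}' -> relates to {', '.join(top_terms)}; "
          f"similar content is usually labeled private.")
```

In PEAK itself, the discovered topics are further mapped to explanation categories and schemes before an automated explanation is generated.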
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling the sensitive regions where differential privacy is applied.
Our method operates selectively on the data: non-sensitive spatio-temporal regions can be left without DP application, or differential privacy can be combined with other privacy techniques within data samples (a generic sketch of region-selective noising appears after this list).
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Models Matter: Setting Accurate Privacy Expectations for Local and Central Differential Privacy [14.40391109414476]
We design and evaluate new explanations of differential privacy for the local and central models.
We find that consequences-focused explanations in the style of privacy nutrition labels are a promising approach for setting accurate privacy expectations.
arXiv Detail & Related papers (2024-08-16T01:21:57Z)
- PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration [18.11846784025521]
PrivacyRestore is a plug-and-play method to protect the privacy of user inputs during inference.
We create three datasets, covering medical and legal domains, to evaluate the effectiveness of PrivacyRestore.
arXiv Detail & Related papers (2024-06-03T14:57:39Z)
- A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, little attention has been paid to privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Privacy-Preserving Matrix Factorization for Recommendation Systems using Gaussian Mechanism [2.84279467589473]
We propose a privacy-preserving recommendation system based on the differential privacy framework and matrix factorization.
Because differential privacy is a powerful and robust mathematical framework for designing privacy-preserving machine learning algorithms, it prevents adversaries from extracting sensitive user information (a generic sketch of the Gaussian mechanism appears after this list).
arXiv Detail & Related papers (2023-04-11T13:50:39Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We investigated how explainability might help to address the problem of end-user trust.
We created privacy explanations that aim to help to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- A Self-aware Personal Assistant for Making Personalized Privacy Decisions [3.988307519677766]
This paper proposes a personal assistant that uses deep learning to classify content based on its privacy label.
By factoring in the user's own understanding of privacy, such as risk factors or own labels, the personal assistant can personalize its recommendations per user.
arXiv Detail & Related papers (2022-05-13T10:15:04Z)
- SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments of self-supervised learning (SSL) have unleashed the untapped potential of the unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z)
- Privacy-Preserving Image Features via Adversarial Affine Subspace Embeddings [72.68801373979943]
Many computer vision systems require users to upload image features to the cloud for processing and storage.
We propose a new privacy-preserving feature representation.
Compared to the original features, our approach makes it significantly more difficult for an adversary to recover private information.
arXiv Detail & Related papers (2020-06-11T17:29:48Z)
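A generic sketch of the region-selective noising idea summarized in the Masked Differential Privacy entry above, assuming a pixel mask flags sensitive regions; the mask, clipping bound, and noise scale are hypothetical, not the paper's actual mechanism.

```python
# Generic sketch of selective ("masked") noising: perturb only pixels
# flagged as sensitive and leave the rest untouched. Mask, clipping
# bound, and noise scale are hypothetical assumptions.
import numpy as np

def selective_gaussian_noise(frame: np.ndarray, mask: np.ndarray,
                             clip: float = 1.0, sigma: float = 0.5) -> np.ndarray:
    """Add Gaussian noise only where mask is nonzero (sensitive pixels)."""
    clipped = np.clip(frame, 0.0, clip)  # bound per-pixel sensitivity
    noise = np.random.normal(0.0, sigma * clip, size=frame.shape)
    return np.where(mask.astype(bool), clipped + noise, frame)

# Usage: mark the top-left quadrant of a 4x4 frame as sensitive.
frame = np.random.rand(4, 4)
mask = np.zeros((4, 4)); mask[:2, :2] = 1
protected = selective_gaussian_noise(frame, mask)
```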
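And a generic sketch of the Gaussian mechanism referenced in the Privacy-Preserving Matrix Factorization entry: clip the gradient to bound its sensitivity, then add calibrated Gaussian noise before each update. The hyperparameters and noise-injection point are assumptions, not necessarily the authors' construction.

```python
# Generic sketch: matrix factorization with Gaussian-mechanism noise on
# the user-factor gradient. Hyperparameters are hypothetical.
import numpy as np

def dp_matrix_factorization(R, rank=2, steps=200, lr=0.01,
                            clip=1.0, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, rank))
    V = rng.normal(scale=0.1, size=(n_items, rank))
    observed = ~np.isnan(R)
    for _ in range(steps):
        err = np.where(observed, np.nan_to_num(R) - U @ V.T, 0.0)
        grad_U = -err @ V
        # Clip gradient norm to bound sensitivity, then add Gaussian
        # noise (the DP step).
        norm = np.linalg.norm(grad_U)
        grad_U = grad_U / max(1.0, norm / clip)
        grad_U += rng.normal(0.0, sigma * clip, size=grad_U.shape)
        U -= lr * grad_U
        V -= lr * (-err.T @ U)  # item factors updated without noise here
    return U, V

R = np.array([[5.0, np.nan, 1.0], [4.0, 1.0, np.nan]])
U, V = dp_matrix_factorization(R)
```

In a full differential privacy analysis, the noise scale sigma would be calibrated to the clipping bound, the number of iterations, and the target (epsilon, delta) via composition.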
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.