Privacy Meets Explainability: A Comprehensive Impact Benchmark
- URL: http://arxiv.org/abs/2211.04110v1
- Date: Tue, 8 Nov 2022 09:20:28 GMT
- Title: Privacy Meets Explainability: A Comprehensive Impact Benchmark
- Authors: Saifullah Saifullah, Dominique Mercier, Adriano Lucieri, Andreas
Dengel, Sheraz Ahmed
- Abstract summary: This work is the first to investigate the impact of private learning techniques on generated explanations for Deep Learning-based models.
The findings suggest non-negligible changes in explanations through the introduction of privacy.
- Score: 4.526582372434088
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Since the mid-2010s, the era of Deep Learning (DL) has continued to this day,
bringing forth new superlatives and innovations each year. Nevertheless, the
speed with which these innovations translate into real applications lags behind
this fast pace. Safety-critical applications, in particular, are subject to strict
regulatory and ethical requirements that must be met and remain active areas of
debate. eXplainable AI (XAI) and privacy-preserving
machine learning (PPML) are both crucial research fields, aiming at mitigating
some of the drawbacks of prevailing data-hungry black-box models in DL. Despite
brisk research activity in the respective fields, no attention has yet been
paid to their interaction. This work is the first to investigate the impact of
private learning techniques on generated explanations for DL-based models. In
an extensive experimental analysis covering various image and time series
datasets from multiple domains, as well as varying privacy techniques, XAI
methods, and model architectures, the effects of private training on generated
explanations are studied. The findings suggest that the introduction of privacy
leads to non-negligible changes in the generated explanations. Apart from reporting
individual effects of PPML on XAI, the paper gives clear recommendations for
the choice of techniques in real applications. By unveiling the
interdependencies of these pivotal technologies, this work is a first step
towards overcoming the remaining hurdles for practically applicable AI in
safety-critical domains.
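To make the benchmark setup concrete, the following is a minimal sketch (an assumption, not the authors' code) of how such a comparison might be carried out: it takes one conventionally trained classifier and one trained with a privacy technique such as DP-SGD (e.g., via Opacus), computes vanilla gradient saliency maps for both, and quantifies explanation drift with cosine similarity and top-k feature overlap. The helper names (`saliency`, `explanation_shift`) and the toy models are illustrative, not from the paper.
```python
# Minimal illustrative sketch (assumption, not the paper's benchmark code):
# compare gradient-based saliency explanations of a non-private model with
# those of a privately trained model (e.g., one trained with DP-SGD).
import torch
import torch.nn as nn
import torch.nn.functional as F

def saliency(model, x, target):
    """Vanilla gradient saliency: |d(logit_target) / d(input)|."""
    model.eval()
    x = x.detach().clone().requires_grad_(True)
    logits = model(x)
    logits[torch.arange(x.size(0)), target].sum().backward()
    return x.grad.abs()

def explanation_shift(model_plain, model_private, x, target, k=100):
    """Quantify how much explanations differ between two models."""
    sal_a = saliency(model_plain, x, target).flatten(1)
    sal_b = saliency(model_private, x, target).flatten(1)
    cos = F.cosine_similarity(sal_a, sal_b, dim=1).mean().item()
    # Fraction of the k most important input features shared by both maps.
    top_a = sal_a.topk(k, dim=1).indices
    top_b = sal_b.topk(k, dim=1).indices
    overlap = sum(
        len(set(a.tolist()) & set(b.tolist())) / k
        for a, b in zip(top_a, top_b)
    ) / x.size(0)
    return cos, overlap

if __name__ == "__main__":
    # Hypothetical stand-ins for a non-private and a DP-trained classifier.
    def make_model():
        return nn.Sequential(nn.Flatten(),
                             nn.Linear(28 * 28, 64), nn.ReLU(),
                             nn.Linear(64, 10))

    model_plain, model_private = make_model(), make_model()
    x = torch.rand(8, 1, 28, 28)          # dummy image batch
    y = torch.randint(0, 10, (8,))        # dummy target classes
    cos, overlap = explanation_shift(model_plain, model_private, x, y)
    print(f"cosine similarity: {cos:.3f}, top-k overlap: {overlap:.3f}")
```
The paper's actual benchmark reports this kind of comparison across multiple XAI methods, privacy techniques, datasets, and architectures; the sketch only illustrates the core idea of measuring how private training shifts attributions.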
Related papers
- A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks [42.3024294376025]
Machine unlearning is a research hotspot in the field of privacy protection.
Recent research has found potential privacy leakages in various machine unlearning approaches.
We analyze privacy risks in various aspects, including definitions, implementation methods, and real-world applications.
arXiv Detail & Related papers (2024-06-10T11:31:04Z) - Large Language Models: A New Approach for Privacy Policy Analysis at Scale [1.7570777893613145]
This research proposes the application of Large Language Models (LLMs) as an alternative for effectively and efficiently extracting privacy practices from privacy policies at scale.
We leverage well-known LLMs such as ChatGPT and Llama 2, and offer guidance on the optimal design of prompts, parameters, and models.
Evaluated on several well-known benchmark datasets in the domain, the approach achieves strong performance, with an F1 score exceeding 93%.
arXiv Detail & Related papers (2024-05-31T15:12:33Z) - Evaluating the Effectiveness of Video Anomaly Detection in the Wild: Online Learning and Inference for Real-world Deployment [2.1374208474242815]
Video Anomaly Detection (VAD) identifies unusual activities in video streams, a key technology with broad applications ranging from surveillance to healthcare.
Tackling VAD in real-life settings poses significant challenges due to the dynamic nature of human actions, environmental variations, and domain shifts.
Online learning is a potential strategy to mitigate this issue by allowing models to adapt to new information continuously.
arXiv Detail & Related papers (2024-04-29T14:47:32Z) - The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z) - A Closer Look at the Limitations of Instruction Tuning [52.587607091917214]
We show that Instruction Tuning (IT) fails to enhance knowledge or skills in large language models (LLMs).
We also show that popular methods to improve IT do not lead to performance improvements over a simple LoRA fine-tuned model.
Our findings reveal that responses generated solely from pre-trained knowledge consistently outperform responses by models that learn any form of new knowledge from IT on open-source datasets.
arXiv Detail & Related papers (2024-02-03T04:45:25Z) - PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z) - A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - Your Room is not Private: Gradient Inversion Attack on Reinforcement
Learning [47.96266341738642]
Privacy emerges as a pivotal concern within the realm of embodied AI, as the robot accesses substantial personal information.
This paper proposes an attack on the value-based algorithm and the gradient-based algorithm, utilizing gradient inversion to reconstruct states, actions, and supervision signals.
arXiv Detail & Related papers (2023-06-15T16:53:26Z) - Just Label What You Need: Fine-Grained Active Selection for Perception
and Prediction through Partially Labeled Scenes [78.23907801786827]
We introduce generalizations that ensure that our approach is both cost-aware and allows for fine-grained selection of examples through partially labeled scenes.
Our experiments on a real-world, large-scale self-driving dataset suggest that fine-grained selection can improve the performance across perception, prediction, and downstream planning tasks.
arXiv Detail & Related papers (2021-04-08T17:57:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.