GenAIPABench: A Benchmark for Generative AI-based Privacy Assistants
- URL: http://arxiv.org/abs/2309.05138v3
- Date: Tue, 19 Dec 2023 00:40:51 GMT
- Title: GenAIPABench: A Benchmark for Generative AI-based Privacy Assistants
- Authors: Aamir Hamid, Hemanth Reddy Samidi, Tim Finin, Primal Pappachan,
Roberto Yus
- Abstract summary: This study introduces GenAIPABench, a benchmark for evaluating Generative AI-based Privacy Assistants (GenAIPAs).
GenAIPABench includes: 1) A set of questions about privacy policies and data protection regulations, with annotated answers for various organizations and regulations; 2) Metrics to assess the accuracy, relevance, and consistency of responses; and 3) A tool for generating prompts to introduce privacy documents and varied privacy questions to test system robustness.
We evaluated three leading genAI systems (ChatGPT-4, Bard, and Bing AI) using GenAIPABench to gauge their effectiveness as GenAIPAs.
- Score: 1.2642388972233845
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Privacy policies of websites are often lengthy and intricate. Privacy
assistants help simplify policies and make them more accessible and
user-friendly. The emergence of generative AI (genAI) offers new opportunities
to build privacy assistants that can answer users' questions about privacy
policies. However, genAI's reliability is a concern due to its potential for
producing inaccurate information. This study introduces GenAIPABench, a
benchmark for evaluating Generative AI-based Privacy Assistants (GenAIPAs).
GenAIPABench includes: 1) A set of questions about privacy policies and data
protection regulations, with annotated answers for various organizations and
regulations; 2) Metrics to assess the accuracy, relevance, and consistency of
responses; and 3) A tool for generating prompts to introduce privacy documents
and varied privacy questions to test system robustness. We evaluated three
leading genAI systems (ChatGPT-4, Bard, and Bing AI) using GenAIPABench to
gauge their effectiveness as GenAIPAs. Our results demonstrate significant promise in
genAI capabilities in the privacy domain while also highlighting challenges in
managing complex queries, ensuring consistency, and verifying source accuracy.
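The abstract names accuracy, relevance, and consistency metrics but does not define them. As a purely illustrative sketch (the paper's actual metric definitions are not reproduced here), consistency across repeated answers to the same privacy question could be scored as mean pairwise token overlap:

```python
# Hypothetical consistency metric in the spirit of GenAIPABench.
# The benchmark's real metrics are not specified in this listing;
# token-level Jaccard similarity is an assumption for illustration.
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two responses."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def consistency(responses: list[str]) -> float:
    """Mean pairwise similarity across repeated answers to one question."""
    pairs = [(i, j) for i in range(len(responses))
             for j in range(i + 1, len(responses))]
    if not pairs:  # zero or one response: trivially consistent
        return 1.0
    return sum(jaccard(responses[i], responses[j])
               for i, j in pairs) / len(pairs)
```

A real benchmark would likely use semantic similarity (e.g., embedding cosine distance) rather than surface token overlap, but the aggregation over repeated queries would follow the same pattern.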
Related papers
- Privacy Risks and Preservation Methods in Explainable Artificial Intelligence: A Scoping Review [1.2744523252873352]
We conduct a scoping review of existing literature to elicit details on the conflict between privacy and explainability. We extracted 57 articles from 1,943 studies published from January 2019 to December 2024. We categorize the privacy risks and preservation methods in XAI and propose the characteristics of privacy-preserving explanations.
arXiv Detail & Related papers (2025-05-05T17:53:28Z) - Privacy Preservation in Gen AI Applications [0.0]
Large Language Models (LLMs) may unintentionally absorb and reveal Personally Identifiable Information (PII) from user interactions.
The intricacy of deep neural networks makes it difficult to trace or prevent the inadvertent storage and release of private information.
This study tackles these issues by detecting Generative AI weaknesses through attacks such as data extraction, model inversion, and membership inference.
It ensures privacy without sacrificing functionality by using methods to identify, alter, or remove PII before it is passed to LLMs.
arXiv Detail & Related papers (2025-04-12T06:19:37Z) - "So what if I used GenAI?" -- Implications of Using Cloud-based GenAI in Software Engineering Research [0.0]
This paper sheds light on the various research aspects in which GenAI is used, thus raising awareness of its legal implications to novice and budding researchers.
We summarize key aspects regarding our current knowledge that every software researcher involved in using GenAI should be aware of to avoid critical mistakes that may expose them to liability claims.
arXiv Detail & Related papers (2024-12-10T06:18:15Z) - Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI [1.325665193924634]
This paper takes a critical approach towards examining the intricacies of these issues within AI companion services.
We analyze articles from public media about the company and its practices to gain insight into the trustworthiness of information provided in the policy.
The results reveal that, despite privacy notices, data collection practices might harvest personal data without users' full awareness.
arXiv Detail & Related papers (2024-11-07T07:36:19Z) - PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z) - How to build trust in answers given by Generative AI for specific, and vague, financial questions [0.0]
Building trust for consumers is different when they ask a specific financial question in comparison to a vague one.
Four ways to build trust in both scenarios are (1) human oversight and being in the loop, (2) transparency and control, (3) accuracy and usefulness, and (4) ease of use and support.
arXiv Detail & Related papers (2024-08-26T19:26:48Z) - Privacy Checklist: Privacy Violation Detection Grounding on Contextual Integrity Theory [43.12744258781724]
We formulate the privacy issue as a reasoning problem rather than simple pattern matching.
We develop the first comprehensive checklist that covers social identities, private attributes, and existing privacy regulations.
arXiv Detail & Related papers (2024-08-19T14:48:04Z) - Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z) - Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academic and industrial fields.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection/ acquirement.
arXiv Detail & Related papers (2024-05-17T04:00:58Z) - The Ethics of Advanced AI Assistants [53.89899371095332]
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z) - A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, little attention has been paid to privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z) - Security and Privacy on Generative Data in AIGC: A Survey [17.456578314457612]
We review the security and privacy on generative data in AIGC.
We highlight successful state-of-the-art countermeasures in terms of the foundational properties of privacy, controllability, authenticity, and compliance.
arXiv Detail & Related papers (2023-09-18T02:35:24Z) - Challenges and Remedies to Privacy and Security in AIGC: Exploring the Potential of Privacy Computing, Blockchain, and Beyond [17.904983070032884]
We review the concept, classification and underlying technologies of AIGC.
We discuss the privacy and security challenges faced by AIGC from multiple perspectives.
arXiv Detail & Related papers (2023-06-01T07:49:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.