Experts-in-the-Loop: Establishing an Effective Workflow in Crafting
Privacy Q&A
- URL: http://arxiv.org/abs/2311.11161v1
- Date: Sat, 18 Nov 2023 20:32:59 GMT
- Title: Experts-in-the-Loop: Establishing an Effective Workflow in Crafting
Privacy Q&A
- Authors: Zahra Kolagar, Anna Katharina Leschanowsky, Birgit Popp
- Abstract summary: We propose a dynamic workflow for transforming privacy policies into privacy question-and-answer (Q&A) pairs.
Thereby, we facilitate interdisciplinary collaboration among legal experts and conversation designers.
Our proposed workflow underscores continuous improvement and monitoring throughout the construction of privacy Q&As.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Privacy policies play a vital role in safeguarding user privacy as legal
jurisdictions worldwide emphasize the need for transparent data processing.
While the suitability of privacy policies to enhance transparency has been
critically discussed, employing conversational AI systems presents unique
challenges in informing users effectively. In this position paper, we propose a
dynamic workflow for transforming privacy policies into privacy
question-and-answer (Q&A) pairs to make privacy policies easily accessible
through conversational AI. Thereby, we facilitate interdisciplinary
collaboration among legal experts and conversation designers, while also
considering the utilization of large language models' generative capabilities
and addressing associated challenges. Our proposed workflow underscores
continuous improvement and monitoring throughout the construction of privacy
Q&As, advocating for comprehensive review and refinement through an
experts-in-the-loop approach.
Related papers
- Assessing Privacy Policies with AI: Ethical, Legal, and Technical Challenges [6.916147439085307]
Large Language Models (LLMs) can be used to assess privacy policies for users automatically.
We explore the challenges of this approach along three pillars: technical feasibility, ethical implications, and legal compatibility.
Our findings aim to identify potential for future research, and to foster a discussion on the use of LLM technologies.
arXiv Detail & Related papers (2024-10-10T21:36:35Z)
- AI Delegates with a Dual Focus: Ensuring Privacy and Strategic Self-Disclosure [42.96087647326612]
We conduct a pilot study to investigate user preferences for AI delegates across various social relations and task scenarios.
We then propose a novel AI delegate system that enables privacy-conscious self-disclosure.
Our user study demonstrates that the proposed AI delegate strategically protects privacy, pioneering its use in diverse and dynamic social interactions.
arXiv Detail & Related papers (2024-09-26T08:45:15Z)
- Operationalizing Contextual Integrity in Privacy-Conscious Assistants [34.70330533067581]
We propose to operationalize contextual integrity (CI) to steer advanced AI assistants to behave in accordance with privacy expectations.
In particular, we design and evaluate a number of strategies to steer assistants' information-sharing actions to be CI compliant.
Our evaluation is based on a novel form filling benchmark composed of human annotations of common webform applications.
arXiv Detail & Related papers (2024-08-05T10:53:51Z)
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- Centering Policy and Practice: Research Gaps around Usable Differential Privacy [12.340264479496375]
We argue that while differential privacy is a clean formulation in theory, it poses significant challenges in practice.
To bridge the gaps between differential privacy's promises and its real-world usability, researchers and practitioners must work together.
arXiv Detail & Related papers (2024-06-17T21:32:30Z)
- GoldCoin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory [44.297102658873726]
Existing research studies privacy by exploring various privacy attacks, defenses, and evaluations within narrowly predefined patterns.
We introduce a novel framework, GoldCoin, designed to efficiently ground LLMs in privacy laws for the judicial assessment of privacy violations.
Our framework leverages the theory of contextual integrity as a bridge, creating numerous synthetic scenarios grounded in relevant privacy statutes.
arXiv Detail & Related papers (2024-06-17T02:27:32Z)
- Advancing Differential Privacy: Where We Are Now and Future Directions for Real-World Deployment [100.1798289103163]
We present a detailed review of current practices and state-of-the-art methodologies in the field of differential privacy (DP).
Key points and high-level contents of the article originated from the discussions at "Differential Privacy (DP): Challenges Towards the Next Frontier".
This article aims to provide a reference point for the algorithmic and design decisions within the realm of privacy, highlighting important challenges and potential research directions.
arXiv Detail & Related papers (2023-04-14T05:29:18Z)
- PLUE: Language Understanding Evaluation Benchmark for Privacy Policies in English [77.79102359580702]
We introduce the Privacy Policy Language Understanding Evaluation benchmark, a multi-task benchmark for evaluating privacy policy language understanding.
We also collect a large corpus of privacy policies to enable privacy policy domain-specific language model pre-training.
We demonstrate that domain-specific continual pre-training offers performance improvements across all tasks.
arXiv Detail & Related papers (2022-12-20T05:58:32Z)
- Differentially Private Multi-Agent Planning for Logistic-like Problems [70.3758644421664]
This paper proposes a novel strong privacy-preserving planning approach for logistic-like problems.
Two challenges are addressed: 1) simultaneously achieving strong privacy, completeness and efficiency, and 2) addressing communication constraints.
To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning.
arXiv Detail & Related papers (2020-08-16T03:43:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.