Privacy threats in intimate relationships
- URL: http://arxiv.org/abs/2006.03907v1
- Date: Sat, 6 Jun 2020 16:21:14 GMT
- Title: Privacy threats in intimate relationships
- Authors: Karen Levy, Bruce Schneier
- Abstract summary: This article provides an overview of intimate threats: a class of privacy threats that can arise within our families, romantic partnerships, close friendships, and caregiving relationships.
We survey a range of intimate relationships and describe their common features.
Based on these features, we explore implications for both technical privacy design and policy, and offer design recommendations for ameliorating intimate privacy risks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article provides an overview of intimate threats: a class of privacy
threats that can arise within our families, romantic partnerships, close
friendships, and caregiving relationships. Many common assumptions about
privacy are upended in the context of these relationships, and many otherwise
effective protective measures fail when applied to intimate threats. Those
closest to us know the answers to our secret questions, have access to our
devices, and can exercise coercive power over us. We survey a range of intimate
relationships and describe their common features. Based on these features, we
explore implications for both technical privacy design and policy, and offer
design recommendations for ameliorating intimate privacy risks.
Related papers
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data [17.11821761700748]
This study advances the understanding and protection against privacy risks emanating from network structure.
We develop a novel graph private attribute inference attack, which acts as a pivotal tool for evaluating the potential for privacy leakage through network structures.
Our attack model poses a significant threat to user privacy, and our graph data publishing method successfully achieves the optimal privacy-utility trade-off.
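As a hedged illustration of the general idea (a minimal homophily-based sketch, not the paper's actual attack model), the following shows how graph structure alone can leak a private attribute: predict a user's hidden label by majority vote over the known labels of their neighbors.

```python
from collections import Counter

def infer_attribute(graph, labels, target):
    """Guess a hidden attribute of `target` by majority vote over the
    known labels of its neighbors (a simple homophily assumption)."""
    neighbor_labels = [labels[n] for n in graph[target] if n in labels]
    if not neighbor_labels:
        return None  # no labeled neighbors: this naive attack learns nothing
    return Counter(neighbor_labels).most_common(1)[0][0]

# Toy social graph: edges are friendships; alice's group is private.
graph = {"alice": ["bob", "carol", "dave"], "bob": ["alice"],
         "carol": ["alice"], "dave": ["alice"]}
labels = {"bob": "group_A", "carol": "group_A", "dave": "group_B"}
print(infer_attribute(graph, labels, "alice"))  # → group_A
```

Even this naive baseline recovers the hidden label whenever neighbors are mostly alike, which is why publishing an edge structure can be a privacy risk on its own.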
arXiv Detail & Related papers (2024-07-26T07:40:54Z)
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models, such as GPT-4 and ChatGPT, reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z)
- Privacy in Speech Technology [8.99795279111323]
This paper is a tutorial on privacy issues related to speech technology.
It covers threat models, approaches for protecting users' privacy, and ways of measuring the performance of privacy-protecting methods.
It also outlines directions for further development where improvements are most urgently needed.
arXiv Detail & Related papers (2023-05-09T07:41:36Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle this problem.
We created privacy explanations that aim to help to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- PrivHAR: Recognizing Human Actions From Privacy-preserving Lens [58.23806385216332]
We propose an optimization framework to provide robust visual privacy protection along the human action recognition pipeline.
Our framework parameterizes the camera lens to successfully degrade the quality of the videos to inhibit privacy attributes and protect against adversarial attacks.
arXiv Detail & Related papers (2022-06-08T13:43:29Z)
- "Am I Private and If So, how Many?" -- Using Risk Communication Formats for Making Differential Privacy Understandable [0.0]
We adapt risk communication formats in conjunction with a model for the privacy risks of Differential Privacy.
We evaluate these novel privacy communication formats in a crowdsourced study.
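As a generic illustration of one risk statement such formats could convey (this is a standard consequence of ε-differential privacy, not necessarily the study's specific communication format): under ε-DP, an adversary's prior odds about any one person can grow by at most a factor of e^ε, which translates directly into frequency-style phrasings.

```python
import math

def dp_posterior_bound(epsilon, prior):
    """Worst-case posterior probability an adversary can reach about one
    person under epsilon-DP, given prior belief `prior`: the prior odds
    can grow by at most a factor of exp(epsilon)."""
    prior_odds = prior / (1.0 - prior)
    worst_odds = prior_odds * math.exp(epsilon)
    return worst_odds / (1.0 + worst_odds)

# Frequency-style phrasing of the same guarantee:
for eps in (0.1, 1.0, 2.0):
    post = dp_posterior_bound(eps, 0.01)
    print(f"eps={eps}: a 1-in-100 prior can rise to at most {post:.1%}")
```

Expressing ε as "a 1-in-100 belief can become at most a 1-in-37 belief" is the kind of translation risk communication formats aim to standardize.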
arXiv Detail & Related papers (2022-04-08T13:30:07Z)
- The Evolving Path of "the Right to Be Left Alone" - When Privacy Meets Technology [0.0]
This paper proposes a novel vision of the privacy ecosystem, introducing privacy dimensions, the related users' expectations, the privacy violations, and the changing factors.
We believe that promising approaches to tackle the privacy challenges move in two directions: (i) identification of effective privacy metrics; and (ii) adoption of formal tools to design privacy-compliant applications.
arXiv Detail & Related papers (2021-11-24T11:27:55Z)
- On Privacy and Confidentiality of Communications in Organizational Graphs [3.5270468102327004]
This work shows how confidentiality is distinct from privacy in an enterprise context.
It aims to formulate an approach to preserving confidentiality while leveraging principles from differential privacy.
arXiv Detail & Related papers (2021-05-27T19:45:56Z)
- Differentially Private Multi-Agent Planning for Logistic-like Problems [70.3758644421664]
This paper proposes a novel strong privacy-preserving planning approach for logistic-like problems.
Two challenges are addressed: 1) simultaneously achieving strong privacy, completeness and efficiency, and 2) addressing communication constraints.
To the best of our knowledge, this paper is the first to apply differential privacy to the field of multi-agent planning.
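The underlying privacy primitive (independent of the planning setting) can be sketched minimally; the Laplace mechanism below is the textbook ε-differential-privacy construction for numeric queries, not necessarily the paper's specific approach.

```python
import random

def laplace_noise(scale):
    """Sample Laplace(0, scale) as the difference of two independent
    exponential draws with mean `scale`."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_release(true_count, sensitivity=1.0, epsilon=0.5):
    """Release a numeric query answer with epsilon-DP: add Laplace noise
    with scale = sensitivity / epsilon."""
    return true_count + laplace_noise(sensitivity / epsilon)

random.seed(0)
# Three independent releases of the same private count of 100; each
# individual release satisfies 0.5-differential privacy.
print([round(dp_release(100), 1) for _ in range(3)])
```

Smaller ε means larger noise and stronger privacy; the planning setting adds the harder question of keeping such guarantees while agents must still coordinate.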
arXiv Detail & Related papers (2020-08-16T03:43:09Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.