Unraveling Privacy Threat Modeling Complexity: Conceptual Privacy Analysis Layers
- URL: http://arxiv.org/abs/2408.03578v1
- Date: Wed, 7 Aug 2024 06:30:20 GMT
- Title: Unraveling Privacy Threat Modeling Complexity: Conceptual Privacy Analysis Layers
- Authors: Kim Wuyts, Avi Douglen
- Abstract summary: Analyzing privacy threats in software products is an essential part of software development to ensure systems are privacy-respecting.
We propose to use four conceptual layers (feature, ecosystem, business context, and environment) to capture this privacy complexity.
These layers can be used as a frame to structure and specify the privacy analysis support in a more tangible and actionable way.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Analyzing privacy threats in software products is an essential part of software development to ensure systems are privacy-respecting; yet it is still a far from trivial activity. While there have been many advancements in the past decade, they tend to focus on describing 'what' the threats are. What isn't entirely clear yet is 'how' to actually find these threats. Privacy is a complex domain. We propose to use four conceptual layers (feature, ecosystem, business context, and environment) to capture this privacy complexity. These layers can be used as a frame to structure and specify the privacy analysis support in a more tangible and actionable way, thereby improving applicability of the analysis process.
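For illustration only, the four conceptual layers named in the abstract (feature, ecosystem, business context, and environment) could serve to structure threat notes in an analysis workspace. The class and field names below are a hypothetical sketch, not an artifact from the paper:

```python
from dataclasses import dataclass, field

# Hypothetical data model for the paper's four conceptual privacy
# analysis layers; the structure is illustrative, not from the paper.
@dataclass
class PrivacyAnalysis:
    feature: list[str] = field(default_factory=list)           # threat notes on the feature under analysis
    ecosystem: list[str] = field(default_factory=list)         # notes on surrounding systems and integrations
    business_context: list[str] = field(default_factory=list)  # purposes, processing roles, obligations
    environment: list[str] = field(default_factory=list)       # societal, regulatory, and user expectations

    def threats_by_layer(self) -> dict[str, int]:
        """Count recorded threat notes per conceptual layer."""
        return {
            "feature": len(self.feature),
            "ecosystem": len(self.ecosystem),
            "business_context": len(self.business_context),
            "environment": len(self.environment),
        }

analysis = PrivacyAnalysis(
    feature=["location data collected continuously"],
    environment=["users expect location to stay on-device"],
)
print(analysis.threats_by_layer())
# {'feature': 1, 'ecosystem': 0, 'business_context': 0, 'environment': 1}
```

A per-layer breakdown like this would make it visible when an analysis has only covered the feature layer and left the broader layers unexamined.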
Related papers
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Privacy Protectability: An Information-theoretical Approach [4.14084373472438]
We propose a new metric, *privacy protectability*, to characterize to what degree a video stream can be protected.
Our definition of privacy protectability is rooted in information theory and we develop efficient algorithms to estimate the metric.
arXiv Detail & Related papers (2023-05-25T04:06:55Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We looked into how explainability might help to tackle this problem.
We created privacy explanations that aim to clarify to end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- The Evolving Path of "the Right to Be Left Alone" - When Privacy Meets Technology [0.0]
This paper proposes a novel vision of the privacy ecosystem, introducing privacy dimensions, the related users' expectations, the privacy violations, and the changing factors.
We believe that promising approaches to tackle the privacy challenges move in two directions: (i) identification of effective privacy metrics; and (ii) adoption of formal tools to design privacy-compliant applications.
arXiv Detail & Related papers (2021-11-24T11:27:55Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL and a unique taxonomy covering 1) threat models, 2) poisoning attacks and defenses targeting robustness, and 3) inference attacks and defenses targeting privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Target Privacy Threat Modeling for COVID-19 Exposure Notification Systems [8.080564346335542]
Digital contact tracing (DCT) technology has helped to slow the spread of infectious disease.
To support both ethical technology deployment and user adoption, privacy must be at the forefront.
With the loss of privacy being a critical threat, thorough threat modeling will help us to strategize and protect privacy as DCT technologies advance.
arXiv Detail & Related papers (2020-09-25T02:09:51Z)
- More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
- Privacy threats in intimate relationships [0.720851507101878]
This article provides an overview of intimate threats: a class of privacy threats that can arise within our families, romantic partnerships, close friendships, and caregiving relationships.
We survey a range of intimate relationships and describe their common features.
Based on these features, we explore implications for both technical privacy design and policy, and offer design recommendations for ameliorating intimate privacy risks.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-04-23T13:21:00Z)
- Privacy at Scale: Introducing the PrivaSeer Corpus of Web Privacy Policies [13.09699710197036]
We create PrivaSeer, a corpus of over one million English language website privacy policies.
We present results from readability tests, document similarity, and keyphrase extraction, and explore the corpus through topic modeling.
arXiv Detail & Related papers (2020-04-23T13:21:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.