Target Privacy Threat Modeling for COVID-19 Exposure Notification
Systems
- URL: http://arxiv.org/abs/2009.13300v1
- Date: Fri, 25 Sep 2020 02:09:51 GMT
- Title: Target Privacy Threat Modeling for COVID-19 Exposure Notification
Systems
- Authors: Ananya Gangavarapu, Ellie Daw, Abhishek Singh, Rohan Iyer, Gabriel
Harp, Sam Zimmerman, and Ramesh Raskar
- Abstract summary: Digital contact tracing (DCT) technology has helped to slow the spread of infectious disease.
To support both ethical technology deployment and user adoption, privacy must be at the forefront.
With the loss of privacy being a critical threat, thorough threat modeling will help us to strategize and protect privacy as DCT technologies advance.
- Score: 8.080564346335542
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The adoption of digital contact tracing (DCT) technology during the
COVID-19 pandemic has shown multiple benefits, including helping to slow the
spread of infectious disease and to improve the dissemination of accurate
information. However, to support both ethical technology deployment and user
adoption, privacy must be at the forefront. With the loss of privacy being a
critical threat, thorough threat modeling will help us to strategize and
protect privacy as digital contact tracing technologies advance. Various threat
modeling frameworks exist today, such as LINDDUN, STRIDE, PASTA, and NIST,
which focus on software system privacy, system security, application security,
and data-centric risk, respectively. When applied to the exposure notification
system (ENS) context, these models provide a thorough view of the software side
but fall short in addressing the integrated nature of hardware, humans,
regulations, and software involved in such systems. Our approach treats ENSs
as a whole and provides a model that addresses the privacy complexities of
a multi-faceted solution. We define privacy principles, privacy threats,
attacker capabilities, and a comprehensive threat model. Finally, we outline
threat mitigation strategies that address the various threats defined in our
model.
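The abstract names four components of the threat model: privacy principles, privacy threats, attacker capabilities, and mitigations spanning hardware, humans, regulations, and software. A minimal sketch of how one entry of such a model might be represented is shown below; the field names and example values are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatEntry:
    """One illustrative entry of an ENS privacy threat model.

    Field names are hypothetical, chosen to mirror the components the
    abstract lists (principles, threats, attacker capabilities, mitigations).
    """
    privacy_principle: str            # e.g. "data minimization"
    threat: str                       # e.g. "linkage of broadcast tokens to identity"
    attacker_capability: str          # e.g. "passive Bluetooth sniffing"
    affected_components: list = field(default_factory=list)  # hardware/human/regulatory/software
    mitigations: list = field(default_factory=list)

# Example entry, with assumed values for illustration only.
entry = ThreatEntry(
    privacy_principle="data minimization",
    threat="re-identification of users via broadcast identifiers",
    attacker_capability="passive Bluetooth sniffing across multiple locations",
    affected_components=["hardware", "software"],
    mitigations=["rotate ephemeral identifiers", "limit data retention period"],
)
```

Structuring entries this way makes it straightforward to filter the model by attacker capability or affected component when prioritizing mitigations.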
Related papers
- PILLAR: an AI-Powered Privacy Threat Modeling Tool [2.2366638308792735]
PILLAR is a new tool that integrates Large Language Models with the LINDDUN framework to streamline and enhance privacy threat modeling.
PILLAR automates key parts of the LINDDUN process, such as generating DFDs, classifying threats, and prioritizing risks.
arXiv Detail & Related papers (2024-10-11T12:13:03Z)
- Unraveling Privacy Threat Modeling Complexity: Conceptual Privacy Analysis Layers [0.7918886297003017]
Analyzing privacy threats in software products is an essential part of software development to ensure systems are privacy-respecting.
We propose to use four conceptual layers (feature, ecosystem, business context, and environment) to capture this privacy complexity.
These layers can be used as a frame to structure and specify the privacy analysis support in a more tangible and actionable way.
arXiv Detail & Related papers (2024-08-07T06:30:20Z)
- SeCTIS: A Framework to Secure CTI Sharing [13.251593345960265]
The rise of IT-dependent operations in modern organizations has heightened their vulnerability to cyberattacks.
Current information-sharing methods lack privacy safeguards, leaving organizations vulnerable to leaks of both proprietary and confidential data.
We design a novel framework called SeCTIS (Secure Cyber Threat Intelligence Sharing) to enable businesses to collaborate, preserving the privacy of their CTI data.
arXiv Detail & Related papers (2024-06-20T08:34:50Z)
- The MESA Security Model 2.0: A Dynamic Framework for Mitigating Stealth Data Exfiltration [0.0]
Stealth Data Exfiltration is a significant cyber threat characterized by covert infiltration, extended undetectability, and unauthorized dissemination of confidential data.
Our findings reveal that conventional defense-in-depth strategies often fall short in combating these sophisticated threats.
As we navigate this complex landscape, it is crucial to anticipate potential threats and continually update our defenses.
arXiv Detail & Related papers (2024-05-17T16:14:45Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Privacy Engineering in Smart Home (SH) Systems: A Comprehensive Privacy Threat Analysis and Risk Management Approach [1.0650780147044159]
This study aims to elucidate the main threats to privacy, associated risks, and effective prioritization of privacy control in SH systems.
The outcomes of this study are expected to benefit SH stakeholders, including vendors, cloud providers, users, researchers, and regulatory bodies in the SH systems domain.
arXiv Detail & Related papers (2024-01-17T17:34:52Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- A System for Automated Open-Source Threat Intelligence Gathering and Management [53.65687495231605]
SecurityKG is a system for automated OSCTI gathering and management.
It uses a combination of AI and NLP techniques to extract high-fidelity knowledge about threat behaviors.
arXiv Detail & Related papers (2021-01-19T18:31:35Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- COVI White Paper [67.04578448931741]
Contact tracing is an essential tool to change the course of the COVID-19 pandemic.
We present an overview of the rationale, design, ethical considerations, and privacy strategy of 'COVI', a COVID-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
arXiv Detail & Related papers (2020-05-18T07:40:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.