PTMF: A Privacy Threat Modeling Framework for IoT with Expert-Driven Threat Propagation Analysis
- URL: http://arxiv.org/abs/2510.21601v1
- Date: Fri, 24 Oct 2025 16:06:04 GMT
- Title: PTMF: A Privacy Threat Modeling Framework for IoT with Expert-Driven Threat Propagation Analysis
- Authors: Emmanuel Dare Alalade, Ashraf Matrawy
- Abstract summary: We present a novel Privacy Threat Model Framework (PTMF) that analyzes privacy threats through different phases. The proposed PTMF can be employed in various ways, including analyzing the activities of threat actors during privacy threats and assessing privacy risks in IoT systems.
- Score: 0.5156484100374058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Previous studies on privacy threat analysis (PTA) have focused on analyzing privacy threats based on their potential areas of occurrence and their likelihood of occurrence. However, an in-depth understanding of the threat actors involved, their actions, and the intentions that result in privacy threats is essential. In this paper, we present a novel Privacy Threat Model Framework (PTMF) that analyzes privacy threats through different phases. The development of PTMF is motivated by selected tactics from the MITRE ATT&CK framework and techniques from the LINDDUN privacy threat model, making PTMF a privacy-centered framework. The proposed PTMF can be employed in various ways, including analyzing the activities of threat actors during privacy threats and assessing privacy risks in IoT systems. In this paper, we conducted a user study on 12 privacy threats associated with IoT by developing a questionnaire based on PTMF and recruiting experts from both industry and academia in the fields of security and privacy to gather their opinions. The collected data were analyzed and mapped to identify the threat actors involved in the identification of IoT users (IU) and in the remaining 11 privacy threats. Our observations revealed the top three threat actors and the critical paths they used during the IU privacy threat as well as during the remaining 11 privacy threats. This study could provide a solid foundation for understanding how and where privacy measures can be proactively and effectively deployed in IoT systems to mitigate privacy threats based on the activities and intentions of threat actors within these systems.
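The abstract describes mapping expert opinions to identify threat actors and the critical paths they take during a privacy threat. One common way to operationalize this kind of expert-driven propagation analysis is to model tactics and techniques as a weighted directed graph, with expert-elicited likelihoods on the edges, and compute the most likely attack path. The sketch below is a minimal illustration of that idea, not the paper's actual method; all phase names and likelihood scores are hypothetical.

```python
import math
import heapq

# Hypothetical edge list: (source phase, next phase, expert-assigned likelihood).
# These names and scores are illustrative; the paper's PTMF phases differ.
edges = [
    ("Reconnaissance", "Device Fingerprinting", 0.8),
    ("Reconnaissance", "Traffic Sniffing", 0.6),
    ("Device Fingerprinting", "User Identification", 0.7),
    ("Traffic Sniffing", "User Identification", 0.9),
    ("User Identification", "Profile Linkage", 0.5),
]

def most_likely_path(edges, start, goal):
    """Return the path maximizing the product of edge likelihoods.

    Maximizing a product of probabilities equals minimizing the sum of
    -log(p), so Dijkstra's shortest-path algorithm applies directly.
    """
    graph = {}
    for src, dst, p in edges:
        graph.setdefault(src, []).append((dst, -math.log(p)))
    frontier = [(0.0, start, [start])]  # (cost so far, node, path)
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, math.exp(-cost)
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return None, 0.0

path, likelihood = most_likely_path(edges, "Reconnaissance", "User Identification")
```

With these made-up scores, the critical path to user identification runs through device fingerprinting (0.8 × 0.7 = 0.56), slightly more likely than the traffic-sniffing route (0.6 × 0.9 = 0.54); a defender would prioritize controls on that edge first.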
Related papers
- Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy [50.66105844449181]
Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms. We propose an extended iDP privacy contract that uses divergence measures to provide users with a hard upper bound on their excess vulnerability.
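The collusion vulnerability this summary alludes to can be illustrated with the standard privacy-amplification-by-subsampling bound: an eps-DP mechanism applied to a q-fraction subsample satisfies ln(1 + q(e^eps − 1))-DP. The sketch below shows that bound and, as an illustration of the blurb's claim (not the paper's actual analysis), notes that if colluders reveal a target's record was in fact sampled, the amplification no longer shields that user.

```python
import math

def amplified_epsilon(eps, q):
    """Standard amplification-by-subsampling bound: an eps-DP mechanism
    run on a q-subsample satisfies ln(1 + q*(e^eps - 1))-DP."""
    return math.log(1.0 + q * (math.exp(eps) - 1.0))

eps, q = 1.0, 0.1
eps_sampled = amplified_epsilon(eps, q)  # noticeably smaller than eps

# Illustrative collusion scenario: if other participants reveal that the
# target's record was actually included in the sample, the sampling
# randomness is no longer secret for that user, and their effective
# guarantee reverts to the base eps of the underlying mechanism.
eps_colluded = eps
```

For eps = 1.0 and q = 0.1 the sampled guarantee drops to roughly 0.16, so the gap between `eps_sampled` and `eps_colluded` is exactly the privacy a user stands to lose when others' disclosures undo the sampling secrecy.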
arXiv Detail & Related papers (2026-01-19T10:26:12Z) - Responsible Diffusion: A Comprehensive Survey on Safety, Ethics, and Trust in Diffusion Models [69.22690439422531]
Diffusion models (DMs) have been investigated in various domains due to their ability to generate high-quality data. As with traditional deep learning systems, DMs also face potential threats. This survey comprehensively elucidates the framework, threats, and countermeasures.
arXiv Detail & Related papers (2025-09-25T02:51:43Z) - Zero-Trust Foundation Models: A New Paradigm for Secure and Collaborative Artificial Intelligence for Internet of Things [61.43014629640404]
Zero-Trust Foundation Models (ZTFMs) embed zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems. ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments.
arXiv Detail & Related papers (2025-05-26T06:44:31Z) - Modeling interdependent privacy threats [0.30693357740321775]
We argue that existing threat modeling approaches are limited in exposing interdependent privacy (IDP) threats. Our contributions are threefold: (i) we identify IDP-specific challenges and limitations in current threat modeling frameworks, (ii) we create IDPA, a threat modeling approach tailored to IDP threats, and (iii) we validate our approach through a case study on WeChat.
arXiv Detail & Related papers (2025-05-23T21:22:49Z) - A Survey on Privacy Risks and Protection in Large Language Models [13.602836059584682]
Large Language Models (LLMs) have become increasingly integral to diverse applications, raising privacy concerns. This survey offers a comprehensive overview of privacy risks associated with LLMs and examines current solutions to mitigate these challenges.
arXiv Detail & Related papers (2025-05-04T03:04:07Z) - Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
Their risks of causing physical threats and harm in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
arXiv Detail & Related papers (2024-11-04T17:41:25Z) - Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in general-purpose AI systems (GPAIS).
arXiv Detail & Related papers (2024-07-02T07:49:48Z) - PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [70.84902425123406]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z) - Privacy Engineering in Smart Home (SH) Systems: A Comprehensive Privacy Threat Analysis and Risk Management Approach [0.45880283710344066]
This study aims to elucidate the main threats to privacy, associated risks, and effective prioritization of privacy control in SH systems. The outcomes of this study are expected to benefit SH stakeholders, including vendors, cloud providers, users, researchers, and regulatory bodies in the SH systems domain.
arXiv Detail & Related papers (2024-01-17T17:34:52Z) - Privacy in Large Language Models: Attacks, Defenses and Future Directions [84.73301039987128]
We analyze the current privacy attacks targeting large language models (LLMs) and categorize them according to the adversary's assumed capabilities.
We present a detailed overview of prominent defense strategies that have been developed to counter these privacy attacks.
arXiv Detail & Related papers (2023-10-16T13:23:54Z) - Privacy Threats on the Internet of Medical Things [0.0]
The Internet of Medical Things (IoMT) is a frequent target of attacks.
We briefly discuss specific privacy threats and threat actors in IoMT.
We argue that the privacy policy gap needs to be identified for the IoMT threat landscape.
arXiv Detail & Related papers (2022-07-19T23:45:16Z) - Target Privacy Threat Modeling for COVID-19 Exposure Notification Systems [8.080564346335542]
Digital contact tracing (DCT) technology has helped to slow the spread of infectious disease.
To support both ethical technology deployment and user adoption, privacy must be at the forefront.
With the loss of privacy being a critical threat, thorough threat modeling will help us to strategize and protect privacy as DCT technologies advance.
arXiv Detail & Related papers (2020-09-25T02:09:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.