Privacy Engineering in Smart Home (SH) Systems: A Comprehensive Privacy Threat Analysis and Risk Management Approach
- URL: http://arxiv.org/abs/2401.09519v1
- Date: Wed, 17 Jan 2024 17:34:52 GMT
- Title: Privacy Engineering in Smart Home (SH) Systems: A Comprehensive Privacy Threat Analysis and Risk Management Approach
- Authors: Emmanuel Dare Alalade, Mohammed Mahyoub, Ashraf Matrawy
- Abstract summary: This study aims to elucidate the main threats to privacy, associated risks, and effective prioritization of privacy control in SH systems.
The outcomes of this study are expected to benefit SH stakeholders, including vendors, cloud providers, users, researchers, and regulatory bodies in the SH systems domain.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Addressing trust concerns in Smart Home (SH) systems is imperative, given the limited research on privacy-preservation approaches that analyze and evaluate privacy threats for effective risk management. While most research focuses primarily on user privacy, device data privacy, especially identity privacy, is largely neglected, even though it can significantly affect overall user privacy within the SH system. To this end, our study applies privacy engineering (PE) principles to the SH system, considering both user and device data privacy. We start with a comprehensive reference model for a typical SH system. Following the initial stage of the LINDDUN PRO PE framework, we present a data flow diagram (DFD) based on this reference model to better understand SH system operations. We then employ the LINDDUN PRO threat model to identify potential areas of privacy threat and perform a privacy threat analysis (PTA). Next, a privacy impact assessment (PIA) implements privacy risk management by prioritizing privacy threats according to their likelihood of occurrence and potential consequences. Finally, we suggest privacy enhancement techniques (PETs) that can mitigate some of these threats. The study aims to elucidate the main privacy threats, their associated risks, and an effective prioritization of privacy controls in SH systems. The outcomes are expected to benefit SH stakeholders, including vendors, cloud providers, users, researchers, and regulatory bodies in the SH systems domain.
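As a rough illustration of the PIA prioritization step described in the abstract, the sketch below ranks privacy threat instances by a simple risk = likelihood x impact product. The threat names follow the LINDDUN categories, but the data-flow locations and all scores are hypothetical placeholders, not values from the paper:

```python
# Hypothetical sketch of the PIA prioritization step: rank privacy threats
# by risk = likelihood x impact. Categories follow LINDDUN; the locations
# and scores here are illustrative placeholders, not the paper's results.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
IMPACT = {"low": 1, "moderate": 2, "severe": 3}

threats = [
    # (LINDDUN category, where it applies in the SH data flow, likelihood, impact)
    ("Linking",         "hub <-> cloud traffic",          "likely",   "moderate"),
    ("Identifying",     "device identity metadata",       "possible", "severe"),
    ("Non-repudiation", "cloud-side activity logs",       "rare",     "moderate"),
    ("Detecting",       "presence inference from flows",  "likely",   "severe"),
    ("Data disclosure", "sensor data at rest",            "possible", "severe"),
    ("Unawareness",     "user consent interface",         "possible", "moderate"),
    ("Non-compliance",  "third-party data sharing",       "rare",     "severe"),
]

def risk(likelihood: str, impact: str) -> int:
    """Simple ordinal risk score; higher means mitigate first."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

ranked = sorted(threats, key=lambda t: risk(t[2], t[3]), reverse=True)
for category, location, lik, imp in ranked:
    print(f"{risk(lik, imp):>2}  {category:<16} {location}")
```

A real PIA would replace these ordinal scores with the likelihood and consequence scales the assessors adopt; the ranking logic stays the same.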
Related papers
- Differentially Private Integrated Decision Gradients (IDG-DP) for Radar-based Human Activity Recognition
Recent research has shown high accuracy in recognizing subjects or gender from radar gait patterns, raising privacy concerns.
This study addresses these issues by investigating privacy vulnerabilities in radar-based Human Activity Recognition (HAR) systems.
We propose a novel method for privacy preservation using Differential Privacy (DP) driven by attributions derived with the Integrated Decision Gradient (IDG) algorithm (see the sketch after this entry).
arXiv Detail & Related papers (2024-11-04T14:08:26Z)
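To make the attribution-driven DP idea above concrete, here is a minimal NumPy sketch that splits a privacy budget across features in proportion to their attribution scores and adds Laplace noise accordingly. The allocation rule is one plausible reading of "DP driven by attributions", not the paper's actual IDG-DP mechanism, and the attribution values are toy numbers:

```python
import numpy as np

def laplace_noise_by_attribution(x, attributions, epsilon, sensitivity=1.0):
    """Add Laplace noise to features, splitting the privacy budget epsilon
    across features by attribution weight (higher attribution -> larger
    budget share -> less noise). Illustrative only: the real IDG-DP method
    derives attributions with Integrated Decision Gradients."""
    rng = np.random.default_rng(0)
    # Normalize attributions into per-feature budget shares (sums to epsilon).
    weights = np.abs(attributions) / np.abs(attributions).sum()
    eps_per_feature = np.maximum(weights * epsilon, 1e-6)
    scale = sensitivity / eps_per_feature  # Laplace scale b = sensitivity / eps_i
    return x + rng.laplace(loc=0.0, scale=scale, size=x.shape)

# Example: a toy radar feature vector and its (hypothetical) attributions.
x = np.array([0.8, 0.1, 0.5, 0.9])
attr = np.array([0.7, 0.05, 0.15, 0.1])
print(laplace_noise_by_attribution(x, attr, epsilon=1.0))
```

By basic composition, the per-feature budgets sum to the overall epsilon, so the whole release stays within the stated budget under the bounded-sensitivity assumption.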
- A Deep Dive into Fairness, Bias, Threats, and Privacy in Recommender Systems: Insights and Future Research
This study explores fairness, bias, threats, and privacy in recommender systems.
It examines how algorithmic decisions can unintentionally reinforce biases or marginalize specific user and item groups.
The study suggests future research directions to improve recommender systems' robustness, fairness, and privacy.
arXiv Detail & Related papers (2024-09-19T11:00:35Z)
- Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions
Large Language Models (LLMs) represent a significant advancement in artificial intelligence, finding applications across various domains.
Their reliance on massive internet-sourced datasets for training brings notable privacy issues.
Certain application-specific scenarios may require fine-tuning these models on private data.
arXiv Detail & Related papers (2024-08-10T05:41:19Z) - Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z) - A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Privacy-sensitive data is subject to stringent regulations that frequently prohibit data access and sharing.
Overcoming these obstacles is key to technological progress in many real-world application scenarios involving privacy-sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released (see the sketch after this entry).
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - Security and Privacy on Generative Data in AIGC: A Survey [17.456578314457612]
- Security and Privacy on Generative Data in AIGC: A Survey
We review the security and privacy of generative data in AIGC.
We survey state-of-the-art countermeasures in terms of the foundational properties of privacy, controllability, authenticity, and compliance.
arXiv Detail & Related papers (2023-09-18T02:35:24Z)
- Advancing Differential Privacy: Where We Are Now and Future Directions for Real-World Deployment
We present a detailed review of current practices and state-of-the-art methodologies in the field of differential privacy (DP).
The article's key points and high-level content originated from discussions at "Differential Privacy (DP): Challenges Towards the Next Frontier".
This article aims to provide a reference point for the algorithmic and design decisions within the realm of privacy, highlighting important challenges and potential research directions.
arXiv Detail & Related papers (2023-04-14T05:29:18Z)
- PCAL: A Privacy-preserving Intelligent Credit Risk Modeling Framework Based on Adversarial Learning
This paper proposes a Privacy-preserving Credit risk modeling framework based on Adversarial Learning (PCAL).
PCAL aims to mask the private information in the original dataset while preserving the utility information important for the target prediction task.
Results indicate that PCAL can learn an effective representation, free of private information, from user data, providing a solid foundation for privacy-preserving machine learning in credit risk analysis (a skeletal sketch of the adversarial idea follows this entry).
arXiv Detail & Related papers (2020-10-06T07:04:59Z)
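As referenced in the PCAL summary above, here is a skeletal PyTorch sketch of the adversarial privacy-utility idea: an encoder is trained so a utility head succeeds at the task while a privacy adversary fails to recover a private attribute. All dimensions, labels, and the 0.5 loss weighting are illustrative placeholders, not PCAL's actual architecture or objective:

```python
import torch
import torch.nn as nn

# Skeletal sketch of adversarial privacy-utility training in the spirit of
# PCAL. Shapes, data, and the loss weighting are placeholders.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
utility_head = nn.Linear(16, 2)  # e.g., default vs. no-default
adversary = nn.Linear(16, 2)     # e.g., a binary private attribute

opt_main = torch.optim.Adam(
    [*encoder.parameters(), *utility_head.parameters()], lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(128, 32)              # toy feature batch
y_task = torch.randint(0, 2, (128,))  # utility labels
y_priv = torch.randint(0, 2, (128,))  # private-attribute labels

for step in range(100):
    # 1) Train the adversary to predict the private attribute from the
    #    (frozen) representation z.
    z = encoder(x).detach()
    opt_adv.zero_grad()
    adv_loss = ce(adversary(z), y_priv)
    adv_loss.backward()
    opt_adv.step()

    # 2) Train encoder + utility head: succeed at the task while making
    #    the adversary's job harder (minimax via a negated adversary loss).
    opt_main.zero_grad()
    z = encoder(x)
    loss = ce(utility_head(z), y_task) - 0.5 * ce(adversary(z), y_priv)
    loss.backward()
    opt_main.step()
```

In practice the adversary is usually stronger than a single linear layer and the two phases alternate on real minibatches; the minimax structure is the transferable part.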
- Target Privacy Threat Modeling for COVID-19 Exposure Notification Systems
Digital contact tracing (DCT) technology has helped to slow the spread of infectious disease.
To support both ethical technology deployment and user adoption, privacy must be at the forefront.
Because the loss of privacy is a critical threat, thorough threat modeling will help us strategize to protect privacy as DCT technologies advance.
arXiv Detail & Related papers (2020-09-25T02:09:51Z)
- Private Reinforcement Learning with PAC and Regret Guarantees
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP); a standard formulation is sketched after this entry.
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
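For reference, joint differential privacy can be stated roughly as follows. This is the standard formulation used in the private-RL literature; the notation here is ours, as a sketch rather than the paper's exact definition:

```latex
% Joint differential privacy (JDP), informally: changing one user's data
% should barely change the outputs shown to all *other* users.
% A mechanism M is \varepsilon-JDP if, for every user t, every pair of
% input sequences U, U' differing only in user t's data, and every event
% S over the joint outputs sent to users other than t:
\[
  \Pr\bigl[\, M_{-t}(U) \in S \,\bigr]
  \;\le\;
  e^{\varepsilon} \, \Pr\bigl[\, M_{-t}(U') \in S \,\bigr].
\]
```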