Evaluating Privacy-Preserving Machine Learning in Critical
Infrastructures: A Case Study on Time-Series Classification
- URL: http://arxiv.org/abs/2111.14838v1
- Date: Mon, 29 Nov 2021 12:28:22 GMT
- Title: Evaluating Privacy-Preserving Machine Learning in Critical
Infrastructures: A Case Study on Time-Series Classification
- Authors: Dominique Mercier, Adriano Lucieri, Mohsin Munir, Andreas Dengel and
Sheraz Ahmed
- Abstract summary: It is pivotal to ensure that neither the model nor the data can be used to extract sensitive information.
Various safety-critical use cases (mostly relying on time-series data) are currently underrepresented in privacy-related considerations.
By evaluating several privacy-preserving methods regarding their applicability on time-series data, we validated the inefficacy of encryption for deep learning.
- Score: 5.607917328636864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the advent of machine learning in applications of critical
infrastructure such as healthcare and energy, privacy is a growing concern in
the minds of stakeholders. It is pivotal to ensure that neither the model nor
the data can be used to extract sensitive information that attackers could use
against individuals or to harm whole societies through the exploitation of critical
infrastructure. The applicability of machine learning in these domains is
mostly limited due to a lack of trust regarding transparency and privacy
constraints. Various safety-critical use cases (mostly relying on
time-series data) are currently underrepresented in privacy-related
considerations. By evaluating several privacy-preserving methods regarding
their applicability on time-series data, we validated the inefficacy of
encryption for deep learning, the strong dataset dependence of differential
privacy, and the broad applicability of federated methods.
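As a rough illustration of the federated approach that the abstract reports as broadly applicable, the following is a minimal federated-averaging sketch for a small 1D-CNN time-series classifier in PyTorch; the architecture, data shapes and hyperparameters are illustrative assumptions, not the authors' experimental setup.

```python
# A minimal federated-averaging sketch for a small 1D-CNN time-series classifier.
# Architecture, data shapes and hyperparameters are illustrative assumptions,
# not the authors' experimental setup.
import copy
import torch
import torch.nn as nn

class TSClassifier(nn.Module):
    def __init__(self, n_channels=1, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
            nn.Flatten(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):              # x: (batch, channels, time)
        return self.net(x)

def local_update(global_model, data, labels, epochs=1, lr=1e-2):
    """One client's local training round, starting from the global weights."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), labels).backward()
        opt.step()
    return model.state_dict()

def fed_avg(state_dicts):
    """Server-side federated averaging of client weights (equal weighting)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

# One simulated federated round with synthetic time series on three clients.
global_model = TSClassifier()
client_states = []
for _ in range(3):
    x = torch.randn(8, 1, 128)                 # 8 series, 1 channel, 128 time steps
    y = torch.randint(0, 5, (8,))
    client_states.append(local_update(global_model, x, y))
global_model.load_state_dict(fed_avg(client_states))
```

In a real deployment each client would train on its own private time series and only weight updates would leave the device; here the clients are simulated with synthetic data.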
Related papers
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Using Decentralized Aggregation for Federated Learning with Differential Privacy [0.32985979395737774]
Federated Learning (FL) provides some level of privacy by retaining the data at the local node.
This research deploys an experimental environment for FL with Differential Privacy (DP) using benchmark datasets.
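A minimal sketch of how differential privacy is often combined with federated aggregation: each client's update is norm-clipped and Gaussian noise is added before averaging. The clip norm and noise multiplier are illustrative values, not taken from the paper.

```python
# Sketch of differentially private federated aggregation: clip each client's
# update and add Gaussian noise before averaging. Clip norm and noise
# multiplier are illustrative values, not taken from the paper.
import numpy as np

def dp_aggregate(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))   # per-client clipping
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)                   # noisy average

updates = [np.random.default_rng(i).normal(scale=0.1, size=100) for i in range(10)]
averaged_update = dp_aggregate(updates)
```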
arXiv Detail & Related papers (2023-11-27T17:02:56Z)
- Privacy Preserving Large Language Models: ChatGPT Case Study Based Vision and Framework [6.828884629694705]
This article proposes the conceptual model called PrivChatGPT, a privacy-generative model for LLMs.
PrivChatGPT consists of two main components: preserving user privacy during data curation/pre-processing together with the private context, and a private training process for large-scale data.
arXiv Detail & Related papers (2023-10-19T06:55:13Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
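A hypothetical illustration of what paired positive/negative instruction-tuning examples for contextual privacy protection could look like; the field names and texts below are invented for illustration and are not the PrivacyMind data format.

```python
# Hypothetical contrastive instruction-tuning pairs for contextual privacy
# protection; field names and texts are invented for illustration and are not
# the PrivacyMind data format.
privacy_tuning_examples = [
    {   # positive example: the behaviour the model should learn
        "instruction": "What is the patient's home address mentioned in the note?",
        "context": "Clinical note: ... patient lives at 12 Example Road ...",
        "response": "I can't share personal identifiers such as a home address.",
        "label": "positive",
    },
    {   # negative example: the behaviour the model should learn to avoid
        "instruction": "What is the patient's home address mentioned in the note?",
        "context": "Clinical note: ... patient lives at 12 Example Road ...",
        "response": "The patient lives at 12 Example Road.",
        "label": "negative",
    },
]
```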
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
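A minimal sketch of differentially private data publishing with the Laplace mechanism: only a noisy histogram of the sensitive records is released, never the raw values. The epsilon value and the binning are illustrative choices.

```python
# Sketch of DP data publishing with the Laplace mechanism: only a noisy
# histogram of the sensitive records is released, never the raw values.
# Epsilon and the binning are illustrative choices.
import numpy as np

def dp_histogram(values, bins, epsilon=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    counts, edges = np.histogram(values, bins=bins)
    # Each record falls into exactly one bin, so the count sensitivity is 1.
    noisy = counts + rng.laplace(loc=0.0, scale=1.0 / epsilon, size=counts.shape)
    return np.clip(noisy, 0, None), edges      # clamp negatives for readability

sensitive_values = np.random.default_rng(1).normal(size=1000)   # stand-in data
released_counts, bin_edges = dp_histogram(sensitive_values, bins=10)
```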
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Shuffled Differentially Private Federated Learning for Time Series Data Analytics [10.198481976376717]
We develop a privacy-preserving federated learning algorithm for time series data.
Specifically, we employ local differential privacy to extend the privacy protection trust boundary to the clients.
We also incorporate shuffle techniques to achieve a privacy amplification, mitigating the accuracy decline caused by leveraging local differential privacy.
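A minimal sketch of the shuffle model described above: each client perturbs its own report locally, a shuffler permutes the reports so the server cannot link them to clients, and the server aggregates the anonymised, noisy collection. The noise scale and report shape are illustrative assumptions, not the paper's algorithm.

```python
# Sketch of the shuffle model: clients perturb their own reports locally, a
# shuffler permutes them so the server cannot link reports to clients, and the
# server aggregates the anonymised, noisy collection. Noise scale and report
# shape are illustrative assumptions, not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

def local_randomize(report, scale=0.5):
    """Client-side perturbation (local differential privacy)."""
    return report + rng.laplace(0.0, scale, size=report.shape)

def shuffle(reports):
    """Shuffler: a random permutation removes the client ordering."""
    order = rng.permutation(len(reports))
    return [reports[i] for i in order]

client_reports = [rng.normal(size=20) for _ in range(50)]    # e.g. flattened model deltas
noisy_reports = [local_randomize(r) for r in client_reports]
aggregate = np.mean(shuffle(noisy_reports), axis=0)          # server-side estimate
```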
arXiv Detail & Related papers (2023-07-30T10:30:38Z)
- Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows data owners to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- PCAL: A Privacy-preserving Intelligent Credit Risk Modeling Framework Based on Adversarial Learning [111.19576084222345]
This paper proposes a framework of Privacy-preserving Credit risk modeling based on Adversarial Learning (PCAL).
PCAL aims to mask the private information inside the original dataset, while maintaining the important utility information for the target prediction task performance.
Results indicate that PCAL can learn an effective, privacy-free representation from user data, providing a solid foundation towards privacy-preserving machine learning for credit risk analysis.
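A rough sketch of adversarial privacy-preserving representation learning in the spirit described above: an encoder is trained so that a task head stays accurate while an adversary that tries to recover a private attribute is suppressed. The network sizes, loss weighting and alternating schedule are illustrative assumptions, not the PCAL architecture.

```python
# Rough sketch of adversarial privacy-preserving representation learning: an
# encoder is trained so a task head stays accurate while an adversary that
# tries to recover a private attribute is suppressed. Network sizes, the loss
# weight and the alternating schedule are illustrative assumptions.
import torch
import torch.nn as nn

encoder   = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
task_head = nn.Linear(16, 2)        # e.g. the credit-risk label
adversary = nn.Linear(16, 2)        # tries to recover a private attribute

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_adv  = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce, lam = nn.CrossEntropyLoss(), 0.5           # lam: privacy/utility trade-off

x = torch.randn(64, 32)                        # synthetic user features
y_task = torch.randint(0, 2, (64,))            # utility label
y_priv = torch.randint(0, 2, (64,))            # private attribute

for step in range(100):
    # 1) Train the adversary to predict the private attribute from the representation.
    opt_adv.zero_grad()
    ce(adversary(encoder(x).detach()), y_priv).backward()
    opt_adv.step()

    # 2) Train encoder + task head: keep utility while confusing the adversary.
    opt_main.zero_grad()
    z = encoder(x)
    loss = ce(task_head(z), y_task) - lam * ce(adversary(z), y_priv)
    loss.backward()
    opt_main.step()
```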
arXiv Detail & Related papers (2020-10-06T07:04:59Z)
- SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art.
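As one way to picture the homomorphic-encryption ingredient, here is a minimal sketch of additively homomorphic aggregation of client updates, assuming the python-paillier ("phe") package is available; it only illustrates summing encrypted contributions and is not the SPEED protocol itself.

```python
# Sketch of additively homomorphic aggregation of client updates, assuming the
# python-paillier ("phe") package; this only illustrates summing encrypted
# contributions and is not the SPEED protocol itself.
import numpy as np
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Each client encrypts its (flattened) update before sending it to the server.
client_updates = [np.random.default_rng(i).normal(size=5) for i in range(3)]
encrypted = [[public_key.encrypt(float(v)) for v in update] for update in client_updates]

# The server adds ciphertexts coordinate-wise without ever seeing a plaintext.
encrypted_sum = encrypted[0]
for enc_update in encrypted[1:]:
    encrypted_sum = [a + b for a, b in zip(encrypted_sum, enc_update)]

# Only the key holder can decrypt the aggregate (e.g. to average it).
aggregate = np.array([private_key.decrypt(c) for c in encrypted_sum]) / len(client_updates)
```

Paillier is used here only because its additive property is enough to show encrypted aggregation; schemes that support inference or training on encrypted data are considerably heavier.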
arXiv Detail & Related papers (2020-06-16T19:31:52Z)