A Blackbox Model Is All You Need to Breach Privacy: Smart Grid Forecasting Models as a Use Case
- URL: http://arxiv.org/abs/2309.01523v1
- Date: Mon, 4 Sep 2023 11:07:37 GMT
- Title: A Blackbox Model Is All You Need to Breach Privacy: Smart Grid Forecasting Models as a Use Case
- Authors: Hussein Aly, Abdulaziz Al-Ali, Abdullah Al-Ali, Qutaibah Malluhi
- Abstract summary: We show that black-box access to an LSTM model can reveal a significant amount of information, equivalent to having access to the data itself.
This highlights the importance of protecting forecasting models at the same level as the data.
- Score: 0.7714988183435832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the potential privacy risks associated with
forecasting models, with specific emphasis on their application in the context
of smart grids. While machine learning and deep learning algorithms offer
valuable utility, concerns arise regarding their exposure of sensitive
information. Previous studies have focused on classification models,
overlooking risks associated with forecasting models. Deep-learning-based
forecasting models, such as Long Short-Term Memory (LSTM), play a crucial role
in several applications, including optimizing smart grid systems, but also
introduce privacy risks. Our study analyzes the ability of forecasting models
to leak global dataset properties and the resulting privacy threats in smart
grid systems. We demonstrate that black-box access to an LSTM model can reveal
a significant amount of information, equivalent to having access to the data
itself (with the difference being as low as 1% in Area Under the ROC Curve).
This highlights the importance of protecting forecasting models at the same
level as the data.
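The abstract does not spell out the attack pipeline, but a common way to turn black-box query access into this kind of global-property inference is a shadow-model attack: train many forecasters on shadow datasets with and without the target property, query each on a fixed probe set, and learn a meta-classifier over the resulting prediction vectors. The sketch below illustrates that idea only; the synthetic load curves, the "evening peak" property, and the MLP used as a lightweight stand-in for the LSTM forecaster are all assumptions, not the paper's setup.

```python
# Minimal sketch of a shadow-model property-inference attack on a black-box
# forecaster.  Everything here (synthetic load curves, the "evening peak"
# property, the MLP stand-in for an LSTM) is illustrative, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPRegressor      # stand-in for an LSTM forecaster
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
WINDOW, N_SHADOW = 24, 40

def make_series(has_property, n=400):
    """Synthetic hourly load; the property is an extra evening peak."""
    t = np.arange(n)
    load = 1.0 + 0.3 * np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(n)
    if has_property:
        load += 0.5 * ((t % 24) >= 18)            # evening consumption spike
    return load

def windows(series):
    X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
    return X, series[WINDOW:]

probe = windows(make_series(False))[0][:100]      # fixed probe queries

# 1. Train shadow forecasters on data with and without the property,
#    and record only their black-box outputs on the probe set.
feats, labels = [], []
for _ in range(N_SHADOW):
    has_prop = bool(rng.integers(0, 2))
    X, y = windows(make_series(has_prop))
    shadow = MLPRegressor(hidden_layer_sizes=(32,), max_iter=400).fit(X, y)
    feats.append(shadow.predict(probe))
    labels.append(int(has_prop))

# 2. Meta-classifier: prediction vector -> does the training data have the property?
meta = LogisticRegression(max_iter=1000).fit(np.stack(feats), labels)

# 3. Attack the "victim" forecaster using nothing but its query responses.
X_v, y_v = windows(make_series(True))
victim = MLPRegressor(hidden_layer_sizes=(32,), max_iter=400).fit(X_v, y_v)
p = meta.predict_proba(victim.predict(probe).reshape(1, -1))[0, 1]
print(f"inferred probability that the victim's data has the property: {p:.2f}")
```

In the paper's setting, the quality of such a meta-classifier is what gets compared (via AUC) against an attacker with direct access to the data; that evaluation is not reproduced here.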
Related papers
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Whispers in the Machine: Confidentiality in LLM-integrated Systems [7.893457690926516]
Large Language Models (LLMs) are increasingly augmented with external tools and commercial services into LLM-integrated systems.
Manipulated integrations can exploit the model and compromise sensitive data accessed through other interfaces.
We introduce a systematic approach to evaluate confidentiality risks in LLM-integrated systems.
arXiv Detail & Related papers (2024-02-10T11:07:24Z)
- Privacy and Security Implications of Cloud-Based AI Services: A Survey [4.1964397179107085]
This paper details the privacy and security landscape in today's cloud ecosystem.
It identifies that there is a gap in addressing the risks introduced by machine learning models.
arXiv Detail & Related papers (2024-01-31T13:30:20Z)
- Privacy-Preserving Load Forecasting via Personalized Model Obfuscation [4.420464017266168]
This paper addresses the performance challenges of short-term load forecasting models trained with federated learning on heterogeneous data.
Our proposed algorithm, Privacy Preserving Federated Learning (PPFL), incorporates personalization layers for localized training at each smart meter.
arXiv Detail & Related papers (2023-11-21T03:03:10Z)
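The PPFL entry above attributes its design to personalization layers that are trained locally at each smart meter. As a rough illustration of that general idea (not the PPFL algorithm itself), the sketch below federates a tiny two-layer forecaster: only the shared base layer is averaged across clients, while each client keeps its own output head. The toy load data, model sizes, and learning rate are all assumptions.

```python
# Rough sketch of federated averaging with personalization layers:
# the base (shared) weights are averaged across clients, the head stays local.
# This illustrates the general idea only, not the PPFL algorithm.
import numpy as np

rng = np.random.default_rng(1)
N_CLIENTS, WINDOW, HIDDEN, ROUNDS, LR = 5, 24, 8, 100, 0.1

def client_data(seed, n=200):
    """Toy per-meter load series turned into (window -> next value) pairs."""
    r = np.random.default_rng(seed)
    t = np.arange(n + WINDOW)
    load = (1 + 0.3 * np.sin(2 * np.pi * t / 24 + r.uniform(0, 2 * np.pi))
            + 0.05 * r.standard_normal(n + WINDOW))
    X = np.stack([load[i:i + WINDOW] for i in range(n)])
    return X, load[WINDOW:WINDOW + n]

clients = [client_data(s) for s in range(N_CLIENTS)]
base = rng.standard_normal((WINDOW, HIDDEN)) * 0.1            # shared layer
heads = [rng.standard_normal(HIDDEN) * 0.1 for _ in clients]  # one local head each

for _ in range(ROUNDS):
    new_bases = []
    for k, (X, y) in enumerate(clients):
        h = np.tanh(X @ base)                  # local forward pass
        err = h @ heads[k] - y
        grad_head = h.T @ err / len(y)
        grad_base = X.T @ ((err[:, None] * heads[k]) * (1 - h ** 2)) / len(y)
        heads[k] -= LR * grad_head             # personalization layer: stays local
        new_bases.append(base - LR * grad_base)
    base = np.mean(new_bases, axis=0)          # only the base layer is averaged

X0, y0 = clients[0]
rmse = np.sqrt(np.mean((np.tanh(X0 @ base) @ heads[0] - y0) ** 2))
print(f"client 0 training RMSE: {rmse:.3f}")
```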
- An Interpretable Systematic Review of Machine Learning Models for Predictive Maintenance of Aircraft Engine [0.12289361708127873]
This paper presents an interpretable review of various machine learning and deep learning models for predictive maintenance of aircraft engines.
In this study, sensor data is utilized to predict aircraft engine failure within a predetermined number of cycles using LSTM, Bi-LSTM, RNN, Bi-RNN, GRU, Random Forest, KNN, Naive Bayes, and Gradient Boosting.
Accuracies of 97.8%, 97.14%, and 96.42% are achieved by GRU, Bi-LSTM, and LSTM, respectively.
arXiv Detail & Related papers (2023-09-23T08:54:10Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
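As a deliberately simple, concrete instance of the system-identification setting summarized above, the snippet below fits a linear ARX-style predictor to input-output data simulated from an assumed first-order system and predicts new outputs from previous observations; the feedforward, convolutional, or recurrent networks discussed in the survey would take the place of the linear regressor.

```python
# Toy system identification: learn to predict y[t] from past inputs/outputs.
# A linear ARX model stands in for the networks discussed in the survey;
# the simulated first-order system is an assumption made for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
T, LAGS = 1000, 3

# Simulate an unknown dynamic system driven by a random input signal.
u = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.02 * rng.standard_normal()

# Build regressors from past outputs and inputs (ARX structure).
rows = [np.concatenate([y[t - LAGS:t], u[t - LAGS:t]]) for t in range(LAGS, T)]
X, target = np.stack(rows), y[LAGS:]

model = LinearRegression().fit(X[:800], target[:800])
pred = model.predict(X[800:])
print("one-step-ahead RMSE:", np.sqrt(np.mean((pred - target[800:]) ** 2)))
```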
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
Instead, you are given access to a set of expert models and their predictions, alongside some limited information about the dataset used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
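The Synthetic Model Combination entry above describes weighting pre-trained experts per test instance using only limited information about their training data. The sketch below is a much-simplified stand-in for that idea rather than the paper's method: each expert is weighted by how plausible the test point is under a Gaussian summary of a small sample of that expert's training inputs. The datasets, regions, and Gaussian scoring are all assumptions.

```python
# Simplified instance-wise ensembling of pre-trained experts (not the paper's
# Synthetic Model Combination method): weight each expert by how well the test
# point matches a Gaussian summary of a sample of its training inputs.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def make_region(center, n=200):
    """Each expert was trained on data from a different input region."""
    X = center + rng.standard_normal((n, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
    return X, y

regions = [np.array([-4.0, 0.0]), np.array([0.0, 0.0]), np.array([4.0, 0.0])]
experts, summaries = [], []
for c in regions:
    X, y = make_region(c)
    experts.append(LinearRegression().fit(X, y))          # pre-trained expert
    summaries.append((X[:50].mean(0), X[:50].std(0)))     # limited training info

def combine(x):
    """Instance-wise weights from Gaussian log-likelihood under each summary."""
    logliks = np.array([
        -0.5 * np.sum(((x - mu) / sd) ** 2 + 2 * np.log(sd)) for mu, sd in summaries
    ])
    w = np.exp(logliks - logliks.max())
    w /= w.sum()
    preds = np.array([e.predict(x[None, :])[0] for e in experts])
    return float(w @ preds)

x_test = np.array([3.5, 0.2])    # falls in the third expert's region
print("combined prediction:", combine(x_test))
```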
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Survey: Leakage and Privacy at Inference Time [59.957056214792665]
Leakage of data from publicly available Machine Learning (ML) models is an area of growing significance.
We focus on inference-time leakage, as the most likely scenario for publicly available models.
We propose a taxonomy across involuntary and malevolent leakage and the available defences, followed by the currently available assessment metrics and applications.
arXiv Detail & Related papers (2021-07-04T12:59:16Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.