Preventing Non-intrusive Load Monitoring Privacy Invasion: A Precise Adversarial Attack Scheme for Networked Smart Meters
- URL: http://arxiv.org/abs/2412.16893v1
- Date: Sun, 22 Dec 2024 07:06:46 GMT
- Title: Preventing Non-intrusive Load Monitoring Privacy Invasion: A Precise Adversarial Attack Scheme for Networked Smart Meters
- Authors: Jialing He, Jiacheng Wang, Ning Wang, Shangwei Guo, Liehuang Zhu, Dusit Niyato, Tao Xiang
- Abstract summary: In this paper, we propose an innovative scheme based on adversarial attacks.
The scheme effectively prevents NILM models from violating appliance-level privacy, while also ensuring accurate billing calculation for users.
Our solutions exhibit transferability: a perturbation signal generated against one target model remains applicable to other, diverse NILM models.
- Score: 99.90150979732641
- License:
- Abstract: Smart grid, through networked smart meters employing the non-intrusive load monitoring (NILM) technique, can discern the usage patterns of residential appliances in considerable detail. However, this technique also incurs privacy leakage. To address this issue, we propose an innovative scheme based on adversarial attack in this paper. The scheme effectively prevents NILM models from violating appliance-level privacy, while also ensuring accurate billing calculation for users. To achieve this objective, we overcome two primary challenges. First, as NILM models fall under the category of time-series regression models, direct application of traditional adversarial attacks designed for classification tasks is not feasible. To tackle this issue, we formulate a novel adversarial attack problem tailored specifically for NILM and provide a theoretical foundation for utilizing the Jacobian of the NILM model to generate imperceptible perturbations. Leveraging the Jacobian, our scheme can produce perturbations that effectively mislead the signal predictions of NILM models to safeguard users' appliance-level privacy. The second challenge pertains to fundamental utility requirements, where existing adversarial attack schemes struggle to achieve accurate billing calculation for users. To handle this problem, we introduce an additional constraint, mandating that the sum of added perturbations within a billing period must be precisely zero. Experimental validation on the real-world power datasets REDD and UK-DALE demonstrates the efficacy of our proposed solutions, which can significantly amplify the discrepancy between the output of the targeted NILM model and the actual power signal of appliances while enabling accurate billing at the same time. Additionally, our solutions exhibit transferability, making the perturbation signal generated against one target model applicable to other, diverse NILM models.
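As a rough illustration of the approach the abstract describes (gradient/Jacobian-guided perturbations on a time-series NILM regression model, constrained to sum to zero over a billing period), the sketch below shows how such a perturbation could be crafted. It is a minimal sketch under assumptions of our own: PyTorch as the framework, a differentiable `nilm_model`, and illustrative `eps`, `steps`, and `lr` values; the projection steps are a simple stand-in for the paper's exact constrained formulation, not the authors' algorithm.

```python
# Hypothetical sketch: iterative gradient-sign attack on a NILM regression model,
# followed by projections that (a) keep the perturbation small and (b) make it
# sum to zero over the billing period so the user's bill is unchanged.
# Assumptions: PyTorch, a differentiable nilm_model(aggregate) -> appliance estimate,
# and illustrative hyperparameters; this is not the paper's exact algorithm.
import torch
import torch.nn.functional as F

def craft_zero_sum_perturbation(nilm_model, aggregate, appliance_power,
                                steps=10, eps=0.05, lr=0.01):
    """aggregate: (1, T) mains readings for one billing period.
    appliance_power: (1, T) true appliance signal the NILM model tries to recover."""
    nilm_model.eval()
    delta = torch.zeros_like(aggregate, requires_grad=True)

    for _ in range(steps):
        pred = nilm_model(aggregate + delta)
        # Enlarge the discrepancy between the model's estimate and the true signal.
        loss = F.mse_loss(pred, appliance_power)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()                # gradient-ascent (sign) step
            delta.clamp_(-eps, eps)                        # keep the perturbation imperceptible
            delta -= delta.mean(dim=-1, keepdim=True)      # zero-sum over the billing period
        delta.grad.zero_()

    return (aggregate + delta).detach()
```

A full implementation would enforce the magnitude and zero-sum constraints jointly (the final projection can nudge values slightly past `eps`); here they are applied one after the other purely to keep the sketch short.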
Related papers
- Information Sifting Funnel: Privacy-preserving Collaborative Inference Against Model Inversion Attacks [9.092229145160763]
The complexity of neural networks and inference tasks poses significant challenges for resource-constrained edge devices.
Collaborative inference mitigates this by assigning shallow feature extraction to edge devices and offloading features to the cloud for further inference.
We propose SiftFunnel, a privacy-preserving framework for collaborative inference.
arXiv Detail & Related papers (2025-01-01T13:00:01Z)
- Benchmarking Active Learning for NILM [2.896640219222859]
Non-intrusive load monitoring (NILM) focuses on disaggregating total household power consumption into appliance-specific usage.
Many advanced NILM methods are based on neural networks that typically require substantial amounts of labeled appliance data.
We propose an active learning approach to selectively install appliance monitors in a limited number of houses.
arXiv Detail & Related papers (2024-11-24T12:22:59Z)
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the Segment Anything Model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Federated Sequence-to-Sequence Learning for Load Disaggregation from Unbalanced Low-Resolution Smart Meter Data [5.460776507522276]
Non-Intrusive Load Monitoring (NILM) can enhance energy awareness and provide valuable insights for energy program design.
Existing NILM methods often rely on specialized devices to retrieve high-sampling-rate, complex signal data.
We propose a new approach using easily accessible weather data to achieve load disaggregation for a total of 12 appliances.
arXiv Detail & Related papers (2024-08-15T13:04:49Z)
- Noisy Neighbors: Efficient membership inference attacks against LLMs [2.666596421430287]
This paper introduces an efficient methodology that generates "noisy neighbors" for a target sample by adding noise in the embedding space.
Our findings demonstrate that this approach closely matches the effectiveness of employing shadow models, showing its usability in practical privacy auditing scenarios.
arXiv Detail & Related papers (2024-06-24T12:02:20Z)
- RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content [62.685566387625975]
Current mitigation strategies, while effective, are not resilient under adversarial attacks.
This paper introduces Resilient Guardrails for Large Language Models (RigorLLM), a novel framework designed to efficiently moderate harmful and unsafe inputs.
arXiv Detail & Related papers (2024-03-19T07:25:02Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks [2.389598109913753]
Adversarial attacks have proven to be a significant threat in domains such as computer vision and speech recognition.
We investigate the Fast Gradient Sign Method (FGSM) to perturb the input sequences fed into two commonly employed CNN-based NILM baselines (a minimal illustrative sketch of such a perturbation appears after this list).
Our findings provide compelling evidence for the vulnerability of these models, particularly the S2P model which exhibits an average decline of 20% in the F1-score.
arXiv Detail & Related papers (2023-07-14T13:10:01Z)
- Reversible Quantization Index Modulation for Static Deep Neural Network Watermarking [57.96787187733302]
Reversible data hiding (RDH) methods offer a potential solution, but existing approaches suffer from weaknesses in terms of usability, capacity, and fidelity.
We propose a novel RDH-based static DNN watermarking scheme using quantization index modulation (QIM).
Our scheme incorporates a novel approach based on a one-dimensional quantizer for watermark embedding.
arXiv Detail & Related papers (2023-05-29T04:39:17Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based countermeasure aims to eliminate adversarial attack examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of our proposed defense GAN, which enhances the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
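To make the FGSM entry above concrete, here is a minimal single-step sketch of how an input window fed to a CNN-based NILM baseline could be perturbed. The function name, the use of an MSE loss, and the `eps` value are assumptions for illustration; the cited paper's exact setup (two specific CNN baselines evaluated via F1-score) is not reproduced here.

```python
# Hypothetical one-step FGSM sketch against a NILM regression model.
# Assumptions: PyTorch, a differentiable nilm_model, and an illustrative eps.
import torch
import torch.nn.functional as F

def fgsm_perturb(nilm_model, window, target, eps=0.01):
    """window: (1, T) mains input sequence; target: appliance reading the model should predict."""
    x_adv = window.clone().detach().requires_grad_(True)
    loss = F.mse_loss(nilm_model(x_adv), target)   # regression loss, since NILM predicts power values
    loss.backward()
    # Shift every input sample by eps in the direction that increases the loss.
    return (x_adv + eps * x_adv.grad.sign()).detach()
```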