Benchmarking Active Learning for NILM
- URL: http://arxiv.org/abs/2411.15805v1
- Date: Sun, 24 Nov 2024 12:22:59 GMT
- Title: Benchmarking Active Learning for NILM
- Authors: Dhruv Patel, Ankita Kumari Jain, Haikoo Khandor, Xhitij Choudhary, Nipun Batra
- Abstract summary: Non-intrusive load monitoring (NILM) focuses on disaggregating total household power consumption into appliance-specific usage.
Many advanced NILM methods are based on neural networks that typically require substantial amounts of labeled appliance data.
We propose an active learning approach to selectively install appliance monitors in a limited number of houses.
- Abstract: Non-intrusive load monitoring (NILM) focuses on disaggregating total household power consumption into appliance-specific usage. Many advanced NILM methods are based on neural networks that typically require substantial amounts of labeled appliance data, which can be challenging and costly to collect in real-world settings. We hypothesize that appliance data from all households does not uniformly contribute to NILM model improvements. Thus, we propose an active learning approach to selectively install appliance monitors in a limited number of houses. This work is the first to benchmark the use of active learning for strategically selecting appliance-level data to optimize NILM performance. We first develop uncertainty-aware neural networks for NILM and then install sensors in homes where disaggregation uncertainty is highest. Benchmarking our method on the publicly available Pecan Street Dataport dataset, we demonstrate that our approach significantly outperforms a standard random baseline and achieves performance comparable to models trained on the entire dataset. Using this approach, we achieve comparable NILM accuracy with approximately 30% of the data, and for a fixed number of sensors, we observe up to a 2x reduction in disaggregation errors compared to random sampling.
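The selection loop described in the abstract could look like the following minimal PyTorch sketch, assuming Monte Carlo dropout as the uncertainty estimator; the `Seq2PointNILM` architecture, window length, and toy house data are illustrative stand-ins, not the paper's exact setup.
```python
import torch
import torch.nn as nn

class Seq2PointNILM(nn.Module):
    """Toy seq2point disaggregator; dropout is left active at inference
    so we can draw Monte Carlo samples of the prediction."""
    def __init__(self, window=99, hidden=128, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),  # appliance power at window midpoint
        )
    def forward(self, x):
        return self.net(x)

def mc_dropout_uncertainty(model, x, n_samples=30):
    """Std. dev. of predictions under dropout as an uncertainty proxy."""
    model.train()  # keep dropout on
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.std(dim=0).mean().item()

def select_houses(model, house_windows, budget):
    """Rank candidate houses by mean disaggregation uncertainty and
    pick the `budget` most uncertain ones for sensor installation."""
    scores = {h: mc_dropout_uncertainty(model, x) for h, x in house_windows.items()}
    return sorted(scores, key=scores.get, reverse=True)[:budget]

# Example: 5 candidate houses, each with 200 mains windows of length 99.
model = Seq2PointNILM()
houses = {f"house_{i}": torch.randn(200, 99) for i in range(5)}
print(select_houses(model, houses, budget=2))
```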
Related papers
- Reward-Augmented Data Enhances Direct Preference Alignment of LLMs [56.24431208419858]
We introduce reward-conditioned Large Language Models (LLMs) that learn from the entire spectrum of response quality within the dataset.
We propose an effective yet simple data relabeling method that conditions the preference pairs on quality scores to construct a reward-augmented dataset.
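A minimal sketch of such score-conditioned relabeling, assuming each preference pair carries scalar quality scores in [0, 1); the `<reward .../...>` tag format and the `reward_augment` helper are hypothetical, not the paper's exact scheme.
```python
def reward_augment(pairs, n_bins=5):
    """Turn (prompt, chosen, rejected, score_c, score_r) tuples into
    single SFT examples conditioned on a discretized quality score."""
    examples = []
    for prompt, chosen, rejected, s_c, s_r in pairs:
        for response, score in ((chosen, s_c), (rejected, s_r)):
            bin_id = min(int(score * n_bins), n_bins - 1)  # score in [0, 1)
            examples.append({
                "prompt": f"<reward {bin_id}/{n_bins - 1}> {prompt}",
                "response": response,
            })
    return examples

pairs = [("Explain NILM.", "Detailed answer...", "Terse answer...", 0.9, 0.3)]
for ex in reward_augment(pairs):
    print(ex["prompt"][:30], "->", ex["response"])
```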
arXiv Detail & Related papers (2024-10-10T16:01:51Z)
- Federated Sequence-to-Sequence Learning for Load Disaggregation from Unbalanced Low-Resolution Smart Meter Data [5.460776507522276]
Non-Intrusive Load Monitoring (NILM) can enhance energy awareness and provide valuable insights for energy program design.
Existing NILM methods often rely on specialized devices to retrieve high-sampling-rate complex signal data.
We propose a new approach using easily accessible weather data to achieve load disaggregation for a total of 12 appliances.
arXiv Detail & Related papers (2024-08-15T13:04:49Z)
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value of training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
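A minimal sketch of per-sample adaptive label smoothing, assuming an uncertainty score in [0, 1] is already available for each sample (e.g., from an ensemble or MC dropout); the `max_eps` cap and linear scaling rule are illustrative choices, not the paper's exact formulation.
```python
import torch
import torch.nn.functional as F

def uncertainty_smoothed_loss(logits, targets, uncertainty, max_eps=0.2):
    """Cross-entropy with per-sample label smoothing that grows with
    the sample's estimated uncertainty (scaled into [0, max_eps])."""
    n_classes = logits.size(-1)
    eps = (uncertainty.clamp(0, 1) * max_eps).unsqueeze(-1)   # (B, 1)
    one_hot = F.one_hot(targets, n_classes).float()
    soft = one_hot * (1 - eps) + eps / n_classes              # smoothed targets
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft * log_probs).sum(-1).mean()

logits = torch.randn(4, 10)
targets = torch.randint(0, 10, (4,))
uncertainty = torch.rand(4)  # per-sample uncertainty estimates
print(uncertainty_smoothed_loss(logits, targets, uncertainty))
```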
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
- Your Vision-Language Model Itself Is a Strong Filter: Towards High-Quality Instruction Tuning with Data Selection [59.11430077029321]
We introduce a novel dataset selection method, Self-Filter, for vision-language models (VLMs).
In the first stage, we devise a scoring network to evaluate the difficulty of training instructions, which is co-trained with the VLM.
In the second stage, we use the trained score net to measure the difficulty of each instruction, select the most challenging samples, and penalize similar samples to encourage diversity.
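A minimal sketch of the second stage, assuming difficulty scores and instruction embeddings are already computed; the greedy cosine-similarity penalty below is one plausible reading of "penalize similar samples", not the paper's exact rule.
```python
import numpy as np

def select_with_diversity(scores, embeddings, k, penalty=0.5):
    """Greedily pick the k highest-difficulty samples, down-weighting
    candidates similar (by cosine) to already-selected ones."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    remaining = list(range(len(scores)))
    adjusted = np.asarray(scores, dtype=float).copy()
    selected = []
    while remaining and len(selected) < k:
        i = max(remaining, key=lambda j: adjusted[j])
        selected.append(i)
        remaining.remove(i)
        sims = emb[remaining] @ emb[i]                 # cosine similarities
        adjusted[remaining] -= penalty * np.maximum(sims, 0)
    return selected

rng = np.random.default_rng(0)
scores = rng.random(100)                 # difficulty from the score net
embeddings = rng.normal(size=(100, 32))  # instruction embeddings
print(select_with_diversity(scores, embeddings, k=10))
```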
arXiv Detail & Related papers (2024-02-19T20:08:48Z)
- How to Train Data-Efficient LLMs [56.41105687693619]
We study data-efficient approaches for pre-training large language models (LLMs).
In our comparison of 19 samplers, involving hundreds of evaluation tasks and pre-training runs, we find that Ask-LLM and Density sampling are the best methods in their respective categories.
arXiv Detail & Related papers (2024-02-15T02:27:57Z)
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data, and then a minimal number of available labeled data points are assigned to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
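A minimal sketch of the two stages using the `minisom` package (`pip install minisom`); the nearest-labeled-BMU prediction rule is a simplification of the paper's topological-projection scheme, and the toy data and labels are purely illustrative.
```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))              # unlabeled data
Xl = X[:10]; yl = rng.integers(0, 3, 10)   # tiny labeled subset (toy labels)

som = MiniSom(10, 10, 8, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(X, 2000)                  # stage 1: unsupervised SOM fit

# Stage 2: attach the few available labels to their BMUs.
bmu_labels = {som.winner(x): y for x, y in zip(Xl, yl)}

def predict(sample):
    """Label of the grid-nearest labeled BMU."""
    w = som.winner(sample)
    return min(bmu_labels.items(),
               key=lambda kv: np.hypot(kv[0][0] - w[0], kv[0][1] - w[1]))[1]

print([predict(x) for x in X[:5]])
```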
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
- Non-Intrusive Load Monitoring (NILM) using Deep Neural Networks: A Review [0.0]
Non-intrusive load monitoring (NILM) is a method for decomposing the total energy consumption profile into individual appliance load profiles.
Various methods, including machine learning and deep learning, have been used to implement and improve NILM algorithms.
This paper reviews some recent NILM methods based on deep learning and introduces the most accurate methods for residential loads.
arXiv Detail & Related papers (2023-06-08T08:11:21Z)
- Learning Task-Aware Energy Disaggregation: a Federated Approach [1.52292571922932]
Non-intrusive load monitoring (NILM) aims to find individual devices' power consumption profiles based on aggregated meter measurements.
Yet collecting such residential load datasets requires both substantial effort and customers' approval to share metering data.
We propose a decentralized and task-adaptive learning scheme for NILM tasks, where nested meta learning and federated learning steps are designed for learning task-specific models collectively.
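One way to combine the two ingredients is a Reptile-style inner adaptation step on each client followed by a FedAvg server step, as in the minimal sketch below; this is an assumed simplification, not the paper's exact nested meta-learning procedure.
```python
import copy
import torch
import torch.nn as nn

def local_meta_update(model, batches, inner_lr=1e-2, meta_lr=0.1):
    """Reptile-style inner loop: adapt a copy on the client's NILM task,
    then move the client model toward the adapted weights."""
    fast = copy.deepcopy(model)
    opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
    loss_fn = nn.MSELoss()
    for x, y in batches:
        opt.zero_grad()
        loss_fn(fast(x), y).backward()
        opt.step()
    with torch.no_grad():
        for p, q in zip(model.parameters(), fast.parameters()):
            p += meta_lr * (q - p)
    return model

def fed_avg(models):
    """Server step: average client weights into a global model."""
    global_model = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, p in global_model.named_parameters():
            p.copy_(torch.stack([dict(m.named_parameters())[name]
                                 for m in models]).mean(0))
    return global_model

# Two communication rounds over three toy clients.
global_model = nn.Linear(99, 1)
batches = [[(torch.randn(8, 99), torch.randn(8, 1))] for _ in range(3)]
for _ in range(2):
    clients = [local_meta_update(copy.deepcopy(global_model), b) for b in batches]
    global_model = fed_avg(clients)
```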
arXiv Detail & Related papers (2022-04-14T05:53:41Z)
- Fed-NILM: A Federated Learning-based Non-Intrusive Load Monitoring Method for Privacy-Protection [0.1657441317977376]
Non-intrusive load monitoring (NILM) decomposes the total load reading into appliance-level load signals.
Deep learning-based methods have been developed to accomplish NILM, and the training of deep neural networks (DNNs) requires massive load data containing different types of appliances.
For local data owners who have inadequate load data but expect promising model performance, effective NILM co-modelling is increasingly significant.
To eliminate the potential privacy risks of sharing load data, a novel NILM method named Fed-NILM, applying Federated Learning (FL), is proposed in this paper.
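The core privacy mechanism in such a setup is that clients exchange only model parameters, never raw load measurements. Below is a minimal sketch of the server-side aggregation, assuming a sample-count-weighted FedAvg step; `weighted_average` and the client sample counts are illustrative.
```python
import copy
import torch
import torch.nn as nn

def weighted_average(state_dicts, n_samples):
    """Sample-count-weighted average of client model state dicts;
    raw load measurements never leave the clients."""
    total = float(sum(n_samples))
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = sum(sd[key] * (n / total)
                       for sd, n in zip(state_dicts, n_samples))
    return avg

# Three clients with different amounts of local load data.
clients = [nn.Linear(60, 1) for _ in range(3)]
global_state = weighted_average([m.state_dict() for m in clients],
                                n_samples=[120, 300, 80])
server_model = nn.Linear(60, 1)
server_model.load_state_dict(global_state)
```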
arXiv Detail & Related papers (2021-05-24T04:12:10Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model under test using a Bayesian neural network (BNN).
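A minimal sketch under the assumption that the BNN is approximated with MC dropout: the BNN's predictive distribution stands in for the unknown test labels when estimating the metric, and its predictive entropy drives which points to label next. Function names are illustrative, not the paper's API.
```python
import torch
import torch.nn as nn

def mc_predict(bnn, x, n=50):
    """Predictive class probabilities averaged over MC-dropout samples."""
    bnn.train()  # keep dropout active
    with torch.no_grad():
        return torch.stack([bnn(x).softmax(-1) for _ in range(n)]).mean(0)

def estimate_accuracy(model_under_test, bnn, x_pool):
    """Expected accuracy of the model under test, taking the BNN's
    predictive distribution as a proxy for the unknown labels."""
    model_under_test.eval()
    with torch.no_grad():
        preds = model_under_test(x_pool).argmax(-1)
    probs = mc_predict(bnn, x_pool)
    return probs[torch.arange(len(preds)), preds].mean().item()

def next_to_label(bnn, x_pool, k=10):
    """Active testing: query labels where the BNN is most uncertain
    (highest predictive entropy)."""
    probs = mc_predict(bnn, x_pool)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    return entropy.topk(k).indices
```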
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- A Federated Learning Framework for Non-Intrusive Load Monitoring [0.1657441317977376]
Non-intrusive load monitoring (NILM) aims at decomposing the total reading of the household power consumption into appliance-wise ones.
Data cooperation among utilities and distribution network operators (DNOs), who own the NILM data, has been increasingly significant.
A framework is set up to improve the performance of NILM with federated learning (FL).
arXiv Detail & Related papers (2021-04-04T14:24:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.