Clustering Enabled Few-Shot Load Forecasting
- URL: http://arxiv.org/abs/2202.07939v1
- Date: Wed, 16 Feb 2022 09:09:09 GMT
- Title: Clustering Enabled Few-Shot Load Forecasting
- Authors: Qiyuan Wang, Zhihui Chen, Chenye Wu
- Abstract summary: We consider the load forecasting for a new user by observing only a few shots (data points) of its energy consumption.
This task is challenging since the limited samples are insufficient to exploit the temporal characteristics.
We propose to utilize the historical load profile data from existing users to conduct effective clustering.
- Score: 2.0810096547938164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While advanced machine learning algorithms are effective in load
forecasting, they often suffer from low data utilization, so their superior
performance relies on massive datasets. This motivates us to design machine
learning algorithms with improved data utilization. Specifically, we consider
load forecasting for a new user in the system by observing only a few shots
(data points) of its energy consumption. This task is challenging because the
limited samples are insufficient to exploit the temporal characteristics
essential for load forecasting. Nonetheless, we observe that residential loads
exhibit only a limited number of temporal patterns, owing to the limited
variety of human lifestyles. Hence, we propose to utilize the historical load
profile data from existing users to conduct effective clustering, which
mitigates the challenges posed by the limited samples. Specifically, we first
design a feature-extraction clustering method for categorizing historical
data. Then, inheriting the prior knowledge from the clustering results, we
propose a two-phase Long Short-Term Memory (LSTM) model to conduct load
forecasting for new users. The proposed method outperforms the traditional
LSTM model, especially when the training sample size fails to cover a whole
period (i.e., 24 hours in our task). Extensive case studies on two real-world
datasets and one synthetic dataset verify the effectiveness and efficiency of
our method.
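The two-phase LSTM itself requires a deep-learning framework, but the clustering prior at the heart of the method can be illustrated in a small self-contained sketch. The code below is an assumption-laden toy, not the paper's implementation: it clusters hypothetical 24-point daily load profiles with a naive k-means, then forecasts a new user's remaining hours by matching their few observed points to the closest cluster prototype (the function names `kmeans` and `few_shot_forecast` are invented for illustration).

```python
import random

def kmeans(profiles, k, iters=50, seed=0):
    """Naive k-means over fixed-length daily load profiles (lists of floats)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(profiles, k)]
    assign = [0] * len(profiles)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, p in enumerate(profiles):
            assign[i] = min(range(k),
                            key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centroids[c])))
        # Update step: each centroid becomes the mean of its members.
        for c in range(k):
            members = [profiles[i] for i in range(len(profiles)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, assign

def few_shot_forecast(shots, centroids):
    """Match the few observed points against each cluster prototype and
    return the remainder of the best-matching prototype as the forecast."""
    n = len(shots)
    best = min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(shots, c[:n])))
    return best[n:]
```

In the paper, this cluster-level prior seeds the first phase of the LSTM rather than serving directly as the forecast; the sketch only shows why a handful of samples can suffice once lifestyles are grouped into a few prototypes.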
Related papers
- A CLIP-Powered Framework for Robust and Generalizable Data Selection [51.46695086779598]
Real-world datasets often contain redundant and noisy data, which negatively impacts training efficiency and model performance.
Data selection has shown promise in identifying the most representative samples from the entire dataset.
We propose a novel CLIP-powered data selection framework that leverages multimodal information for more robust and generalizable sample selection.
arXiv Detail & Related papers (2024-10-15T03:00:58Z)
- Few-Shot Load Forecasting Under Data Scarcity in Smart Grids: A Meta-Learning Approach [0.18641315013048293]
This paper proposes adapting an established model-agnostic meta-learning algorithm for short-term load forecasting.
The proposed method can rapidly adapt and generalize within any unknown load time series of arbitrary length.
The proposed model is evaluated using a dataset of historical load consumption data from real-world consumers.
arXiv Detail & Related papers (2024-06-09T18:59:08Z)
- Applying Fine-Tuned LLMs for Reducing Data Needs in Load Profile Analysis [9.679453060210978]
This paper presents a novel method for utilizing fine-tuned Large Language Models (LLMs) to minimize data requirements in load profile analysis.
A two-stage fine-tuning strategy is proposed to adapt a pre-trained LLM for missing data restoration tasks.
We demonstrate the effectiveness of the fine-tuned model in accurately restoring missing data, achieving comparable performance to state-of-the-art models such as BERT-PIN.
arXiv Detail & Related papers (2024-06-02T23:18:11Z)
- Computationally and Memory-Efficient Robust Predictive Analytics Using Big Data [0.0]
This study navigates through the challenges of data uncertainties, storage limitations, and predictive data-driven modeling using big data.
We utilize Robust Principal Component Analysis (RPCA) for effective noise reduction and outlier elimination, and Optimal Sensor Placement (OSP) for efficient data compression and storage.
arXiv Detail & Related papers (2024-03-27T22:39:08Z)
- Pushing the Limits of Pre-training for Time Series Forecasting in the CloudOps Domain [54.67888148566323]
We introduce three large-scale time series forecasting datasets from the cloud operations domain.
We show it is a strong zero-shot baseline and benefits from further scaling, both in model and dataset size.
Accompanying these datasets and results is a suite of comprehensive benchmark results comparing classical and deep learning baselines to our pre-trained method.
arXiv Detail & Related papers (2023-10-08T08:09:51Z)
- Continual Learning in Predictive Autoscaling [17.438074717702726]
Predictive Autoscaling is used to forecast the workloads of servers and prepare resources in advance to ensure service level objectives (SLOs) in dynamic cloud environments.
We propose a replay-based continual learning method, i.e., Density-based Memory Selection and Hint-based Network Learning Model (DMSHM)
Our proposed method outperforms state-of-the-art continual learning methods in terms of memory capacity and prediction accuracy.
arXiv Detail & Related papers (2023-07-29T09:29:09Z)
- To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis [50.31589712761807]
Large language models (LLMs) are notoriously token-hungry during pre-training, and high-quality text data on the web is approaching its scaling limit for LLMs.
We investigate the consequences of repeating pre-training data, revealing that the model is susceptible to overfitting.
We further examine the key factors contributing to multi-epoch degradation, finding that significant factors include dataset size, model parameters, and training objectives.
arXiv Detail & Related papers (2023-05-22T17:02:15Z)
- Re-thinking Data Availability Attacks Against Deep Neural Networks [53.64624167867274]
In this paper, we re-examine the concept of unlearnable examples and discern that the existing robust error-minimizing noise presents an inaccurate optimization objective.
We introduce a novel optimization paradigm that yields improved protection results with reduced computational time requirements.
arXiv Detail & Related papers (2023-05-18T04:03:51Z)
- Prediction-Oriented Bayesian Active Learning [51.426960808684655]
Expected predictive information gain (EPIG) is an acquisition function that measures information gain in the space of predictions rather than parameters.
EPIG leads to stronger predictive performance compared with BALD across a range of datasets and models.
arXiv Detail & Related papers (2023-04-17T10:59:57Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To keep training on the enlarged dataset tractable, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
- Short-Term Load Forecasting Using AMI Data [0.19573380763700707]
This paper proposes a method called Forecasting using Matrix Factorization (FMF) for short-term load forecasting (STLF).
FMF only utilizes historical data from consumers' smart meters to forecast future loads.
We empirically evaluate FMF on three benchmark datasets and demonstrate that it significantly outperforms the state-of-the-art methods in terms of load forecasting.
arXiv Detail & Related papers (2019-12-28T16:07:52Z)
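The FMF paper's exact formulation is not reproduced in this listing; as a hedged illustration of the general idea, the sketch below factorizes a toy consumers-by-hours load matrix by stochastic gradient descent and forecasts a held-out "future" reading from the learned factors. All names (`factorize`, `predict`) and hyperparameters here are assumptions for the sketch, not the paper's method.

```python
import random

def factorize(M, rank=1, steps=20000, lr=0.05, reg=0.001, seed=0):
    """Factor a consumers-by-hours load matrix M into U @ V by SGD over the
    observed entries; None marks a missing/future reading to be forecast."""
    rng = random.Random(seed)
    n, m = len(M), len(M[0])
    U = [[rng.uniform(0.1, 1.0) for _ in range(rank)] for _ in range(n)]
    V = [[rng.uniform(0.1, 1.0) for _ in range(m)] for _ in range(rank)]
    observed = [(i, j) for i in range(n) for j in range(m) if M[i][j] is not None]
    for _ in range(steps):
        i, j = observed[rng.randrange(len(observed))]
        err = M[i][j] - sum(U[i][r] * V[r][j] for r in range(rank))
        for r in range(rank):
            u, v = U[i][r], V[r][j]
            U[i][r] += lr * (err * v - reg * u)  # gradient step on the user factor
            V[r][j] += lr * (err * u - reg * v)  # gradient step on the time factor
    return U, V

def predict(U, V, i, j):
    """Forecast entry (i, j) from the learned factors."""
    return sum(U[i][r] * V[r][j] for r in range(len(V)))
```

The design choice mirrors the listing's claim that only historical smart-meter data is needed: shared latent time factors learned from other consumers let the model fill in a reading that a single consumer's history alone could not determine.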
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.