Navigating Out-of-Distribution Electricity Load Forecasting during
COVID-19: Benchmarking energy load forecasting models without and with
continual learning
- URL: http://arxiv.org/abs/2309.04296v3
- Date: Wed, 4 Oct 2023 00:37:16 GMT
- Title: Navigating Out-of-Distribution Electricity Load Forecasting during
COVID-19: Benchmarking energy load forecasting models without and with
continual learning
- Authors: Arian Prabowo, Kaixuan Chen, Hao Xue, Subbu Sethuvenkatraman, Flora D.
Salim
- Abstract summary: This paper employs a two-fold strategy: utilizing continual learning techniques to update models with new data and harnessing human mobility data collected from privacy-preserving pedestrian counters located outside buildings.
Results underscore the crucial role of continual learning in accurate energy forecasting, particularly during Out-of-Distribution periods.
- Score: 10.47725405370935
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In traditional deep learning algorithms, one of the key assumptions is that
the data distribution remains constant during both training and deployment.
However, this assumption becomes problematic when faced with
Out-of-Distribution periods, such as the COVID-19 lockdowns, where the data
distribution significantly deviates from what the model has seen during
training. This paper employs a two-fold strategy: utilizing continual learning
techniques to update models with new data and harnessing human mobility data
collected from privacy-preserving pedestrian counters located outside
buildings. In contrast to online learning, which suffers from 'catastrophic
forgetting' as newly acquired knowledge often erases prior information,
continual learning offers a holistic approach by preserving past insights while
integrating new data. This research applies FSNet, a powerful continual
learning algorithm, to real-world data from 13 building complexes in Melbourne,
Australia, a city that had the second-longest total lockdown duration globally
during the pandemic. Results underscore the crucial role of continual learning
in accurate energy forecasting, particularly during Out-of-Distribution
periods. Secondary data such as mobility and temperature provided ancillary
support to the primary forecasting model. More importantly, while traditional
methods struggled to adapt during lockdowns, models featuring at least online
learning demonstrated resilience, with lockdown periods posing fewer challenges
once armed with adaptive learning techniques. This study contributes valuable
methodologies and insights to the ongoing effort to improve energy load
forecasting during future Out-of-Distribution periods.
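The contrast the abstract draws between plain online learning (which overwrites prior knowledge) and continual learning (which preserves it) can be illustrated with a minimal sketch. This is my own toy illustration, not the paper's FSNet algorithm: a one-feature linear load model updated by SGD, where the "continual" variant simply replays a small memory of early samples alongside each new batch, and the intercept shift stands in for a lockdown-induced distribution change.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_stream(n, shift=0.0):
    # Synthetic "load" as a linear function of one driver (say, temperature),
    # plus an intercept shift standing in for an Out-of-Distribution period.
    x = rng.normal(size=n)
    y = 3.0 * x + shift + rng.normal(scale=0.1, size=n)
    return x, y

class OnlineLinear:
    """Plain online learner: one SGD step per incoming batch."""
    def __init__(self, lr=0.5):
        self.w, self.b, self.lr = 0.0, 0.0, lr

    def step(self, x, y):
        err = self.w * x + self.b - y
        self.w -= self.lr * np.mean(err * x)
        self.b -= self.lr * np.mean(err)

    def mse(self, x, y):
        return float(np.mean((self.w * x + self.b - y) ** 2))

class ReplayLinear(OnlineLinear):
    """Continual-style learner: mixes each new batch with a fixed
    memory of early samples, so past regimes keep exerting gradient."""
    def __init__(self, lr=0.5, memory=256):
        super().__init__(lr)
        self.mem_x = np.empty(0)
        self.mem_y = np.empty(0)
        self.memory = memory

    def step(self, x, y):
        super().step(np.concatenate([x, self.mem_x]),
                     np.concatenate([y, self.mem_y]))
        room = self.memory - len(self.mem_x)
        if room > 0:  # fill the memory from the earliest batches
            self.mem_x = np.concatenate([self.mem_x, x[:room]])
            self.mem_y = np.concatenate([self.mem_y, y[:room]])

normal = make_stream(512)                # pre-lockdown regime
lockdown = make_stream(512, shift=2.0)   # shifted regime

online, replay = OnlineLinear(), ReplayLinear()
for model in (online, replay):
    for x, y in (normal, lockdown):
        for i in range(0, len(x), 32):
            model.step(x[i:i + 32], y[i:i + 32])

# After streaming through the lockdown regime, the plain online learner
# has drifted fully to the new intercept and "forgotten" the pre-lockdown
# relationship; the replay learner retains much of it.
print(online.mse(*normal), replay.mse(*normal))
```

The replay buffer here is the simplest possible stand-in for the "preserving past insights while integrating new data" behaviour the abstract attributes to continual learning; FSNet itself uses a more sophisticated adapter-and-memory mechanism.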
Related papers
- TEAM: Topological Evolution-aware Framework for Traffic Forecasting--Extended Version [24.544665297938437]
Topological Evolution-aware Framework (TEAM) for traffic forecasting incorporates convolution and attention.
TEAM achieves much lower re-training costs than existing methods without jeopardizing forecasting accuracy.
arXiv Detail & Related papers (2024-10-24T22:50:21Z) - Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
arXiv Detail & Related papers (2024-10-10T10:58:41Z) - Continual Learning with Pre-Trained Models: A Survey [61.97613090666247]
Continual Learning aims to overcome catastrophic forgetting of former knowledge when learning new tasks.
This paper presents a comprehensive survey of the latest advancements in PTM-based CL.
arXiv Detail & Related papers (2024-01-29T18:27:52Z) - Temporal Knowledge Distillation for Time-Sensitive Financial Services
Applications [7.1795069620810805]
Anomaly detection is frequently used in key compliance and risk functions such as financial crime detection, fraud, and cybersecurity.
Keeping up with the rapid changes by retraining the models with the latest data patterns introduces pressures in balancing the historical and current patterns.
The proposed approach provides advantages in retraining times while improving the model performance.
arXiv Detail & Related papers (2023-12-28T03:04:30Z) - Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive
Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Continually learning out-of-distribution spatiotemporal data for robust
energy forecasting [10.47725405370935]
Building energy usage is essential for promoting sustainability and reducing waste.
Forecasting energy usage during anomalous periods is difficult due to changes in occupancy patterns and energy usage behavior.
Online learning has emerged as a promising solution to this challenge.
We have conducted experiments using data from six buildings to test the efficacy of these approaches.
arXiv Detail & Related papers (2023-06-10T09:12:10Z) - Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time [69.77704012415845]
Temporal shifts can considerably degrade performance of machine learning models deployed in the real world.
We benchmark 13 prior approaches, including methods in domain generalization, continual learning, self-supervised learning, and ensemble learning.
Under both evaluation strategies, we observe an average performance drop of 20% from in-distribution to out-of-distribution data.
arXiv Detail & Related papers (2022-11-25T17:07:53Z) - Building Autocorrelation-Aware Representations for Fine-Scale
Spatiotemporal Prediction [1.2862507359003323]
We present a novel deep learning architecture that incorporates theories of spatial statistics into neural networks.
DeepLATTE contains an autocorrelation-guided semi-supervised learning strategy to enforce both local autocorrelation patterns and global autocorrelation trends.
We conduct a demonstration of DeepLATTE using publicly available data for an important public health topic, air quality prediction, in a complex physical environment.
arXiv Detail & Related papers (2021-12-10T03:21:19Z) - PRNet: A Periodic Residual Learning Network for Crowd Flow Forecasting [8.50942649992681]
We devise a novel periodic residual learning network (PRNet) for better modeling the periodicity in crowd flow data.
PRNet frames crowd flow forecasting as a periodic residual learning problem by modeling the deviation between the input (the previous time period) and the output (the future time period).
Experimental results on two real-world datasets demonstrate that PRNet outperforms the state-of-the-art methods in terms of both accuracy and robustness.
arXiv Detail & Related papers (2021-12-08T12:04:27Z) - Online Continual Learning with Natural Distribution Shifts: An Empirical
Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.