Short-Term Load Forecasting using Bi-directional Sequential Models and
Feature Engineering for Small Datasets
- URL: http://arxiv.org/abs/2011.14137v1
- Date: Sat, 28 Nov 2020 14:11:35 GMT
- Title: Short-Term Load Forecasting using Bi-directional Sequential Models and
Feature Engineering for Small Datasets
- Authors: Abdul Wahab, Muhammad Anas Tahir, Naveed Iqbal, Faisal Shafait, Syed
Muhammad Raza Kazmi
- Abstract summary: This paper presents a deep learning architecture for short-term load forecasting based on bidirectional sequential models.
In the proposed architecture, the raw input and hand-crafted features are trained at separate levels and then their respective outputs are combined to make the final prediction.
The efficacy of the proposed methodology is evaluated on datasets from five countries with completely different patterns.
- Score: 6.619735628398446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Electricity load forecasting enables grid operators to optimally
implement the smart grid's most essential features, such as demand response
and energy efficiency. Electricity demand profiles can vary drastically from
one region to another on diurnal, seasonal, and yearly scales. Hence, devising
a load forecasting technique that can yield the best estimates on diverse
datasets, especially when the training data is limited, is a major challenge.
This paper presents a deep learning architecture for short-term load
forecasting based on bidirectional sequential models in conjunction with
feature engineering that extracts hand-crafted derived features to aid the
model in learning and prediction. In the proposed architecture, named Deep
Derived Feature Fusion (DeepDeFF), the raw input and hand-crafted features are
trained at separate levels and their respective outputs are then combined to
make the final prediction. The efficacy of the proposed methodology is
evaluated on datasets from five countries with completely different patterns.
The results demonstrate that the proposed technique is superior to the
existing state of the art.
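The paper's code is not included on this page; the following is a minimal sketch, assuming a TensorFlow/Keras environment, of the two-level idea described in the abstract: one bidirectional LSTM branch over the raw load sequence, a second branch over hand-crafted derived features, and a fusion layer that combines both for the final prediction. The sequence length, feature count, and layer sizes are placeholders, not the paper's settings.

```python
import tensorflow as tf

SEQ_LEN = 24        # hours of raw load history (illustrative)
N_FEATURES = 8      # number of hand-crafted derived features (illustrative)

# Branch 1: bidirectional LSTM over the raw load sequence.
raw_in = tf.keras.Input(shape=(SEQ_LEN, 1), name="raw_load")
raw_branch = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(32))(raw_in)

# Branch 2: dense sub-network over hand-crafted features
# (e.g. lags, rolling means, calendar indicators).
feat_in = tf.keras.Input(shape=(N_FEATURES,), name="derived_features")
feat_branch = tf.keras.layers.Dense(16, activation="relu")(feat_in)

# Fusion: combine both branch outputs and predict the next-step load.
merged = tf.keras.layers.Concatenate()([raw_branch, feat_branch])
output = tf.keras.layers.Dense(1, name="load_forecast")(merged)

model = tf.keras.Model(inputs=[raw_in, feat_in], outputs=output)
model.compile(optimizer="adam", loss="mae")
model.summary()
```

In DeepDeFF the specific derived features and the exact fusion scheme are part of the contribution; the sketch above only fixes the overall two-branch topology.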
Related papers
- SMPLest-X: Ultimate Scaling for Expressive Human Pose and Shape Estimation [81.36747103102459]
Expressive human pose and shape estimation (EHPS) unifies body, hands, and face motion capture with numerous applications.
Current state-of-the-art methods focus on training innovative architectural designs on confined datasets.
We investigate the impact of scaling up EHPS towards a family of generalist foundation models.
arXiv Detail & Related papers (2025-01-16T18:59:46Z)
- PowerMamba: A Deep State Space Model and Comprehensive Benchmark for Time Series Prediction in Electric Power Systems [6.516425351601512]
Time series prediction models are needed to close the gap between forecasted and actual grid outcomes.
We introduce a multivariate time series prediction model that combines traditional state space models with deep learning methods.
We release an extended dataset spanning five years of load, electricity price, ancillary service price, and renewable generation.
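PowerMamba's architecture is not detailed in this snippet; as background on the "state space model" component such models build on, here is a minimal NumPy sketch of the classical discrete linear recurrence x[k+1] = A x[k] + B u[k], y[k] = C x[k]. All matrices and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: 4 hidden states, scalar input and output.
A = 0.9 * np.eye(4)              # state transition (stable by construction)
B = rng.normal(size=(4, 1))      # input matrix
C = rng.normal(size=(1, 4))      # output (readout) matrix

def ssm_scan(u):
    """Run x[k+1] = A x[k] + B u[k], y[k] = C x[k] over an input sequence."""
    x = np.zeros((4, 1))
    ys = []
    for u_k in u:
        y = C @ x
        x = A @ x + B * u_k
        ys.append(y.item())
    return np.array(ys)

# Example: the model's response to a unit impulse over 10 steps.
print(ssm_scan(np.array([1.0] + [0.0] * 9)))
```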
arXiv Detail & Related papers (2024-12-09T00:23:34Z)
- Transfer Learning for Deep Learning-based Prediction of Lattice Thermal Conductivity [0.0]
We study the impact of transfer learning on the precision and generalizability of a deep learning model (ParAIsite).
We show that a much greater improvement is obtained when first fine-tuning it on a large dataset of low-quality approximations of lattice thermal conductivity (LTC).
The promising results pave the way towards a greater ability to explore large databases in search of low thermal conductivity materials.
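The summary above describes a two-stage fine-tuning recipe; below is a generic, hypothetical Keras sketch of that recipe (ParAIsite's real architecture, data, and training setup are not reproduced): fine-tune first on many low-quality LTC approximations, then on the small high-quality set at a lower learning rate.

```python
import tensorflow as tf

def make_regressor(n_inputs: int) -> tf.keras.Model:
    """Small MLP regressor standing in for the real model (hypothetical)."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_inputs,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

model = make_regressor(n_inputs=32)
model.compile(optimizer="adam", loss="mse")

# Stage 1: fine-tune on a large dataset of low-quality LTC approximations.
# model.fit(x_lowq, y_lowq, epochs=20)

# Stage 2: fine-tune on the small, high-quality LTC dataset, typically
# with a reduced learning rate to avoid forgetting the first stage.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
# model.fit(x_highq, y_highq, epochs=50)
```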
arXiv Detail & Related papers (2024-11-27T11:57:58Z)
- Exploring the design space of deep-learning-based weather forecasting systems [56.129148006412855]
This paper systematically analyzes the impact of different design choices on deep-learning-based weather forecasting systems.
We study fixed-grid architectures such as UNet, fully convolutional architectures, and transformer-based models.
We propose a hybrid system that combines the strong performance of fixed-grid models with the flexibility of grid-invariant architectures.
arXiv Detail & Related papers (2024-10-09T22:25:50Z)
- Generative Pretrained Hierarchical Transformer for Time Series Forecasting [3.739587363053192]
We propose a novel generative pretrained hierarchical transformer architecture for forecasting, named GPHT.
We conduct extensive experiments on eight datasets against mainstream self-supervised pretraining models and supervised models.
The results demonstrate that GPHT surpasses the baseline models across various fine-tuning and zero/few-shot learning settings on the traditional long-term forecasting task.
arXiv Detail & Related papers (2024-02-26T11:54:54Z)
- LESS: Selecting Influential Data for Targeted Instruction Tuning [64.78894228923619]
We propose LESS, an efficient algorithm to estimate data influences and perform Low-rank gradiEnt Similarity Search for instruction data selection.
We show that training on a LESS-selected 5% of the data can often outperform training on the full dataset across diverse downstream tasks.
Our method goes beyond surface form cues to identify data that exemplifies the necessary reasoning skills for the intended downstream application.
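As a rough illustration of the low-rank gradient similarity search idea (not the authors' implementation), this sketch randomly projects per-example gradient features to a low dimension and ranks training examples by cosine similarity to a target-task gradient; the gradients here are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

n_train, grad_dim, proj_dim = 1000, 4096, 64

# Placeholder gradient features; in practice these come from the model.
train_grads = rng.normal(size=(n_train, grad_dim))
target_grad = rng.normal(size=(grad_dim,))

# Low-rank random projection (Johnson-Lindenstrauss style).
P = rng.normal(size=(grad_dim, proj_dim)) / np.sqrt(proj_dim)
train_proj = train_grads @ P
target_proj = target_grad @ P

# Cosine similarity between each projected training gradient and the target.
sims = (train_proj @ target_proj) / (
    np.linalg.norm(train_proj, axis=1) * np.linalg.norm(target_proj) + 1e-12)

# Keep the top 5% of examples, mirroring the "LESS-selected 5%" above.
k = int(0.05 * n_train)
selected = np.argsort(sims)[-k:]
print(selected.shape)  # (50,)
```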
arXiv Detail & Related papers (2024-02-06T19:18:04Z)
- When Parameter-efficient Tuning Meets General-purpose Vision-language Models [65.19127815275307]
PETAL revolutionizes the training process by updating only 0.5% of the total parameters, achieved through a unique mode approximation technique.
Our experiments reveal that PETAL not only outperforms current state-of-the-art methods in most scenarios but also surpasses full fine-tuning models in effectiveness.
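PETAL's mode approximation technique is not reproduced here; to make the "0.5% of parameters" idea concrete, the sketch below swaps in a generic low-rank adapter (LoRA-style) on a single frozen weight matrix and prints the resulting trainable fraction. Dimensions and rank are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 768, 768, 4  # illustrative transformer-layer sizes

W = rng.normal(size=(d_out, d_in))          # frozen pretrained weight
A = rng.normal(size=(d_out, rank)) * 0.01   # low-rank factor (would be trained)
B = np.zeros((rank, d_in))                  # low-rank factor (would be trained)

def adapted_forward(x):
    """Forward pass through the frozen W plus the low-rank update A @ B."""
    return (W + A @ B) @ x

y = adapted_forward(rng.normal(size=(d_in,)))

trainable = A.size + B.size
total = W.size + trainable
print(f"trainable fraction: {trainable / total:.4%}")
```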
arXiv Detail & Related papers (2023-12-16T17:13:08Z)
- Pre-training on Synthetic Driving Data for Trajectory Prediction [61.520225216107306]
We propose a pipeline-level solution to mitigate the issue of data scarcity in trajectory forecasting.
We adopt HD map augmentation and trajectory synthesis for generating driving data, and then we learn representations by pre-training on them.
We conduct extensive experiments to demonstrate the effectiveness of our data expansion and pre-training strategies.
arXiv Detail & Related papers (2023-09-18T19:49:22Z)
- Inductive biases in deep learning models for weather prediction [17.061163980363492]
We review and analyse the inductive biases of state-of-the-art deep learning-based weather prediction models.
We identify the most important inductive biases and highlight potential avenues towards more efficient and probabilistic DLWP models.
arXiv Detail & Related papers (2023-04-06T14:15:46Z)
- Multistep Multiappliance Load Prediction [0.0]
We develop a robust and accurate model for appliance-level load prediction based on four datasets from four different regions.
The empirical results show that cyclical encoding of time features and weather indicators, alongside a long short-term memory (LSTM) model, offers the best performance; a sketch of the cyclical encoding follows.
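A minimal sketch of the cyclical time-feature encoding mentioned above: hour-of-day is mapped to sine/cosine coordinates so that hour 23 and hour 0 end up adjacent in feature space, which is what makes the encoding useful for a sequence model.

```python
import numpy as np

def encode_cyclical(values, period):
    """Map a cyclical variable (e.g. hour of day) onto the unit circle so
    that the last value of a cycle is adjacent to the first."""
    angle = 2.0 * np.pi * np.asarray(values) / period
    return np.sin(angle), np.cos(angle)

hours = np.arange(24)
hour_sin, hour_cos = encode_cyclical(hours, period=24)

# Hour 23 and hour 0 are now close in feature space, unlike the raw integers.
print(hour_sin[[23, 0]], hour_cos[[23, 0]])
```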
arXiv Detail & Related papers (2022-12-19T13:01:51Z)
- DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models [152.29364079385635]
As pre-trained models grow bigger, the fine-tuning process can be time-consuming and computationally expensive.
We propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights.
Our proposed framework, dubbed Dually Sparsity-Embedded Efficient Tuning (DSEE), aims to achieve two key objectives: (i) parameter efficient fine-tuning and (ii) resource-efficient inference.
arXiv Detail & Related papers (2021-10-30T03:29:47Z)
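As a toy illustration of a sparsity prior on weight updates (not the actual DSEE algorithm, which also exploits low-rank structure and sparsifies the final model weights), this sketch keeps only the largest-magnitude 1% of a dense fine-tuning update. All arrays are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

W_pre = rng.normal(size=(256, 256))           # pretrained weight (placeholder)
delta = rng.normal(size=(256, 256)) * 0.01    # dense fine-tuning update

# Keep only the top 1% of update entries by magnitude (sparsity prior).
k = int(0.01 * delta.size)
threshold = np.partition(np.abs(delta).ravel(), -k)[-k]
sparse_delta = np.where(np.abs(delta) >= threshold, delta, 0.0)

W_tuned = W_pre + sparse_delta
print(f"nonzero update entries: {np.count_nonzero(sparse_delta)} / {delta.size}")
```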
This list is automatically generated from the titles and abstracts of the papers on this site.