Neural networks in day-ahead electricity price forecasting: Single vs.
multiple outputs
- URL: http://arxiv.org/abs/2008.08006v1
- Date: Tue, 18 Aug 2020 16:20:31 GMT
- Title: Neural networks in day-ahead electricity price forecasting: Single vs.
multiple outputs
- Authors: Grzegorz Marcjasz, Jesus Lago, Rafał Weron
- Abstract summary: In electricity price forecasting, neural networks are the most popular machine learning method.
This paper provides a comprehensive comparison of the two most common deep neural network structures: one that forecasts each hour of the day separately and one that models the full vector of 24 hourly prices.
Results show a significant accuracy advantage of the latter, confirmed on data from five distinct power exchanges.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in artificial intelligence and machine learning
have led to a significant increase in the popularity of these methods in the
literature, including in electricity price forecasting. They cover a very broad
spectrum, from decision trees, through random forests, to various artificial
neural network models and hybrid approaches. In electricity price forecasting,
neural networks are the most popular machine learning method, as they provide a
non-linear counterpart to well-tested linear regression models. Their
application, however, is not straightforward, with multiple implementation
factors to consider. One such factor is the network's structure. This paper
provides a comprehensive comparison of the two most common structures used with
deep neural networks: one that forecasts each hour of the day separately, and
one that reflects the daily auction structure and models the vector of hourly
prices. The results show a significant accuracy advantage of the latter,
confirmed on data from five distinct power exchanges.
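To make the two structures concrete, below is a minimal sketch in PyTorch (the paper does not prescribe a framework); the input dimension, hidden-layer width, and feature choice are placeholder assumptions rather than the authors' actual setup.

```python
# Hypothetical sketch of the two network structures compared in the paper:
# single-output (one forecast per delivery hour) vs. multi-output (all 24 hours).
import torch
import torch.nn as nn

N_FEATURES = 96   # assumed input size, e.g. lagged prices and exogenous variables
HIDDEN = 64       # assumed hidden-layer width

class SingleOutputNet(nn.Module):
    """Forecasts the price of a single delivery hour; a full day-ahead
    forecast requires 24 such models (one per hour)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 1),
        )

    def forward(self, x):
        return self.net(x)

class MultiOutputNet(nn.Module):
    """Reflects the daily auction structure: one model outputs the full
    vector of 24 hourly prices at once."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 24),
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    x = torch.randn(8, N_FEATURES)                  # a batch of 8 days of inputs
    hourly_models = [SingleOutputNet() for _ in range(24)]
    single = torch.cat([m(x) for m in hourly_models], dim=1)  # shape (8, 24)
    multi = MultiOutputNet()(x)                                # shape (8, 24)
    print(single.shape, multi.shape)
```

The single-output approach fits an independent model per hour, whereas the multi-output network shares its hidden layers across all 24 delivery hours; the abstract reports that the latter, vector-valued structure is significantly more accurate across five power exchanges.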
Related papers
- Hybrid deep additive neural networks [0.0]
We introduce novel deep neural networks that incorporate the idea of additive regression.
Our neural networks share architectural similarities with Kolmogorov-Arnold networks but are based on simpler yet flexible activation and basis functions.
We derive their universal approximation properties and demonstrate their effectiveness through simulation studies and a real-data application.
arXiv Detail & Related papers (2024-11-14T04:26:47Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Enhancing Actuarial Non-Life Pricing Models via Transformers [0.0]
We build on the foundation laid out by the combined actuarial neural network as well as the localGLMnet and enhance those models via the feature tokenizer transformer.
The paper shows that the new methods can achieve better results than the benchmark models while preserving certain generalized linear model advantages.
arXiv Detail & Related papers (2023-11-10T12:06:23Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- The Contextual Lasso: Sparse Linear Models via Deep Neural Networks [5.607237982617641]
We develop a new statistical estimator that fits a sparse linear model to the explanatory features such that the sparsity pattern and coefficients vary as a function of the contextual features.
An extensive suite of experiments on real and synthetic data suggests that the learned models, which remain highly transparent, can be sparser than the regular lasso.
arXiv Detail & Related papers (2023-02-02T05:00:29Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs, as models must be trained in order to predict their performance.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Bilinear Input Normalization for Neural Networks in Financial Forecasting [101.89872650510074]
We propose a novel data-driven normalization method for deep neural networks that handle high-frequency financial time-series.
The proposed normalization scheme takes into account the bimodal characteristic of financial time-series.
Our experiments, conducted with state-of-the-art neural networks and high-frequency data, show significant improvements over other normalization techniques.
arXiv Detail & Related papers (2021-09-01T07:52:03Z)
- Creating Powerful and Interpretable Models with Regression Networks [2.2049183478692584]
We propose a novel architecture, Regression Networks, which combines the power of neural networks with the understandability of regression analysis.
We demonstrate that the models exceed the state-of-the-art performance of interpretable models on several benchmark datasets.
arXiv Detail & Related papers (2021-07-30T03:37:00Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- Measurement error models: from nonparametric methods to deep neural networks [3.1798318618973362]
We propose an efficient neural network design for estimating measurement error models.
We use a fully connected feed-forward neural network to approximate the regression function $f(x)$.
We conduct an extensive numerical study to compare the neural network approach with classical nonparametric methods.
arXiv Detail & Related papers (2020-07-15T06:05:37Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.