Ensemble Forecasting of Monthly Electricity Demand using Pattern
Similarity-based Methods
- URL: http://arxiv.org/abs/2004.00426v1
- Date: Sun, 29 Mar 2020 17:26:58 GMT
- Title: Ensemble Forecasting of Monthly Electricity Demand using Pattern
Similarity-based Methods
- Authors: Paweł Pełka, Grzegorz Dudek
- Abstract summary: This work presents ensemble forecasting of monthly electricity demand using pattern similarity-based forecasting methods (PSFMs).
The PSFMs applied in this study include the $k$-nearest neighbor model, the fuzzy neighborhood model, the kernel regression model, and the general regression neural network.
An empirical illustration applies the ensemble models, as well as individual PSFMs for comparison, to monthly electricity demand forecasting for 35 European countries.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents ensemble forecasting of monthly electricity demand using
pattern similarity-based forecasting methods (PSFMs). The PSFMs applied in this
study include the $k$-nearest neighbor model, the fuzzy neighborhood model, the
kernel regression model, and the general regression neural network. An integral
part of PSFMs is a time series representation based on patterns of time series
sequences. The pattern representation unifies the input and output data by
filtering out the trend and equalizing the variance. Two types of ensembles are
created: heterogeneous and homogeneous. The former combines base models of
different types, while the latter combines base models of a single type. Five
strategies are used to control the diversity of members in the homogeneous
approach. Diversity is generated using different subsets of training data,
different subsets of features, randomly disrupted input and output variables,
and randomly disrupted model parameters. An empirical illustration applies the
ensemble models, as well as individual PSFMs for comparison, to monthly
electricity demand forecasting for 35 European countries.
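To make the pattern-similarity idea concrete, here is a minimal sketch in Python (NumPy). It assumes a simple normalization of each yearly sequence (subtract the yearly mean, divide by the yearly standard deviation) as one way to "filter a trend and equalize variance", and a $k$-nearest-neighbor PSFM that averages the output patterns paired with the most similar input patterns. The normalization, the decoding step, the value of $k$, and the synthetic data are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def to_patterns(series, period=12):
    """Split a monthly series into yearly sequences and normalize each one.
    Subtracting the yearly mean and dividing by the yearly standard deviation
    is an assumed concrete form of "filtering a trend and equalizing variance";
    the paper may define its patterns differently."""
    seqs = np.asarray(series, dtype=float).reshape(-1, period)
    means = seqs.mean(axis=1, keepdims=True)
    stds = seqs.std(axis=1, keepdims=True)
    return (seqs - means) / stds, means.ravel(), stds.ravel()

def knn_psfm_forecast(x_patterns, y_patterns, query, k=3):
    """k-NN PSFM: average the output patterns paired with the k training
    input patterns closest (Euclidean distance) to the query pattern."""
    dists = np.linalg.norm(x_patterns - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return y_patterns[nearest].mean(axis=0)

# Toy demonstration on synthetic monthly demand with trend and annual seasonality.
rng = np.random.default_rng(0)
months = np.arange(10 * 12)
demand = 100 + 0.5 * months + 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 2, months.size)

patterns, means, stds = to_patterns(demand)
x_tr, y_tr = patterns[:-2], patterns[1:-1]   # x-pattern: year t, y-pattern: year t+1
y_hat_pattern = knn_psfm_forecast(x_tr, y_tr, patterns[-2], k=3)

# Decode the forecast pattern back to the demand level using the most recent
# known year's statistics (an assumed decoding step, not necessarily the paper's).
y_hat = y_hat_pattern * stds[-2] + means[-2]
print(np.round(y_hat, 1))
```

The fuzzy neighborhood, kernel regression, and GRNN variants differ mainly in how neighbors are weighted, e.g., smooth distance-based weights over all training patterns rather than a hard top-$k$ selection.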
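The abstract also describes homogeneous ensembles whose members are a single model type made diverse through randomized training data, feature subsets, disrupted input and output variables, and disrupted parameters. The sketch below illustrates three of those five diversity strategies around a $k$-NN base forecaster; the member count, noise level, and subset sizes are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def knn_forecast(x_tr, y_tr, query, k=3, feat_idx=None):
    """Base PSFM member: mean output pattern of the k nearest x-patterns,
    optionally measuring similarity only on a subset of pattern components."""
    if feat_idx is None:
        feat_idx = np.arange(x_tr.shape[1])
    d = np.linalg.norm(x_tr[:, feat_idx] - query[feat_idx], axis=1)
    return y_tr[np.argsort(d)[:k]].mean(axis=0)

def homogeneous_ensemble(x_tr, y_tr, query, n_members=20, k=3, noise=0.05, seed=0):
    """Average the forecasts of identically-typed members whose diversity comes
    from (i) bootstrap subsets of the training pairs, (ii) random subsets of
    pattern components, and (iii) random disruption of the input patterns."""
    rng = np.random.default_rng(seed)
    n, p = x_tr.shape
    forecasts = []
    for _ in range(n_members):
        rows = rng.choice(n, size=n, replace=True)                   # (i) data subset
        feats = rng.choice(p, size=max(2, p // 2), replace=False)    # (ii) feature subset
        x_b = x_tr[rows] + rng.normal(0.0, noise, size=(n, p))       # (iii) disrupted inputs
        forecasts.append(knn_forecast(x_b, y_tr[rows], query, k=k, feat_idx=feats))
    return np.mean(forecasts, axis=0)

# Toy usage with random pattern pairs of length 12 (monthly patterns).
rng = np.random.default_rng(1)
x_tr, y_tr = rng.normal(size=(8, 12)), rng.normal(size=(8, 12))
print(np.round(homogeneous_ensemble(x_tr, y_tr, rng.normal(size=12)), 2))
```

A heterogeneous ensemble would instead combine forecasts from different model types (k-NN, fuzzy neighborhood, kernel regression, GRNN) rather than from perturbed copies of a single model.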
Related papers
- Identification of Novel Modes in Generative Models via Fourier-based Differential Clustering [33.22153760327227]
We propose a method called Fourier-based Identification of Novel Clusters (FINC) to identify the modes that a generative model produces with higher frequency.
We demonstrate the application of FINC to large-scale computer vision datasets and generative model frameworks.
arXiv Detail & Related papers (2024-05-04T16:06:50Z) - Finite Mixtures of Multivariate Poisson-Log Normal Factor Analyzers for
Clustering Count Data [0.8499685241219366]
A class of eight parsimonious mixture models based on the mixtures of factor analyzers model is introduced.
The proposed models are explored in the context of clustering discrete data arising from RNA sequencing studies.
arXiv Detail & Related papers (2023-11-13T21:23:15Z) - Improving Out-of-Distribution Robustness of Classifiers via Generative
Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
arXiv Detail & Related papers (2023-07-23T03:53:53Z) - A Class of Dependent Random Distributions Based on Atom Skipping [2.3258287344692676]
We propose the Plaid Atoms Model (PAM), a novel Bayesian nonparametric model for grouped data.
PAM produces a dependent clustering pattern with overlapping and non-overlapping clusters across groups.
arXiv Detail & Related papers (2023-04-28T16:18:43Z) - Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data only by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performances comparable to those of a logistic model trained with the full unaggregated data.
arXiv Detail & Related papers (2022-10-05T09:17:27Z) - Sparse Communication via Mixed Distributions [29.170302047339174]
We build theoretical foundations for "mixed random variables".
Our framework suggests two strategies for representing and sampling mixed random variables.
We experiment with both approaches on an emergent communication benchmark.
arXiv Detail & Related papers (2021-08-05T14:49:03Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose an FMR model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-10-12T03:27:07Z) - Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z) - Evaluating the Disentanglement of Deep Generative Models through
Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z) - Pattern Similarity-based Machine Learning Methods for Mid-term Load
Forecasting: A Comparative Study [0.0]
We use pattern similarity-based methods for forecasting monthly electricity demand expressing annual seasonality.
An integral part of the models is the time series representation using patterns of time series sequences.
We consider four such models: nearest neighbor model, fuzzy neighborhood model, kernel regression model and general regression neural network.
arXiv Detail & Related papers (2020-03-03T12:14:36Z)