Generating Multivariate Load States Using a Conditional Variational
Autoencoder
- URL: http://arxiv.org/abs/2110.11435v1
- Date: Thu, 21 Oct 2021 19:07:04 GMT
- Title: Generating Multivariate Load States Using a Conditional Variational
Autoencoder
- Authors: Chenguang Wang, Ensieh Sharifnia, Zhi Gao, Simon H. Tindemans, Peter
Palensky
- Abstract summary: A conditional variational autoencoder (CVAE) neural network is proposed in this paper.
The model includes stochastic variation of output samples under given latent vectors and co-optimizes the parameters for this output variability.
Experiments demonstrate that the proposed generator outperforms other data generating mechanisms.
- Score: 11.557259513691239
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For planning of power systems and for the calibration of operational tools,
it is essential to analyse system performance in a large range of
representative scenarios. When the available historical data is limited,
generative models are a promising solution, but modelling high-dimensional
dependencies is challenging. In this paper, a multivariate load state
generating model on the basis of a conditional variational autoencoder (CVAE)
neural network is proposed. Going beyond common CVAE implementations, the model
includes stochastic variation of output samples under given latent vectors and
co-optimizes the parameters for this output variability. It is shown that this
improves statistical properties of the generated data. The quality of generated
multivariate loads is evaluated using univariate and multivariate performance
metrics. A generation adequacy case study on the European network is used to
illustrate the model's ability to generate realistic tail distributions. The
experiments demonstrate that the proposed generator outperforms other data
generating mechanisms.
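The decoder-side idea described in the abstract — learning an output variance alongside the mean, rather than the fixed-variance (plain MSE) reconstruction used in common CVAEs — can be sketched as follows. This is a minimal numpy illustration under stated assumptions, not the paper's implementation; the array values and function names are illustrative.

```python
import numpy as np

def gaussian_nll(x, mu, log_var):
    """Negative log-likelihood of x under a diagonal Gaussian N(mu, exp(log_var)).

    Co-optimizing log_var alongside mu lets the decoder model the stochastic
    variation of output samples for a given latent vector, instead of assuming
    a fixed output variance as in a plain MSE reconstruction loss.
    """
    return 0.5 * np.sum(
        log_var + (x - mu) ** 2 / np.exp(log_var) + np.log(2.0 * np.pi), axis=-1
    )

def sample_output(mu, log_var, rng):
    """Draw one load state: decoder mean plus learned per-dimension noise."""
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 3.0])             # an observed multivariate load state
mu = np.array([1.1, 1.9, 3.2])            # decoder mean for a latent/condition pair
log_var = np.log(np.full(3, 0.05))        # learned output variance (illustrative)

nll = gaussian_nll(x, mu, log_var)        # reconstruction term of the CVAE loss
sample = sample_output(mu, log_var, rng)  # stochastic sample around the mean
```

In a full CVAE this reconstruction term is summed with the usual KL divergence between the encoder's posterior and the prior over latent vectors.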
Related papers
- Stable Training of Probabilistic Models Using the Leave-One-Out Maximum Log-Likelihood Objective [0.7373617024876725]
Kernel density estimation (KDE) based models are popular choices for this task, but they fail to adapt to data regions with varying densities.
An adaptive KDE model is employed to circumvent this, where each kernel in the model has an individual bandwidth.
A modified expectation-maximization algorithm is employed to accelerate the optimization speed reliably.
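The per-kernel-bandwidth idea above can be sketched as a small numpy mixture evaluation. In the paper the bandwidths are learned with a modified EM algorithm; here they are simply given, illustrative values, and `adaptive_kde` is a hypothetical helper name.

```python
import numpy as np

def adaptive_kde(query, samples, bandwidths):
    """Evaluate a 1-D Gaussian KDE in which each kernel has its own bandwidth."""
    query = np.atleast_1d(np.asarray(query, dtype=float))[:, None]  # (Q, 1)
    z = (query - samples[None, :]) / bandwidths[None, :]            # (Q, N)
    kernels = np.exp(-0.5 * z ** 2) / (bandwidths[None, :] * np.sqrt(2.0 * np.pi))
    return kernels.mean(axis=1)  # equal-weight mixture over the N kernels

samples = np.array([-2.0, 0.0, 0.1, 2.5])
bandwidths = np.array([0.8, 0.2, 0.2, 0.8])  # narrower kernels in the denser region
density = adaptive_kde(np.linspace(-4.0, 4.0, 9), samples, bandwidths)
```

Because each kernel integrates to one, the equal-weight mixture is itself a valid density regardless of how the individual bandwidths are chosen.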
arXiv Detail & Related papers (2023-10-05T14:08:42Z)
- Differential Evolution Algorithm based Hyper-Parameters Selection of Transformer Neural Network Model for Load Forecasting [0.0]
Transformer models have the potential to improve load forecasting because of their ability to learn long-range dependencies via their attention mechanism.
Our work compares the proposed Transformer-based neural network model integrated with different metaheuristic algorithms by their performance in load forecasting, based on numerical metrics such as mean squared error (MSE) and mean absolute percentage error (MAPE).
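The two metrics named above are straightforward to compute; a minimal sketch, with illustrative load values rather than any figures from the paper:

```python
import numpy as np

def mse(actual, forecast):
    """Mean squared error of a load forecast."""
    return float(np.mean((actual - forecast) ** 2))

def mape(actual, forecast):
    """Mean absolute percentage error, in percent (assumes nonzero actual load)."""
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100.0)

actual = np.array([100.0, 120.0, 90.0])    # illustrative hourly loads (MW)
forecast = np.array([110.0, 114.0, 90.0])
# errors of 10, 6 and 0 MW give MSE = 136/3 and MAPE = (10% + 5% + 0%)/3 = 5%
```

MAPE is scale-free, which makes it convenient for comparing forecasts across systems of different size, but it is undefined when the actual load is zero.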
arXiv Detail & Related papers (2023-07-28T04:29:53Z)
- Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
arXiv Detail & Related papers (2023-07-23T03:53:53Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Distributional Learning of Variational AutoEncoder: Application to Synthetic Data Generation [0.7614628596146602]
We propose a new approach that expands the model capacity without sacrificing the computational advantages of the VAE framework.
Our VAE model's decoder is composed of an infinite mixture of asymmetric Laplace distribution.
We apply the proposed model to synthetic data generation, and particularly, our model demonstrates superiority in easily adjusting the level of data privacy.
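The asymmetric Laplace building block of the decoder described above has a simple closed-form density. A rough numpy sketch, with illustrative parameter values not taken from the paper:

```python
import numpy as np

def asym_laplace_pdf(x, mu, lam, kappa):
    """Density of the asymmetric Laplace distribution AL(mu, lam, kappa).

    kappa controls the left/right asymmetry (kappa = 1 recovers the symmetric
    Laplace); a weighted mixture of such components can model skewed,
    heavy-tailed marginals.
    """
    s = np.sign(x - mu)
    return (lam / (kappa + 1.0 / kappa)) * np.exp(-(x - mu) * s * lam * kappa ** s)

xs = np.linspace(-20.0, 20.0, 8001)
pdf = asym_laplace_pdf(xs, mu=0.0, lam=1.0, kappa=2.0)
```

With kappa = 2 the density decays at rate 2·lam to the right of mu and lam/2 to the left, producing the left-skewed shape a symmetric Gaussian or Laplace decoder cannot express.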
arXiv Detail & Related papers (2023-02-22T11:26:50Z)
- Transformer-based conditional generative adversarial network for multivariate time series generation [0.0]
Conditional generation of time-dependent data is a task of considerable interest.
Recent works proposed a Transformer-based Time series generative adversarial network (TTS-GAN)
We extend the TTS-GAN by conditioning its generated output on a particular encoded context.
We show that this transformer-based CGAN can generate realistic high-dimensional and long data sequences under different kinds of conditions.
arXiv Detail & Related papers (2022-10-05T08:29:33Z)
- Generating Contextual Load Profiles Using a Conditional Variational Autoencoder [3.9275220564515743]
We describe a generative model for load profiles of industrial and commercial customers, based on the conditional variational autoencoder (CVAE) neural network architecture.
The experimental results demonstrate that our proposed CVAE model can capture temporal features of historical load profiles and generate realistic data.
arXiv Detail & Related papers (2022-09-08T23:12:54Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models [95.8037674226622]
We introduce a 3-dimensional evaluation metric that characterizes the fidelity, diversity and generalization performance of any generative model in a domain-agnostic fashion.
Our metric unifies statistical divergence measures with precision-recall analysis, enabling sample- and distribution-level diagnoses of model fidelity and diversity.
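One common family of sample-level fidelity/diversity diagnostics in this spirit (not necessarily the paper's exact metric) is k-NN manifold precision and recall: a generated sample counts as precise if it falls inside the k-nearest-neighbour ball of some real sample, and vice versa for recall. A rough numpy sketch; `support_coverage` and the toy data are hypothetical.

```python
import numpy as np

def knn_radii(points, k):
    """Distance from each point to its k-th nearest neighbour (self excluded)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.sort(d, axis=1)[:, k]  # column 0 is the zero self-distance

def support_coverage(candidates, reference, k=3):
    """Fraction of candidates inside some k-NN ball of the reference set."""
    radii = knn_radii(reference, k)
    d = np.linalg.norm(candidates[:, None, :] - reference[None, :, :], axis=-1)
    return float(np.mean(np.any(d <= radii[None, :], axis=1)))

rng = np.random.default_rng(1)
real = rng.standard_normal((200, 2))
fake = rng.standard_normal((200, 2))
precision = support_coverage(fake, real)  # fidelity: fakes lie near real samples
recall = support_coverage(real, fake)     # diversity: reals are covered by fakes
```

Because the toy "fake" data is drawn from the same distribution as the "real" data, both scores come out close to 1; a mode-collapsed generator would keep precision high while recall drops.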
arXiv Detail & Related papers (2021-02-17T18:25:30Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.