A Data-Driven Machine Learning Approach for Consumer Modeling with Load
Disaggregation
- URL: http://arxiv.org/abs/2011.03519v1
- Date: Wed, 4 Nov 2020 13:36:11 GMT
- Title: A Data-Driven Machine Learning Approach for Consumer Modeling with Load
Disaggregation
- Authors: A. Khaled Zarabie, Sanjoy Das, and Hongyu Wu
- Abstract summary: We propose a generic class of data-driven semiparametric models derived from consumption data of residential consumers.
In the first stage, disaggregation of the load into fixed and shiftable components is accomplished by means of a hybrid algorithm.
In the second stage, the model parameters are estimated using an L2-norm, epsilon-insensitive regression approach.
- Score: 1.6058099298620423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While non-parametric models, such as neural networks, are sufficient
for load forecasting, separate estimates of fixed and shiftable loads are
beneficial to a wide range of applications such as distribution system
operational planning, load scheduling, energy trading, and utility demand
response programs. Such applications usually require a semiparametric estimation
model in which the cost sensitivities of demands are known. Existing research
consistently relies on somewhat arbitrary parameter values that appear to work best. In this
paper, we propose a generic class of data-driven semiparametric models derived
from consumption data of residential consumers. A two-stage machine learning
approach is developed. In the first stage, disaggregation of the load into
fixed and shiftable components is accomplished by means of a hybrid algorithm
consisting of non-negative matrix factorization (NMF) and Gaussian mixture
models (GMM), with the latter trained by an expectation-maximization (EM)
algorithm. The fixed and shiftable loads are subject to analytic treatment with
economic considerations. In the second stage, the model parameters are
estimated using an L2-norm, epsilon-insensitive regression approach. Actual
energy usage data from two residential customers demonstrate the validity of the
proposed method.
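To make the two-stage approach concrete, the sketch below uses scikit-learn stand-ins: NMF for the factorization, GaussianMixture (fitted by EM) for clustering, and LinearSVR, which minimizes an epsilon-insensitive loss with an L2 penalty on the coefficients. The synthetic load data, the component-labelling heuristic, and the regression features and target are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the two-stage pipeline described above, assuming
# scikit-learn stand-ins; feature construction and labelling are illustrative.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)

# Hourly consumption matrix: one row per day, one column per hour (kWh).
load = rng.gamma(shape=2.0, scale=0.5, size=(365, 24))

# ---- Stage 1: disaggregate the load into fixed and shiftable components ----
# NMF factors the non-negative load matrix into daily activations W and
# hourly usage patterns H.
nmf = NMF(n_components=6, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(load)   # (365, 6) daily activations
H = nmf.components_           # (6, 24)  hourly patterns

# A GMM fitted by EM clusters the components; the assumption here is that
# flat, low-variance patterns represent fixed load and peaky, high-variance
# patterns represent shiftable load.
stats = np.column_stack([H.mean(axis=1), H.std(axis=1)])
gmm = GaussianMixture(n_components=2, covariance_type="diag",
                      random_state=0).fit(stats)
labels = gmm.predict(stats)
shiftable_label = labels[np.argmax(H.std(axis=1))]   # peakiest component's cluster
shiftable = labels == shiftable_label

fixed_load = W[:, ~shiftable] @ H[~shiftable]        # (365, 24)
shiftable_load = W[:, shiftable] @ H[shiftable]      # (365, 24)

# ---- Stage 2: L2-regularized, epsilon-insensitive regression ---------------
# Illustrative target: relate daily shiftable energy to a synthetic price
# signal and total demand, i.e. estimate cost-sensitivity-like coefficients.
price = rng.uniform(0.10, 0.40, size=365)            # $/kWh, synthetic
X = np.column_stack([price, load.sum(axis=1)])       # assumed feature set
y = shiftable_load.sum(axis=1)                       # daily shiftable kWh

reg = LinearSVR(epsilon=0.1, C=1.0, max_iter=10_000).fit(X, y)
print("estimated coefficients:", reg.coef_, "intercept:", reg.intercept_)
```

Against real meter data, the GMM would operate on whatever component statistics the disaggregation calls for, and the second-stage regression would target the consumer's cost sensitivities rather than this toy price relation.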
Related papers
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Few-Shot Load Forecasting Under Data Scarcity in Smart Grids: A Meta-Learning Approach [0.18641315013048293]
This paper proposes adapting an established model-agnostic meta-learning algorithm for short-term load forecasting.
The proposed method can rapidly adapt and generalize within any unknown load time series of arbitrary length.
The proposed model is evaluated using a dataset of historical load consumption data from real-world consumers.
arXiv Detail & Related papers (2024-06-09T18:59:08Z)
- Variational Inference of Parameters in Opinion Dynamics Models [9.51311391391997]
This work uses variational inference to estimate the parameters of an opinion dynamics ABM.
We transform the inference process into an optimization problem suitable for automatic differentiation.
Our approach estimates both macroscopic (bounded confidence intervals and backfire thresholds) and microscopic ($200$ categorical, agent-level roles) more accurately than simulation-based and MCMC methods.
arXiv Detail & Related papers (2024-03-08T14:45:18Z)
- Toward Theoretical Guidance for Two Common Questions in Practical Cross-Validation based Hyperparameter Selection [72.76113104079678]
We show the first theoretical treatments of two common questions in cross-validation based hyperparameter selection.
We show that these generalizations can, respectively, always perform at least as well as always performing retraining or never performing retraining.
arXiv Detail & Related papers (2023-01-12T16:37:12Z)
- Self-learning locally-optimal hypertuning using maximum entropy, and comparison of machine learning approaches for estimating fatigue life in composite materials [0.0]
We develop an ML nearest-neighbors-alike algorithm based on the principle of maximum entropy to predict fatigue damage.
The predictions achieve a good level of accuracy, similar to other ML algorithms.
arXiv Detail & Related papers (2022-10-19T12:20:07Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Machine Learning based Framework for Robust Price-Sensitivity Estimation with Application to Airline Pricing [20.5282398019991]
We consider the problem of dynamic pricing of a product in the presence of feature-dependent price sensitivity.
We construct a flexible yet interpretable demand model where the price related part is parametric.
The remaining (nuisance) part of the model is non-parametric and can be modeled via sophisticated ML techniques.
arXiv Detail & Related papers (2022-05-04T03:35:12Z)
- Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
arXiv Detail & Related papers (2021-12-29T23:37:06Z)
- Variational Inference with NoFAS: Normalizing Flow with Adaptive Surrogate for Computationally Expensive Models [7.217783736464403]
Use of sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive.
New approaches combining variational inference with normalizing flow are characterized by a computational cost that grows only linearly with the dimensionality of the latent variable space.
We propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternatively updates the normalizing flow parameters and the weights of a neural network surrogate model.
arXiv Detail & Related papers (2021-08-28T14:31:45Z)
- On the Sparsity of Neural Machine Translation Models [65.49762428553345]
We investigate whether redundant parameters can be reused to achieve better performance.
Experiments and analyses are systematically conducted on different datasets and NMT architectures.
arXiv Detail & Related papers (2020-10-06T11:47:20Z)
- Machine learning for causal inference: on the use of cross-fit estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage (a generic cross-fitting sketch follows this list).
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
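As a companion to the last entry above, here is a hedged sketch of a generic cross-fit doubly-robust (AIPW) estimate of the ACE on simulated data. The nuisance learners, fold count, and data-generating process are assumptions chosen for illustration and do not reproduce that paper's simulation study.

```python
# Generic cross-fit AIPW estimator of the average causal effect (ACE) on
# simulated data; models and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))
e_true = 1.0 / (1.0 + np.exp(-X[:, 0]))       # true propensity score
A = rng.binomial(1, e_true)                   # binary treatment
Y = 2.0 * A + X[:, 1] + rng.normal(size=n)    # outcome, true ACE = 2

scores = np.zeros(n)
for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Fit nuisance models on one part of the data ...
    ps = RandomForestClassifier(random_state=0).fit(X[tr], A[tr])
    m1 = RandomForestRegressor(random_state=0).fit(X[tr][A[tr] == 1], Y[tr][A[tr] == 1])
    m0 = RandomForestRegressor(random_state=0).fit(X[tr][A[tr] == 0], Y[tr][A[tr] == 0])
    # ... and evaluate the doubly-robust (AIPW) score on the held-out part.
    e = np.clip(ps.predict_proba(X[te])[:, 1], 0.01, 0.99)
    mu1, mu0 = m1.predict(X[te]), m0.predict(X[te])
    a, y = A[te], Y[te]
    scores[te] = mu1 - mu0 + a * (y - mu1) / e - (1 - a) * (y - mu0) / (1 - e)

ace = scores.mean()
se = scores.std(ddof=1) / np.sqrt(n)
print(f"cross-fit AIPW ACE: {ace:.3f} (95% CI +/- {1.96 * se:.3f})")
```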