A Meta-learning based Distribution System Load Forecasting Model
Selection Framework
- URL: http://arxiv.org/abs/2009.12001v2
- Date: Thu, 15 Apr 2021 21:14:06 GMT
- Title: A Meta-learning based Distribution System Load Forecasting Model
Selection Framework
- Authors: Yiyan Li, Si Zhang, Rongxing Hu, Ning Lu
- Abstract summary: The framework includes the following processes: feature extraction, candidate model labeling, offline training, and online model recommendation.
Using user load forecasting needs as input features, multiple meta-learners are used to rank the available load forecast models based on their forecasting accuracy.
A scoring-voting mechanism weights recommendations from each meta-learner to make the final recommendations.
- Score: 6.499433762038562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a meta-learning based, automatic distribution system load
forecasting model selection framework. The framework includes the following
processes: feature extraction, candidate model labeling, offline training, and
online model recommendation. Using user load forecasting needs as input
features, multiple meta-learners are used to rank the available load forecast
models based on their forecasting accuracy. Then, a scoring-voting mechanism
weights recommendations from each meta-learner to make the final
recommendations. Heterogeneous load forecasting tasks with different temporal
and technical requirements at different load aggregation levels are set up to
train, validate, and test the performance of the proposed framework. Simulation
results demonstrate that the performance of the meta-learning based approach is
satisfactory in both seen and unseen forecasting tasks.
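The scoring-voting step described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the Borda-style scoring, the learner names, and the use of validation accuracy as vote weights are all illustrative assumptions about how ranked recommendations from several meta-learners might be combined.

```python
# Hypothetical sketch of a scoring-voting mechanism: each meta-learner ranks
# the candidate forecasting models, and its votes are weighted by its own
# validation accuracy before the per-model scores are summed.

def recommend(rankings, learner_weights):
    """rankings: dict meta-learner -> list of model names, best first.
    learner_weights: dict meta-learner -> validation accuracy in [0, 1].
    Returns candidate models sorted by weighted vote total, best first."""
    scores = {}
    for learner, ranked_models in rankings.items():
        w = learner_weights[learner]
        n = len(ranked_models)
        for position, model in enumerate(ranked_models):
            # Borda-style score: the top-ranked model earns the most points.
            scores[model] = scores.get(model, 0.0) + w * (n - position)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative inputs: two meta-learners disagree on the best model, but the
# more accurate meta-learner's ranking carries more weight.
rankings = {
    "knn_meta": ["svr", "lstm", "arima"],
    "rf_meta":  ["lstm", "svr", "arima"],
}
weights = {"knn_meta": 0.9, "rf_meta": 0.7}
print(recommend(rankings, weights))  # → ['svr', 'lstm', 'arima']
```

Because votes are weighted before summation, a meta-learner that performed poorly during validation cannot overturn the consensus of more reliable ones.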
Related papers
- Design and Scheduling of an AI-based Queueing System [12.763457245603824]
We consider a large queueing system where the class of a job is estimated using a prediction model.
By characterizing the impact of mispredictions on congestion cost in heavy traffic, we design an index-based policy that incorporates the predicted class information in a near-optimal manner.
arXiv Detail & Related papers (2024-06-11T00:01:42Z)
- Deep Neural Network Benchmarks for Selective Classification [27.098996474946446]
Multiple selective classification frameworks exist, most of which rely on deep neural network architectures.
We evaluate these approaches using several criteria, including selective error rate, empirical coverage, the class distribution of rejected instances, and performance on out-of-distribution instances.
arXiv Detail & Related papers (2024-01-23T12:15:47Z)
- On the adaptation of in-context learners for system identification [0.0]
In-context system identification aims at constructing meta-models to describe classes of systems.
We show how meta-model adaptation can enhance predictive performance in three realistic scenarios.
arXiv Detail & Related papers (2023-12-07T06:51:55Z)
- Optimizing accuracy and diversity: a multi-task approach to forecast combinations [0.0]
We present a multi-task optimization paradigm that focuses on solving both problems simultaneously.
It incorporates an additional learning and optimization task into the standard feature-based forecasting approach.
The proposed approach highlights the essential role of diversity in feature-based forecasting.
arXiv Detail & Related papers (2023-10-31T15:26:33Z)
- How to Estimate Model Transferability of Pre-Trained Speech Models? [84.11085139766108]
We propose a "score-based assessment" framework for estimating the transferability of pre-trained speech models (PSMs).
We draw on two representation theories, Bayesian likelihood estimation and optimal transport, to generate rank scores for the PSM candidates.
Our framework efficiently computes transferability scores without actual fine-tuning of candidate models or layers.
arXiv Detail & Related papers (2023-06-01T04:52:26Z)
- PAMI: partition input and aggregate outputs for model interpretation [69.42924964776766]
In this study, a simple yet effective visualization framework called PAMI is proposed, based on the observation that deep learning models often aggregate features from local regions to make predictions.
The basic idea is to mask the majority of the input and use the corresponding model output as the relative contribution of the preserved input part to the original model prediction.
Extensive experiments on multiple tasks confirm that the proposed method outperforms existing visualization approaches in more precisely locating class-specific input regions.
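The masking idea in the PAMI summary can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the patch-based partition, the callable `model` interface, and the per-class output indexing are assumptions.

```python
import numpy as np

# Illustrative sketch of the PAMI idea: preserve one patch at a time, zero out
# the rest of the input, and record the model's output for the target class as
# that patch's relative contribution to the original prediction.

def pami_map(model, image, target_class, patch=8):
    """model: callable taking an image and returning per-class scores.
    Returns a heatmap of the same spatial size as the input."""
    h, w = image.shape[:2]
    contrib = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            masked = np.zeros_like(image)
            # Keep only the current patch; everything else is masked out.
            masked[y:y + patch, x:x + patch] = image[y:y + patch, x:x + patch]
            contrib[y:y + patch, x:x + patch] = model(masked)[target_class]
    return contrib
```

Aggregating the outputs over all patches yields a class-specific attribution map without needing gradients or access to the model's internals.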
arXiv Detail & Related papers (2023-02-07T08:48:34Z)
- fETSmcs: Feature-based ETS model component selection [8.99236558175168]
We propose an efficient approach for ETS model selection by training classifiers on simulated data to predict appropriate model component forms for a given time series.
We evaluate our approach on the widely used forecasting competition data set M4 in terms of both point forecasts and prediction intervals.
arXiv Detail & Related papers (2022-06-26T13:52:43Z)
- What do we expect from Multiple-choice QA Systems? [70.86513724662302]
We consider a top performing model on several Multiple Choice Question Answering (MCQA) datasets.
We evaluate it against a set of expectations one might have from such a model, using a series of zero-information perturbations of the model's inputs.
arXiv Detail & Related papers (2020-11-20T21:27:10Z)
- Few-shot Classification via Adaptive Attention [93.06105498633492]
We propose a novel few-shot learning method via optimizing and fast adapting the query sample representation based on very few reference samples.
As demonstrated experimentally, the proposed model achieves state-of-the-art classification results on various benchmark few-shot classification and fine-grained recognition datasets.
arXiv Detail & Related papers (2020-08-06T05:52:59Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
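The confidence-weighted prototype update described above can be sketched as follows. This is a hedged illustration: the softmax-over-distances confidences stand in for the paper's meta-learned confidence network, and the 50/50 blend with the old prototypes is an arbitrary choice for demonstration.

```python
import numpy as np

# Sketch of a transductive prototype update: each class prototype is refined
# with a confidence-weighted mean of the unlabeled query embeddings. Here the
# per-query confidences come from a softmax over negative squared distances;
# in the summarized paper these weights are meta-learned instead.

def update_prototypes(prototypes, queries, temperature=1.0):
    """prototypes: (n_classes, dim) array; queries: (n_queries, dim) array."""
    # Squared Euclidean distance from every query to every prototype.
    d = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    conf = np.exp(-d / temperature)
    conf /= conf.sum(axis=1, keepdims=True)  # per-query class confidences
    # Confidence-weighted mean of the queries assigned to each class.
    weighted = conf.T @ queries / (conf.sum(axis=0)[:, None] + 1e-12)
    # Blend the old prototype with the query-derived estimate (arbitrary 0.5).
    return 0.5 * (prototypes + weighted)
```

Weighting queries by confidence lets reliable unlabeled examples sharpen the prototypes while down-weighting ambiguous ones.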
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
- Meta-learning framework with applications to zero-shot time-series forecasting [82.61728230984099]
This work provides positive evidence using a broad meta-learning framework; residual connections act as a meta-learning adaptation mechanism.
We show that it is viable to train a neural network on a source TS dataset and deploy it on a different target TS dataset without retraining.
arXiv Detail & Related papers (2020-02-07T16:39:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.