Automated Few-Shot Time Series Forecasting based on Bi-level Programming
- URL: http://arxiv.org/abs/2203.03328v1
- Date: Mon, 7 Mar 2022 12:15:14 GMT
- Authors: Jiangjiao Xu, Ke Li
- Abstract summary: This paper develops a BiLO-Auto-TSF/ML framework that automates the optimal design of a few-shot learning pipeline from a bi-level programming perspective.
Comprehensive experiments fully demonstrate the effectiveness of our proposed BiLO-Auto-TSF/ML framework.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: New micro-grid designs with renewable energy sources and battery storage
systems can help reduce greenhouse gas emissions and operational
costs. To provide effective short- and long-term forecasting of both energy
generation and load demand, time series predictive modeling has been one of the
key tools to guide the optimal decision-making for planning and operation. One
of the critical challenges of time series renewable energy forecasting is the
lack of historical data to train an adequate predictive model. Moreover, the
performance of a machine learning model is sensitive to the choice of its
corresponding hyperparameters. Bearing these considerations in mind, this paper
develops a BiLO-Auto-TSF/ML framework that automates the optimal design of a
few-shot learning pipeline from a bi-level programming perspective.
Specifically, the lower-level meta-learning helps boost the base-learner to
mitigate the small data challenge while the hyperparameter optimization at the
upper level proactively searches for the optimal hyperparameter configurations
for both base- and meta-learners. The proposed framework is general enough that
any off-the-shelf machine learning method can be used in a plug-in
manner. Comprehensive experiments demonstrate the effectiveness of the
proposed BiLO-Auto-TSF/ML framework in finding high-performance few-shot
learning pipelines for various energy sources.
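The bi-level structure described above can be sketched in a few lines. This is a minimal illustrative toy, not the paper's code: the AR(1) "base-learner", the function names, and the random-search upper level are all assumptions standing in for the framework's actual meta-learners and hyperparameter optimizer.

```python
import random

def fit_ar1(series, lr, epochs):
    """Lower level (toy base-learner): fit y[t] ~ a*y[t-1] by gradient descent."""
    a = 0.0
    n = len(series) - 1
    for _ in range(epochs):
        grad = sum((a * series[t - 1] - series[t]) * series[t - 1]
                   for t in range(1, len(series))) / n
        a -= lr * grad
    return a

def val_loss(a, series):
    """Mean squared one-step-ahead forecast error on a held-out series."""
    n = len(series) - 1
    return sum((a * series[t - 1] - series[t]) ** 2
               for t in range(1, len(series))) / n

def bilevel_search(train, val, n_trials=30, seed=0):
    """Upper level: random search over the lower-level training hyperparameters,
    scored by validation loss (a stand-in for the paper's HPO component)."""
    rng = random.Random(seed)
    best_loss, best_cfg = float("inf"), None
    for _ in range(n_trials):
        cfg = {"lr": 10 ** rng.uniform(-3, -1), "epochs": rng.randint(5, 50)}
        a = fit_ar1(train, **cfg)   # lower-level training with this config
        loss = val_loss(a, val)     # upper-level objective
        if loss < best_loss:
            best_loss, best_cfg = loss, cfg
    return best_loss, best_cfg

# Few-shot toy series: y[t] = 0.8*y[t-1] + small noise (21 points total)
series = [1.0]
r = random.Random(1)
for _ in range(20):
    series.append(0.8 * series[-1] + 0.01 * r.uniform(-1, 1))
loss, cfg = bilevel_search(series[:12], series[11:])
```

Any off-the-shelf learner could replace `fit_ar1` in the same plug-in manner the abstract describes; the upper level only sees a configuration in and a validation loss out.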
Related papers
- Optimization Hyper-parameter Laws for Large Language Models [56.322914260197734]
We present Opt-Laws, a framework that captures the relationship between hyperparameters and training outcomes.
Our validation across diverse model sizes and data scales demonstrates Opt-Laws' ability to accurately predict training loss.
This approach significantly reduces computational costs while enhancing overall model performance.
arXiv Detail & Related papers (2024-09-07T09:37:19Z)
- ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections [59.839926875976225]
We propose the ETHER transformation family, which performs Efficient fineTuning via HypErplane Reflections.
In particular, we introduce ETHER and its relaxation ETHER+, which match or outperform existing PEFT methods with significantly fewer parameters.
arXiv Detail & Related papers (2024-05-30T17:26:02Z)
- When Parameter-efficient Tuning Meets General-purpose Vision-language Models [65.19127815275307]
PETAL revolutionizes the training process by requiring only 0.5% of the total parameters, achieved through a unique mode approximation technique.
Our experiments reveal that PETAL not only outperforms current state-of-the-art methods in most scenarios but also surpasses full fine-tuning models in effectiveness.
arXiv Detail & Related papers (2023-12-16T17:13:08Z)
- auto-sktime: Automated Time Series Forecasting [18.640815949661903]
We introduce auto-sktime, a novel framework for automated time series forecasting.
The proposed framework uses the power of automated machine learning (AutoML) techniques to automate the creation of the entire forecasting pipeline.
Experimental results on 64 diverse real-world time series datasets demonstrate the effectiveness and efficiency of the framework.
arXiv Detail & Related papers (2023-12-13T21:34:30Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Multi-step Planning for Automated Hyperparameter Optimization with OptFormer [29.358188163138173]
We build on the recently proposed OptFormer model to make planning via rollouts simple and efficient.
We conduct extensive exploration of different strategies for performing multi-step planning on top of the OptFormer model to highlight its potential for use in constructing non-myopic HPO strategies.
arXiv Detail & Related papers (2022-10-10T19:07:59Z)
- AI-based Optimal Scheduling of Renewable AC Microgrids with Bidirectional LSTM-Based Wind Power Forecasting [5.039813366558306]
This paper proposes an effective framework for optimal scheduling of microgrids considering energy storage devices, wind turbines, and micro-turbines.
A deep learning model based on bidirectional long short-term memory is proposed to address the short-term wind power forecasting problem.
Results show the effective and efficient performance of the proposed framework in the optimal scheduling of microgrids.
arXiv Detail & Related papers (2022-07-08T14:40:31Z)
- Real-time Forecast Models for TBM Load Parameters Based on Machine Learning Methods [6.247628933072029]
In this paper, based on in-situ TBM operational data, we use machine-learning (ML) methods to build real-time forecast models for TBM load parameters.
To decrease model complexity and improve generalization, we also apply the least absolute shrinkage and selection operator (Lasso) method to extract the essential features of the forecast task.
arXiv Detail & Related papers (2021-04-12T07:31:39Z)
- Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration [130.89746032163106]
We propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data.
We show that the energy function and sampler can be trained efficiently via a new variational form of power iteration.
We present an energy-model-guided fuzzer for software testing that achieves performance comparable to well-engineered fuzzing engines like libFuzzer.
arXiv Detail & Related papers (2020-11-10T19:31:29Z)
- Particle Swarm Optimized Federated Learning For Industrial IoT and Smart City Services [9.693848515371268]
We propose a Particle Swarm Optimization (PSO)-based technique to optimize the hyperparameter settings for the local Machine Learning models.
We evaluate the performance of our proposed technique using two case studies.
arXiv Detail & Related papers (2020-09-05T16:20:47Z)
- Multi-Agent Meta-Reinforcement Learning for Self-Powered and Sustainable Edge Computing Systems [87.4519172058185]
An effective energy dispatch mechanism for self-powered wireless networks with edge computing capabilities is studied.
A novel multi-agent meta-reinforcement learning (MAMRL) framework is proposed to solve the formulated problem.
Experimental results show that the proposed MAMRL model can reduce non-renewable energy usage by up to 11% and energy cost by 22.4%.
arXiv Detail & Related papers (2020-02-20T04:58:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.