Evolutionary Ensemble Learning for Multivariate Time Series Prediction
- URL: http://arxiv.org/abs/2108.09659v1
- Date: Sun, 22 Aug 2021 07:36:25 GMT
- Title: Evolutionary Ensemble Learning for Multivariate Time Series Prediction
- Authors: Hui Song, A. K. Qin, Flora D. Salim
- Abstract summary: A typical pipeline of building an MTS prediction model (PM) consists of selecting a subset of channels among all available ones, extracting features from the selected channels, and building a PM based on those features.
We propose a novel evolutionary ensemble learning framework to optimize the entire pipeline in a holistic manner.
We implement the proposed framework and evaluate our implementation on two real-world applications, i.e., electricity consumption prediction and air quality prediction.
- Score: 6.736731623634526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multivariate time series (MTS) prediction plays a key role in many fields
such as finance, energy and transport, where each individual time series
corresponds to the data collected from a certain data source, so-called
channel. A typical pipeline of building an MTS prediction model (PM) consists
of selecting a subset of channels among all available ones, extracting features
from the selected channels, and building a PM based on the extracted features,
where each component involves certain optimization tasks, i.e., selection of
channels, feature extraction (FE) methods, and PMs as well as configuration of
the selected FE method and PM. Accordingly, pursuing the best prediction
performance corresponds to optimizing the pipeline by solving all of its
involved optimization problems. This is a non-trivial task due to the vastness
of the solution space. Different from most of the existing works which target
at optimizing certain components of the pipeline, we propose a novel
evolutionary ensemble learning framework to optimize the entire pipeline in a
holistic manner. In this framework, a specific pipeline is encoded as a
candidate solution and a multi-objective evolutionary algorithm is applied
under different population sizes to produce multiple Pareto optimal sets
(POSs). Finally, selective ensemble learning is designed to choose the optimal
subset of solutions from the POSs and combine them to yield final prediction by
using greedy sequential selection and least square methods. We implement the
proposed framework and evaluate our implementation on two real-world
applications, i.e., electricity consumption prediction and air quality
prediction. The performance comparison with state-of-the-art techniques
demonstrates the superiority of the proposed approach.
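The selective-ensemble step described in the abstract (greedy sequential selection of Pareto-set solutions, then a least-squares combination of their predictions) can be sketched as follows. This is a minimal illustration under assumed names and shapes, not the authors' implementation.

```python
import numpy as np

def greedy_selective_ensemble(preds, y, max_members=5):
    """preds: (n_models, n_samples) validation predictions; y: (n_samples,) targets.
    Greedily add the model whose inclusion most reduces the least-squares
    validation error, stopping when no candidate improves the ensemble."""
    chosen, remaining = [], list(range(preds.shape[0]))
    best_err = np.inf
    while remaining and len(chosen) < max_members:
        errs = []
        for m in remaining:
            P = preds[chosen + [m]].T                      # (n_samples, k)
            w, *_ = np.linalg.lstsq(P, y, rcond=None)      # least-squares weights
            errs.append(np.mean((P @ w - y) ** 2))
        i = int(np.argmin(errs))
        if errs[i] >= best_err:                            # no improvement: stop
            break
        best_err = errs[i]
        chosen.append(remaining.pop(i))
    # final least-squares combination weights for the selected subset
    P = preds[chosen].T
    w, *_ = np.linalg.lstsq(P, y, rcond=None)
    return chosen, w

# usage: combine three noisy predictors of a common signal
rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 6, 200))
preds = np.stack([y + 0.1 * rng.standard_normal(200) for _ in range(3)])
members, weights = greedy_selective_ensemble(preds, y)
```

Because the least-squares fit over any chosen subset can always reproduce a single member with weight 1, the combined validation error never exceeds that of the first selected model.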
Related papers
- QMetro++ -- Python optimization package for large scale quantum metrology with customized strategy structures [0.0]
QMetro++ is a Python package dedicated to identifying optimal estimation protocols.
The package comes with an implementation of the recently developed methods for computing fundamental upper bounds on QFI.
arXiv Detail & Related papers (2025-06-19T18:13:22Z)
- Subset Selection for Fine-Tuning: A Utility-Diversity Balanced Approach for Mathematical Domain Adaptation [0.0]
We propose a refined approach to efficiently fine-tune large language models (LLMs) on specific domains like the mathematical domain.
Our approach combines utility and diversity metrics to select the most informative and representative training examples.
arXiv Detail & Related papers (2025-05-02T18:20:44Z)
- SPIO: Ensemble and Selective Strategies via LLM-Based Multi-Agent Planning in Automated Data Science [1.1343849658875087]
Large Language Models (LLMs) have revolutionized automated data analytics and machine learning by enabling dynamic reasoning and adaptability.
We propose SPIO, a novel framework that orchestrates multi-agent planning across four key modules.
In each module, dedicated planning agents independently generate candidate strategies that cascade into subsequent stages, fostering comprehensive exploration.
arXiv Detail & Related papers (2025-03-30T04:45:32Z)
- Preference-Guided Diffusion for Multi-Objective Offline Optimization [64.08326521234228]
We propose a preference-guided diffusion model for offline multi-objective optimization.
Our guidance is a preference model trained to predict the probability that one design dominates another.
Our results highlight the effectiveness of classifier-guided diffusion models in generating diverse and high-quality solutions.
arXiv Detail & Related papers (2025-03-21T16:49:38Z)
- ParetoFlow: Guided Flows in Multi-Objective Optimization [12.358524770639136]
In offline multi-objective optimization (MOO), we leverage an offline dataset of designs and their associated labels to simultaneously optimize multiple objectives.
Recent approaches mainly employ evolutionary and Bayesian optimization, with limited attention given to the generative capabilities inherent in the data.
Our method achieves state-of-the-art performance across various tasks.
arXiv Detail & Related papers (2024-12-04T21:14:18Z)
- Learning Submodular Sequencing from Samples [11.528995186765751]
This paper addresses the problem of selecting and ranking items in a sequence to optimize some composite submodular function.
We present an algorithm that achieves an approximation ratio dependent on the curvature of the individual submodular functions.
arXiv Detail & Related papers (2024-09-09T01:33:13Z)
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- A Refreshed Similarity-based Upsampler for Direct High-Ratio Feature Upsampling [54.05517338122698]
A popular similarity-based feature upsampling pipeline has been proposed, which utilizes a high-resolution feature as guidance.
We propose an explicitly controllable query-key feature alignment from both semantic-aware and detail-aware perspectives.
We develop a fine-grained neighbor selection strategy on HR features, which is simple yet effective for alleviating mosaic artifacts.
arXiv Detail & Related papers (2024-07-02T14:12:21Z)
- Column and row subset selection using nuclear scores: algorithms and theory for Nyström approximation, CUR decomposition, and graph Laplacian reduction [0.0]
We develop unified methodologies for fast, efficient, and theoretically guaranteed column selection.
First, we derive and implement a sparsity-exploiting deterministic algorithm applicable to tasks including kernel approximation and CUR decomposition.
Next, we develop a matrix-free formalism relying on a randomization scheme satisfying guaranteed concentration bounds.
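As a point of reference for the column-selection task this entry addresses, a generic greedy baseline (Gram-Schmidt deflation with column pivoting, not the paper's nuclear-score method) can be sketched as follows; all names are illustrative assumptions.

```python
import numpy as np

def greedy_column_select(A, k):
    """Pick up to k column indices of A by largest residual norm,
    deflating the residual after each choice (pivoted Gram-Schmidt)."""
    R = A.astype(float).copy()
    tol = 1e-8 * np.linalg.norm(A)       # stop once columns are numerically spanned
    chosen = []
    for _ in range(k):
        norms = np.linalg.norm(R, axis=0)
        j = int(np.argmax(norms))
        if norms[j] < tol:
            break
        chosen.append(j)
        q = R[:, j] / norms[j]
        R -= np.outer(q, q @ R)          # remove the component along the chosen column
    return chosen

# usage: on a rank-2 matrix, the selected columns span the column space
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 8))
idx = greedy_column_select(A, 3)
```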
arXiv Detail & Related papers (2024-07-01T18:10:19Z)
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z)
- Random Aggregate Beamforming for Over-the-Air Federated Learning in Large-Scale Networks [66.18765335695414]
We consider a joint device selection and aggregate beamforming design with the objectives of minimizing the aggregate error and maximizing the number of selected devices.
To tackle the problems in a cost-effective manner, we propose a random aggregate beamforming-based scheme.
We additionally analyze the obtained aggregate error and the number of selected devices as the number of devices becomes large.
arXiv Detail & Related papers (2024-02-20T23:59:45Z)
- Embedded feature selection in LSTM networks with multi-objective evolutionary ensemble learning for time series forecasting [49.1574468325115]
We present a novel feature selection method embedded in Long Short-Term Memory networks.
Our approach optimizes the weights and biases of the LSTM in a partitioned manner.
Experimental evaluations on air quality time series data from Italy and southeast Spain demonstrate that our method substantially improves the generalization ability of conventional LSTMs.
arXiv Detail & Related papers (2023-12-29T08:42:10Z)
- Benchmarking PtO and PnO Methods in the Predictive Combinatorial Optimization Regime [59.27851754647913]
Predictive combinatorial optimization precisely models many real-world applications, including energy cost-aware scheduling and budget allocation in advertising.
We develop a modular framework to benchmark 11 existing PtO/PnO methods on 8 problems, including a new industrial dataset for advertising.
Our study shows that PnO approaches are better than PtO on 7 out of 8 benchmarks, but there is no silver bullet found for the specific design choices of PnO.
arXiv Detail & Related papers (2023-11-13T13:19:34Z)
- Optimizing accuracy and diversity: a multi-task approach to forecast combinations [0.0]
We present a multi-task optimization paradigm that focuses on solving both problems simultaneously.
It incorporates an additional learning and optimization task into the standard feature-based forecasting approach.
The proposed approach highlights the essential role of diversity in feature-based forecasting.
arXiv Detail & Related papers (2023-10-31T15:26:33Z)
- Evolutionary Solution Adaption for Multi-Objective Metal Cutting Process Optimization [59.45414406974091]
We introduce a framework for system flexibility that allows us to study the ability of an algorithm to transfer solutions from previous optimization tasks.
We study the flexibility of NSGA-II, which we extend by two variants: 1) varying goals, that optimize solutions for two tasks simultaneously to obtain in-between source solutions expected to be more adaptable, and 2) active-inactive genotype, that accommodates different possibilities that can be activated or deactivated.
Results show that adaptation with standard NSGA-II greatly reduces the number of evaluations required to reach a target goal, while the proposed variants further reduce the adaptation costs.
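The "active-inactive genotype" idea summarized above can be illustrated with a toy encoding in which each gene carries a value plus an activation bit, so variation operators can switch options on or off when a solution is transferred to a new task. This is a hedged sketch for illustration, not the paper's code; all names are assumptions.

```python
import random

def random_genotype(n):
    """Each gene holds a real-valued parameter and an activation flag."""
    return [{"value": random.uniform(0, 1), "active": random.random() < 0.5}
            for _ in range(n)]

def decode(genotype):
    # only active genes contribute to the phenotype evaluated on the task
    return [g["value"] for g in genotype if g["active"]]

def mutate(genotype, p=0.1):
    out = []
    for g in genotype:
        g = dict(g)
        if random.random() < p:
            g["active"] = not g["active"]   # toggle availability of this option
        if random.random() < p:
            g["value"] = random.uniform(0, 1)
        out.append(g)
    return out

# usage: mutate a genotype; inactive genes stay encoded but do not decode
random.seed(1)
g = random_genotype(6)
child = mutate(g, p=0.5)
```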
arXiv Detail & Related papers (2023-05-31T12:07:50Z)
- Batched Data-Driven Evolutionary Multi-Objective Optimization Based on Manifold Interpolation [6.560512252982714]
We propose a framework for implementing batched data-driven evolutionary multi-objective optimization.
It is general enough that any off-the-shelf evolutionary multi-objective optimization algorithm can be applied in a plug-in manner.
Our proposed framework features faster convergence and stronger resilience to various PF shapes.
arXiv Detail & Related papers (2021-09-12T23:54:26Z)
- Joint Adaptive Graph and Structured Sparsity Regularization for Unsupervised Feature Selection [6.41804410246642]
We propose a joint adaptive graph and structured sparsity regularization unsupervised feature selection (JASFS) method.
A subset of optimal features will be selected in groups, and the number of selected features will be determined automatically.
Experimental results on eight benchmarks demonstrate the effectiveness and efficiency of the proposed method.
arXiv Detail & Related papers (2020-10-09T08:17:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.