DJEnsemble: On the Selection of a Disjoint Ensemble of Deep Learning
Black-Box Spatio-Temporal Models
- URL: http://arxiv.org/abs/2005.11093v3
- Date: Tue, 17 Nov 2020 15:56:46 GMT
- Title: DJEnsemble: On the Selection of a Disjoint Ensemble of Deep Learning
Black-Box Spatio-Temporal Models
- Authors: Yania Molina Souto, Rafael Pereira, Rocío Zorrilla, Anderson Chaves,
Brian Tsan, Florin Rusu, Eduardo Ogasawara, Artur Ziviani, Fabio Porto
- Abstract summary: We present a cost-based approach for the automatic selection and allocation of a disjoint ensemble of black-box predictors.
We show that our cost model produces plans whose performance is close to that of the actual best plan.
- Score: 0.8347559086129669
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present a cost-based approach for the automatic selection
and allocation of a disjoint ensemble of black-box predictors to answer
predictive spatio-temporal queries. Our approach is divided into two parts --
offline and online. During the offline part, we preprocess the predictive
domain data -- transforming it into a regular grid -- and the black-box models
-- computing their spatio-temporal learning function. In the online part, we
compute a DJEnsemble plan which minimizes a multivariate cost function based on
estimates for the prediction error and the execution cost -- producing a model
spatial allocation matrix -- and run the optimal ensemble plan. We conduct
extensive experiments that evaluate the DJEnsemble approach and highlight its
efficiency. We show that our cost model produces plans whose performance is
close to that of the actual best plan. Compared against the traditional
ensemble approach, DJEnsemble achieves up to a $4\times$ improvement in
execution time and almost a $9\times$ improvement in prediction accuracy. To the best
of our knowledge, this is the first work to solve the problem of optimizing the
allocation of black-box models to answer predictive spatio-temporal queries.
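To make the online step concrete, the sketch below illustrates a DJEnsemble-style plan search in Python. The grid size, model names, per-tile error estimates, execution costs, and the additive cost function are all illustrative assumptions, not the paper's actual cost model or implementation.

```python
import numpy as np

# A minimal, self-contained sketch of a DJEnsemble-style plan search.
# All names, numbers, and the exact cost function are assumptions made
# for illustration; they are not taken from the paper.

rng = np.random.default_rng(0)
GRID = (4, 4)  # query region discretized into a 4x4 tile grid
MODELS = ["convlstm_a", "convlstm_b", "stconv_c"]  # hypothetical black-box models

# Offline step (assumed): per-tile prediction-error estimates, standing in
# for the error predicted by each model's spatio-temporal learning function.
error_est = {m: rng.uniform(0.5, 3.0, size=GRID) for m in MODELS}

# Assumed fixed per-tile execution cost for each model (arbitrary units).
exec_cost = {"convlstm_a": 0.8, "convlstm_b": 1.5, "stconv_c": 0.3}

LAMBDA = 0.5  # assumed trade-off weight between error and execution cost

def plan_cost(model, tile):
    """Multivariate cost: estimated prediction error plus weighted execution cost."""
    return error_est[model][tile] + LAMBDA * exec_cost[model]

# Online step (simplified): for every tile, pick the model minimizing the
# cost, producing the model spatial allocation matrix. Because this toy
# cost is additive and independent across tiles, the tile-wise argmin is
# already the optimal plan.
allocation = np.empty(GRID, dtype=object)
total = 0.0
for i in range(GRID[0]):
    for j in range(GRID[1]):
        best = min(MODELS, key=lambda m: plan_cost(m, (i, j)))
        allocation[i, j] = best
        total += plan_cost(best, (i, j))

print(allocation)
print(f"estimated plan cost: {total:.2f}")
```

The paper's cost model and plan enumeration are richer than this per-tile argmin; the sketch only shows the shape of the decision: trade estimated prediction error against execution cost when allocating black-box models over a spatial grid.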
Related papers
- Diffusion Model for Data-Driven Black-Box Optimization [54.25693582870226]
We focus on diffusion models, a powerful generative AI technology, and investigate their potential for black-box optimization.
We study two practical types of labels: 1) noisy measurements of a real-valued reward function and 2) human preference based on pairwise comparisons.
Our proposed method reformulates the design optimization problem into a conditional sampling problem, which allows us to leverage the power of diffusion models.
arXiv Detail & Related papers (2024-03-20T00:41:12Z)
- Experiment Planning with Function Approximation [49.50254688629728]
We study the problem of experiment planning with function approximation in contextual bandit problems.
We propose two experiment planning strategies compatible with function approximation.
We show that a uniform sampler achieves competitive optimality rates in the setting where the number of actions is small.
arXiv Detail & Related papers (2024-01-10T14:40:23Z)
- Constrained Online Two-stage Stochastic Optimization: Algorithm with (and without) Predictions [19.537289123577022]
We consider an online two-stage optimization with long-term constraints over a finite horizon of $T$ periods.
We develop online algorithms for the online two-stage problem from adversarial learning algorithms.
arXiv Detail & Related papers (2024-01-02T07:46:33Z)
- Optimizing accuracy and diversity: a multi-task approach to forecast combinations [0.0]
We present a multi-task optimization paradigm that focuses on solving both problems simultaneously.
It incorporates an additional learning and optimization task into the standard feature-based forecasting approach.
The proposed approach elicits the essential role of diversity in feature-based forecasting.
arXiv Detail & Related papers (2023-10-31T15:26:33Z)
- Constrained Online Two-stage Stochastic Optimization: Near Optimal Algorithms via Adversarial Learning [1.994307489466967]
We consider an online two-stage optimization with long-term constraints over a finite horizon of $T$ periods.
We develop online algorithms for the online two-stage problem from adversarial learning algorithms.
arXiv Detail & Related papers (2023-02-02T10:33:09Z)
- MILO: Model-Agnostic Subset Selection Framework for Efficient Model Training and Tuning [68.12870241637636]
We propose MILO, a model-agnostic subset selection framework that decouples the subset selection from model training.
Our empirical results indicate that MILO can train models $3\times$-$10\times$ faster and tune hyperparameters $20\times$-$75\times$ faster than full-dataset training or tuning, without compromising performance.
arXiv Detail & Related papers (2023-01-30T20:59:30Z)
- Smoothed Online Combinatorial Optimization Using Imperfect Predictions [27.201074566335222]
We study smoothed online optimization problems when an imperfect predictive model is available.
We show that using predictions to plan for a finite time horizon leads to regret dependent on the total predictive uncertainty and an additional switching cost.
Our algorithm shows a significant improvement in cumulative regret compared to other baselines in synthetic online distributed streaming problems.
arXiv Detail & Related papers (2022-04-23T02:30:39Z)
- Markdowns in E-Commerce Fresh Retail: A Counterfactual Prediction and Multi-Period Optimization Approach [29.11201102550876]
We build a semi-parametric structural model to learn individual price elasticity and predict counterfactual demand.
We propose a multi-period dynamic pricing algorithm to maximize the overall profit of a perishable product over its finite selling horizon.
The proposed framework has been successfully deployed in Freshippo, a well-known e-commerce fresh retail platform.
arXiv Detail & Related papers (2021-05-18T07:01:37Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Fast Rates for Contextual Linear Optimization [52.39202699484225]
We show that a naive plug-in approach achieves regret convergence rates that are significantly faster than methods that directly optimize downstream decision performance.
Our results are overall positive for practice: predictive models are easy and fast to train using existing tools, simple to interpret, and, as we show, lead to decisions that perform very well.
arXiv Detail & Related papers (2020-11-05T18:43:59Z)
- Stepwise Model Selection for Sequence Prediction via Deep Kernel Learning [100.83444258562263]
We propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection in this setting.
In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions.
We are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose.
arXiv Detail & Related papers (2020-01-12T09:42:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.