Optimizing model-agnostic Random Subspace ensembles
- URL: http://arxiv.org/abs/2109.03099v1
- Date: Tue, 7 Sep 2021 13:58:23 GMT
- Title: Optimizing model-agnostic Random Subspace ensembles
- Authors: Vân Anh Huynh-Thu and Pierre Geurts
- Abstract summary: We present a model-agnostic ensemble approach for supervised learning.
The proposed approach alternates between learning an ensemble of models using a parametric version of the Random Subspace approach, in which feature subsets are sampled from Bernoulli distributions, and optimizing the Bernoulli parameters to minimize the generalization error of the ensemble.
We show the good performance of the proposed approach, both in terms of prediction and feature ranking, on simulated and real-world datasets.
- Score: 5.680512932725364
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a model-agnostic ensemble approach for supervised
learning. The proposed approach alternates between (1) learning an ensemble of
models using a parametric version of the Random Subspace approach, in which
feature subsets are sampled according to Bernoulli distributions, and (2)
identifying the parameters of the Bernoulli distributions that minimize the
generalization error of the ensemble model. Parameter optimization is rendered
tractable by using an importance sampling approach able to estimate the
expected model output for any given parameter set, without the need to learn
new models. While the degree of randomization is controlled by a
hyper-parameter in standard Random Subspace, it has the advantage of being
automatically tuned in our parametric version. Furthermore, model-agnostic
feature importance scores can be easily derived from the trained ensemble
model. We show the good performance of the proposed approach, both in terms of
prediction and feature ranking, on simulated and real-world datasets. We also
show that our approach can be successfully used for the reconstruction of gene
regulatory networks.
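To make the mechanics concrete, below is a minimal sketch of the two ingredients the abstract describes: sampling feature subsets from Bernoulli distributions, and reusing the trained models via self-normalized importance sampling to estimate the expected ensemble output under new Bernoulli parameters without retraining. The base learner (a scikit-learn decision tree), the function names, and the toy data are illustrative assumptions, not the paper's code.

```python
# Sketch of parametric Random Subspace with importance-sampling reuse.
# Assumptions: decision-tree base learners, toy regression data.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def fit_ensemble(X, y, p0, n_models=100):
    """Train each model on a feature subset drawn from Bernoulli(p0)."""
    models, masks = [], []
    for _ in range(n_models):
        mask = rng.random(X.shape[1]) < p0            # z ~ Bernoulli(p0)
        if not mask.any():                            # avoid an empty subset
            mask[rng.integers(X.shape[1])] = True
        models.append(DecisionTreeRegressor(max_depth=3).fit(X[:, mask], y))
        masks.append(mask)
    return models, np.array(masks)

def is_predict(models, masks, p0, p, X):
    """Estimate the expected ensemble output under Bernoulli(p) by
    importance-weighting models that were trained under Bernoulli(p0)."""
    logw = (masks * np.log(p) + (~masks) * np.log(1 - p)).sum(axis=1)
    logw -= (masks * np.log(p0) + (~masks) * np.log(1 - p0)).sum(axis=1)
    w = np.exp(logw - logw.max())
    w /= w.sum()                                      # self-normalized weights
    preds = np.array([m.predict(X[:, z]) for m, z in zip(models, masks)])
    return w @ preds

# Toy data: only the first 2 of 10 features carry signal.
X = rng.normal(size=(300, 10))
y = X[:, 0] + 2 * X[:, 1]
models, masks = fit_ensemble(X[:200], y[:200], p0=0.5)
for p_signal in (0.2, 0.8):                           # candidate parameters
    p = np.full(10, 0.2)
    p[:2] = p_signal
    mse = np.mean((is_predict(models, masks, 0.5, p, X[200:]) - y[200:]) ** 2)
    print(f"p on signal features = {p_signal}: validation MSE = {mse:.3f}")
```

In the paper the Bernoulli parameters are then optimized against the estimated generalization error; the sketch only evaluates two candidate parameter vectors to show that the importance-sampling estimate favors the informative features, whose learned probabilities double as feature importance scores.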
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z)
- Variational autoencoder with weighted samples for high-dimensional non-parametric adaptive importance sampling [0.0]
We extend the existing framework to the case of weighted samples by introducing a new objective function.
In order to add flexibility to the model and to be able to learn multimodal distributions, we consider a learnable prior distribution.
We exploit the proposed procedure in existing adaptive importance sampling algorithms to draw points from a target distribution and to estimate a rare event probability in high dimension.
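As a rough illustration of the weighted-sample importance sampling idea, here is a self-normalized estimator of a rare-event probability, with a fixed shifted-Gaussian proposal standing in for the paper's learned VAE proposal (an assumption; the proposal and threshold below are made up for the example).

```python
# Self-normalized importance sampling for a rare event P(X > t) under N(0, 1),
# using a proposal shifted toward the rare region.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t = 4.0                                    # rare-event threshold
proposal = stats.norm(loc=t, scale=1.0)    # stand-in for a learned proposal
x = proposal.rvs(size=100_000, random_state=rng)
logw = stats.norm.logpdf(x) - proposal.logpdf(x)   # target over proposal
w = np.exp(logw - logw.max())
w /= w.sum()                               # self-normalized weights
p_hat = np.sum(w * (x > t))
print(f"IS estimate: {p_hat:.3e}, exact: {stats.norm.sf(t):.3e}")
```

In the adaptive setting the entry describes, the proposal would be refit on the weighted samples at each iteration rather than held fixed.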
arXiv Detail & Related papers (2023-10-13T15:40:55Z)
- Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network [59.79008107609297]
We propose in this paper to approximate the joint posterior over both the structure and the parameters of a Bayesian Network.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z)
- Trajectory-oriented optimization of stochastic epidemiological models [0.873811641236639]
Epidemiological models must be calibrated to ground truth for downstream tasks.
We propose a class of Gaussian process (GP) surrogates along with an optimization strategy based on Thompson sampling.
This Trajectory Oriented Optimization (TOO) approach produces actual trajectories close to the empirical observations.
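A generic sketch of GP-surrogate Thompson sampling for one-dimensional calibration follows; the quadratic "simulator loss" is a made-up objective, and the sketch does not reproduce the paper's trajectory-matching specifics.

```python
# Thompson sampling over a GP surrogate: fit a GP to observed (theta, loss)
# pairs, draw one posterior sample on a grid, and run the simulator at its
# minimizer. The loss function below is an illustrative assumption.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def sim_loss(theta):                       # made-up noisy calibration loss
    return (theta - 0.7) ** 2 + 0.01 * rng.normal()

grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
thetas = list(rng.uniform(0.0, 1.0, 3))    # a few random initial runs
losses = [sim_loss(t) for t in thetas]
for _ in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-2)
    gp.fit(np.array(thetas).reshape(-1, 1), losses)
    draw = gp.sample_y(grid, random_state=int(rng.integers(1_000_000)))
    theta = float(grid[draw.ravel().argmin(), 0])  # minimize one draw
    thetas.append(theta)
    losses.append(sim_loss(theta))
print(f"best theta found: {thetas[int(np.argmin(losses))]:.3f}")
```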
arXiv Detail & Related papers (2023-05-06T04:45:49Z)
- Performative Prediction with Bandit Feedback: Learning through Reparameterization [23.039885534575966]
Performative prediction is a framework for studying social prediction in which the data distribution itself changes in response to the deployment of a model.
We develop a reparameterization that expresses the performative prediction objective as a function of the induced data distribution.
arXiv Detail & Related papers (2023-05-01T21:31:29Z)
- Hierarchical Collaborative Hyper-parameter Tuning [0.0]
Hyper-parameter tuning is among the most critical stages in building machine learning solutions.
This paper demonstrates how multi-agent systems can be utilized to develop a distributed technique for determining near-optimal hyper-parameter values.
arXiv Detail & Related papers (2022-05-11T05:16:57Z)
- Learning Structured Gaussians to Approximate Deep Ensembles [10.055143995729415]
This paper proposes using a sparse-structured multivariate Gaussian to provide a closed-form approximator for dense image prediction tasks.
We capture the uncertainty and structured correlations in the predictions explicitly in a formal distribution, rather than implicitly through sampling alone.
We demonstrate the merits of our approach on monocular depth estimation and show that the advantages of our approach are obtained with comparable quantitative performance.
arXiv Detail & Related papers (2022-03-29T12:34:43Z)
- MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
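The standard construction behind such ratio estimators: a classifier trained to separate joint (theta, x) pairs from shuffled (marginal) pairs recovers the likelihood-to-evidence ratio through its predicted odds. Below is a sketch on a toy Gaussian simulator; the simulator, architecture, and test point are illustrative assumptions.

```python
# Amortized likelihood-to-evidence ratio estimation via classification.
# Toy simulator: x ~ N(theta, 1), theta ~ Uniform(-3, 3).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 20_000
theta = rng.uniform(-3, 3, n)                        # prior samples
x = theta + rng.normal(size=n)                       # simulated data
joint = np.column_stack([theta, x])                  # dependent pairs
marg = np.column_stack([rng.permutation(theta), x])  # pairing broken
X = np.vstack([joint, marg])
y = np.r_[np.ones(n), np.zeros(n)]
clf = MLPClassifier((64, 64), max_iter=300).fit(X, y)

# Classifier odds approximate the ratio p(x | theta) / p(x).
d = clf.predict_proba(np.array([[1.0, 1.2]]))[:, 1]
print("estimated log likelihood-to-evidence ratio:", np.log(d / (1 - d)))
```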
arXiv Detail & Related papers (2021-06-03T12:59:16Z)
- Oops I Took A Gradient: Scalable Sampling for Discrete Distributions [53.3142984019796]
We propose a gradient-based approximate sampler for discrete distributions and show that it outperforms generic samplers in a number of difficult settings.
We also demonstrate the use of our improved sampler for training deep energy-based models on high dimensional discrete data.
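A compact numpy rendition of the gradient-informed bit-flip proposal follows, on a made-up quadratic (Ising-like) log-probability; the energy function and its scale are illustrative assumptions, not the paper's benchmarks.

```python
# Gradient-informed Metropolis-Hastings over binary vectors: use the gradient
# of a continuous relaxation of log p to propose which bit to flip.
import numpy as np

rng = np.random.default_rng(0)
D = 16
W = rng.normal(scale=0.3, size=(D, D))
W = (W + W.T) / 2                                # symmetric couplings
b = rng.normal(size=D)

f = lambda x: x @ W @ x + b @ x                  # log p(x) up to a constant
grad = lambda x: 2 * W @ x + b                   # gradient of the relaxation

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

x = (rng.random(D) < 0.5).astype(float)
for _ in range(5000):
    d = -(2 * x - 1) * grad(x)                   # est. change from flipping i
    q = softmax(d / 2)                           # proposal over bit indices
    i = rng.choice(D, p=q)
    xp = x.copy()
    xp[i] = 1 - xp[i]
    qp = softmax(-(2 * xp - 1) * grad(xp) / 2)   # reverse proposal
    if np.log(rng.random()) < f(xp) - f(x) + np.log(qp[i]) - np.log(q[i]):
        x = xp                                   # Metropolis-Hastings accept
print("final log-probability (unnormalized):", f(x))
```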
arXiv Detail & Related papers (2021-02-08T20:08:50Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
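For intuition, here is the closed-form one-dimensional Gaussian special case of fusing per-dataset posteriors that share a common prior, using p(theta | all data) proportional to prior^(1-K) times the product of the K posteriors; the numbers are made up, and the paper's mean-field algorithm targets far more general posteriors.

```python
# Precision-weighted fusion of K Gaussian posteriors with a shared prior.
import numpy as np

mu0, s0 = 0.0, 2.0                              # shared prior N(mu0, s0^2)
post = [(1.1, 0.5), (0.8, 0.7), (1.4, 0.6)]     # per-dataset (mean, std)

lam0 = 1 / s0**2
lams = np.array([1 / s**2 for _, s in post])    # posterior precisions
mus = np.array([m for m, _ in post])
K = len(post)
lam = lams.sum() - (K - 1) * lam0               # fused precision
mu = (lams @ mus - (K - 1) * lam0 * mu0) / lam  # fused mean
print(f"fused posterior: N({mu:.3f}, {1 / np.sqrt(lam):.3f}^2)")
```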
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI (control as hybrid inference) which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.