Ensembling over Classifiers: a Bias-Variance Perspective
- URL: http://arxiv.org/abs/2206.10566v1
- Date: Tue, 21 Jun 2022 17:46:35 GMT
- Title: Ensembling over Classifiers: a Bias-Variance Perspective
- Authors: Neha Gupta, Jamie Smith, Ben Adlam, Zelda Mariet
- Abstract summary: We build upon the extension to the bias-variance decomposition by Pfau (2013) in order to gain crucial insights into the behavior of ensembles of classifiers.
We show that conditional estimates necessarily incur an irreducible error.
Empirically, standard ensembling reduces the bias, leading us to hypothesize that ensembles of classifiers may perform well in part because of this unexpected reduction.
- Score: 13.006468721874372
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Ensembles are a straightforward, remarkably effective method for improving
the accuracy, calibration, and robustness of models on classification tasks;
yet, the reasons that underlie their success remain an active area of research.
We build upon the extension to the bias-variance decomposition by Pfau (2013)
in order to gain crucial insights into the behavior of ensembles of
classifiers. Introducing a dual reparameterization of the bias-variance
tradeoff, we first derive generalized laws of total expectation and variance
for nonsymmetric losses typical of classification tasks. Comparing conditional
and bootstrap bias/variance estimates, we then show that conditional estimates
necessarily incur an irreducible error. Next, we show that ensembling in dual
space reduces the variance and leaves the bias unchanged, whereas standard
ensembling can arbitrarily affect the bias. Empirically, standard ensembling
reduces the bias, leading us to hypothesize that ensembles of classifiers may
perform well in part because of this unexpected reduction. We conclude with an
empirical analysis of recent deep learning methods that ensemble over
hyperparameters, revealing that these techniques indeed favor bias reduction.
This suggests that, contrary to classical wisdom, targeting bias reduction may
be a promising direction for classifier ensembles.
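
For intuition, the generalized decomposition of Pfau (2013) that the abstract builds on can be written as follows; the notation here is a standard presentation of that result, not necessarily the paper's own. For a Bregman divergence $D_F$, a random label $Y$, and an independent random prediction $\hat Y$,
$$\mathbb{E}[D_F(Y,\hat Y)] = \underbrace{\mathbb{E}[D_F(Y,\bar Y)]}_{\text{noise}} + \underbrace{D_F(\bar Y,\mathring Y)}_{\text{bias}} + \underbrace{\mathbb{E}[D_F(\mathring Y,\hat Y)]}_{\text{variance}}, \qquad \bar Y = \mathbb{E}[Y], \quad \mathring Y = (\nabla F)^{-1}\!\big(\mathbb{E}[\nabla F(\hat Y)]\big),$$
where $\mathring Y$ is the dual expectation of the prediction. Under the log loss (negative-entropy generator), $\nabla F$ is the log map up to constants, so averaging in dual space amounts to a normalized geometric mean of the members' probability vectors, whereas standard ensembling takes their arithmetic mean. A minimal NumPy sketch of the two ensembling modes the abstract contrasts is given below; the function names and toy numbers are illustrative, not taken from the paper.

import numpy as np

def standard_ensemble(probs):
    # Standard ensembling: arithmetic mean of the members' predicted
    # probability vectors (averaging in probability space).
    return np.mean(probs, axis=0)

def dual_ensemble(probs, eps=1e-12):
    # Dual-space ensembling under log loss: average log-probabilities and
    # renormalize, i.e. a normalized geometric mean of the members.
    log_mean = np.mean(np.log(probs + eps), axis=0)
    unnorm = np.exp(log_mean)
    return unnorm / unnorm.sum(axis=-1, keepdims=True)

# Toy example: three hypothetical ensemble members over three classes.
members = np.array([
    [0.70, 0.20, 0.10],
    [0.10, 0.80, 0.10],
    [0.55, 0.30, 0.15],
])
print("standard (arithmetic mean):", standard_ensemble(members))
print("dual (geometric mean):     ", dual_ensemble(members))

For the squared loss, $\nabla F$ is linear, so the two averages coincide and the classical bias-variance decomposition is recovered; the distinction only arises for the nonsymmetric losses the abstract targets.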