DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial
Estimation
- URL: http://arxiv.org/abs/2101.05544v1
- Date: Thu, 14 Jan 2021 10:53:26 GMT
- Title: DICE: Diversity in Deep Ensembles via Conditional Redundancy Adversarial
Estimation
- Authors: Alexandre Rame and Matthieu Cord
- Abstract summary: Deep ensembles perform better than a single network thanks to the diversity among their members.
Recent approaches regularize predictions to increase diversity; however, they also drastically decrease individual members' performances.
We introduce a novel training criterion called DICE: it increases diversity by reducing spurious correlations among features.
- Score: 109.11580756757611
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep ensembles perform better than a single network thanks to the diversity
among their members. Recent approaches regularize predictions to increase
diversity; however, they also drastically decrease individual members'
performances. In this paper, we argue that learning strategies for deep
ensembles need to tackle the trade-off between ensemble diversity and
individual accuracies. Motivated by arguments from information theory and
leveraging recent advances in neural estimation of conditional mutual
information, we introduce a novel training criterion called DICE: it increases
diversity by reducing spurious correlations among features. The main idea is
that features extracted from pairs of members should only share information
useful for target class prediction without being conditionally redundant.
Therefore, besides the classification loss with information bottleneck, we
adversarially prevent features from being conditionally predictable from each
other. We manage to reduce simultaneous errors while protecting class
information. We obtain state-of-the-art accuracy results on CIFAR-10/100: for
example, an ensemble of 5 networks trained with DICE matches an ensemble of 7
networks trained independently. We further analyze the consequences on
calibration, uncertainty estimation, out-of-distribution detection and online
co-distillation.
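The abstract's main idea — features from pairs of members should share only class-relevant information, without being conditionally redundant — can be illustrated with a toy penalty. Note this is a hypothetical, non-adversarial stand-in: the paper estimates conditional mutual information with a learned adversarial critic, whereas the `conditional_redundancy` function below merely measures per-class feature correlation between two members to convey the intuition.

```python
import numpy as np

def conditional_redundancy(z1, z2, y):
    """Toy proxy for conditional redundancy between two ensemble
    members' features, conditioned on the label: the mean absolute
    per-class Pearson correlation of paired feature dimensions.
    (DICE itself estimates conditional mutual information
    adversarially; this is only an illustration of the idea.)"""
    scores = []
    for c in np.unique(y):
        mask = (y == c)
        if mask.sum() < 2:
            continue  # need at least two samples to correlate
        a, b = z1[mask], z2[mask]
        # standardize each feature dimension within the class
        a = (a - a.mean(0)) / (a.std(0) + 1e-8)
        b = (b - b.mean(0)) / (b.std(0) + 1e-8)
        corr = (a * b).mean(0)  # per-dimension correlation
        scores.append(np.abs(corr).mean())
    return float(np.mean(scores)) if scores else 0.0
```

Feeding a member's features against themselves yields a score near 1 (fully redundant), while independent features score near 0; a DICE-style objective would push this quantity down for each pair of members while the classification loss preserves class information.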
Related papers
- The Curse of Diversity in Ensemble-Based Exploration [7.209197316045156]
Training a diverse ensemble of data-sharing agents can significantly impair the performance of the individual ensemble members.
We name this phenomenon the curse of diversity.
We demonstrate the potential of representation learning to counteract the curse of diversity.
arXiv Detail & Related papers (2024-05-07T14:14:50Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample; based on this, we dynamically reweight the objective loss terms so the network focuses more on representation learning for uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Deep Negative Correlation Classification [82.45045814842595]
Existing deep ensemble methods naively train many different models and then aggregate their predictions.
We propose deep negative correlation classification (DNCC)
DNCC yields a deep classification ensemble where the individual estimator is both accurate and negatively correlated.
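The snippet does not give the DNCC loss itself; as a hedged sketch, the classic negative correlation learning penalty (in the style of Liu & Yao) that this line of work builds on can be written as follows — treat it as background, not DNCC's exact formulation:

```python
import numpy as np

def ncl_penalty(preds):
    """Classic negative-correlation-learning penalty, a stand-in for
    the correlation term that negatively-correlated ensembles build on.
    preds: (members, samples) array of scalar predictions."""
    mean = preds.mean(axis=0)        # ensemble average per sample
    dev = preds - mean               # each member's deviation from it
    # for member i: (f_i - fbar) * sum_{j != i} (f_j - fbar)
    p = dev * (dev.sum(axis=0) - dev)
    return p.sum(axis=0).mean()      # averaged over samples
```

Because the deviations sum to zero for each sample, this penalty equals minus the (scaled) spread of member predictions: identical members give exactly 0, while diverse members give a negative value, so minimizing it rewards disagreement around the ensemble mean.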
arXiv Detail & Related papers (2022-12-14T07:35:20Z)
- Explicit Tradeoffs between Adversarial and Natural Distributional Robustness [48.44639585732391]
In practice, models need to enjoy both types of robustness to ensure reliability.
In this work, we show that in fact, explicit tradeoffs exist between adversarial and natural distributional robustness.
arXiv Detail & Related papers (2022-09-15T19:58:01Z)
- Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training [2.538209532048867]
We introduce a novel neural network training framework that increases a model's robustness to adversarial attacks.
We propose to improve model robustness to adversarial attacks by learning feature representations consistent under both data augmentations and adversarial perturbations.
We validate our method on the CIFAR-10 dataset, where it outperforms alternative supervised and self-supervised adversarial learning methods in both robust and clean accuracy.
arXiv Detail & Related papers (2022-03-16T21:41:27Z)
- Sparsity Winning Twice: Better Robust Generalization from More Efficient Training [94.92954973680914]
We introduce two alternatives for sparse adversarial training: (i) static sparsity and (ii) dynamic sparsity.
We find both methods to yield win-win: substantially shrinking the robust generalization gap and alleviating the robust overfitting.
Our approaches can be combined with existing regularizers, establishing new state-of-the-art results in adversarial training.
arXiv Detail & Related papers (2022-02-20T15:52:08Z)
- Agree to Disagree: Diversity through Disagreement for Better Transferability [54.308327969778155]
We propose D-BAT (Diversity-By-disAgreement Training), which enforces agreement among the models on the training data while encouraging disagreement on out-of-distribution data.
We show how D-BAT naturally emerges from the notion of generalized discrepancy.
arXiv Detail & Related papers (2022-02-09T12:03:02Z)
- Understanding the Logit Distributions of Adversarially-Trained Deep Neural Networks [6.439477789066243]
Adversarial defenses train deep neural networks to be invariant to the input perturbations from adversarial attacks.
Although adversarial training is successful at mitigating adversarial attacks, the behavioral differences between adversarially-trained (AT) models and standard models are still poorly understood.
We identify three logit characteristics essential to learning adversarial robustness.
arXiv Detail & Related papers (2021-08-26T19:09:15Z)
- Imbalanced Data Learning by Minority Class Augmentation using Capsule Adversarial Networks [31.073558420480964]
We propose a method to restore balance in imbalanced image datasets by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalescing of capsule-GAN is effective at recognizing highly overlapping classes with much fewer parameters compared with the convolutional-GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.