Exploring Model Learning Heterogeneity for Boosting Ensemble Robustness
- URL: http://arxiv.org/abs/2310.02237v1
- Date: Tue, 3 Oct 2023 17:47:25 GMT
- Title: Exploring Model Learning Heterogeneity for Boosting Ensemble Robustness
- Authors: Yanzhao Wu, Ka-Ho Chow, Wenqi Wei, Ling Liu
- Abstract summary: Deep neural network ensembles hold the potential to improve generalization performance for complex learning tasks.
This paper presents formal analysis and empirical evaluation of heterogeneous deep ensembles with high ensemble diversity.
- Score: 17.127312781074245
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural network ensembles hold the potential to improve generalization
performance for complex learning tasks. This paper presents formal analysis and
empirical evaluation to show that heterogeneous deep ensembles with high
ensemble diversity can effectively leverage model learning heterogeneity to
boost ensemble robustness. We first show that heterogeneous DNN models trained
for solving the same learning problem, e.g., object detection, can
significantly improve the mean average precision (mAP) through our weighted
bounding box ensemble consensus method. Second, we further compose ensembles of
heterogeneous models for solving different learning problems, e.g., object
detection and semantic segmentation, by introducing the connected component
labeling (CCL) based alignment. We show that this two-tier heterogeneity-driven
ensemble construction method can compose an ensemble team that promotes high
ensemble diversity and low negative correlation among member models of the
ensemble, strengthening ensemble robustness against both negative examples and
adversarial attacks. Third, we provide a formal analysis of the ensemble
robustness in terms of negative correlation. Extensive experiments validate the
enhanced robustness of heterogeneous ensembles in both benign and adversarial
settings. The source code is available on GitHub at
https://github.com/git-disl/HeteRobust.
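The abstract names two mechanisms without spelling them out: a weighted bounding-box ensemble consensus across detectors, and connected component labeling (CCL) to align segmentation outputs with detection outputs. Below is a minimal sketch of both ideas under stated assumptions (boxes as (x1, y1, x2, y2, score), greedy IoU clustering, confidence-weighted fusion, a density-based component score); it is an illustration, not the HeteRobust implementation.

```python
# Illustrative sketch only: the paper's exact consensus rule and CCL
# alignment are not reproduced here. Thresholds, the component scoring
# rule, and the fusion weights are assumptions made for this example.
import numpy as np
from scipy import ndimage

def mask_to_boxes(mask, class_id):
    """CCL-based alignment: turn one class of a segmentation mask into
    box-shaped detections via connected component labeling."""
    labeled, _ = ndimage.label(mask == class_id)
    boxes = []
    for sl in ndimage.find_objects(labeled):
        ys, xs = sl
        # Score a component by how densely it fills its bounding box
        # (a stand-in for whatever confidence the paper uses).
        score = float((mask[sl] == class_id).mean())
        boxes.append((xs.start, ys.start, xs.stop, ys.stop, score))
    return boxes

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2, score) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    ua = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (ua + 1e-9)

def weighted_box_consensus(per_model_boxes, iou_thr=0.5):
    """Greedily cluster boxes from all member models by IoU, then fuse
    each cluster by confidence-weighted averaging of coordinates."""
    flat = sorted((b for boxes in per_model_boxes for b in boxes),
                  key=lambda b: -b[4])
    clusters = []
    for b in flat:
        for c in clusters:
            if iou(c[0], b) >= iou_thr:
                c.append(b)
                break
        else:
            clusters.append([b])
    fused = []
    for c in clusters:
        w = np.array([b[4] for b in c])
        xy = np.array([b[:4] for b in c])
        fused.append(tuple((xy * w[:, None]).sum(0) / w.sum()) + (float(w.mean()),))
    return fused
```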
Related papers
- Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as learned ensemble aggregation methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the aggregator by randomly dropping base-model predictions during training.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization capabilities; a toy sketch of the prediction-dropout idea follows this entry.
arXiv Detail & Related papers (2024-10-06T15:25:39Z)
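A minimal sketch of the prediction-dropout idea summarized above, assuming a learned softmax-weighted aggregator over stacked member predictions; the function name and tensor shapes are illustrative, not taken from the paper.

```python
# Illustrative sketch, not the paper's code: a learned post-hoc
# aggregator over stacked member predictions, regularized by randomly
# dropping base-model predictions during training.
import torch

def aggregate(preds, logits, drop_p=0.3, training=True):
    """preds: (n_models, batch, n_classes) member predictions.
    logits: (n_models,) learnable aggregation weights (pre-softmax)."""
    if training:
        drop = torch.rand(preds.shape[0]) < drop_p
        if drop.all():  # always keep at least one member
            drop[torch.randint(len(drop), (1,))] = False
        logits = logits.masked_fill(drop, float("-inf"))
    weights = torch.softmax(logits, dim=0)  # renormalizes over kept members
    return (weights[:, None, None] * preds).sum(dim=0)

# Example: 4 member models, batch of 2, 3 classes.
preds = torch.softmax(torch.randn(4, 2, 3), dim=-1)
logits = torch.zeros(4, requires_grad=True)
out = aggregate(preds, logits)
```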
- Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models [83.02797560769285]
Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data.
Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts.
We propose Task Groupings Regularization, a novel approach that benefits from model heterogeneity by grouping and aligning conflicting tasks.
arXiv Detail & Related papers (2024-05-26T13:11:55Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Improved Robustness Against Adaptive Attacks With Ensembles and Error-Correcting Output Codes [0.0]
We investigate the robustness of Error-Correcting Output Codes (ECOC) ensembles through architectural improvements and ensemble diversity promotion.
We perform a comprehensive robustness assessment against adaptive attacks and investigate the relationship between ensemble diversity and robustness; a toy ECOC decoding sketch follows this entry.
arXiv Detail & Related papers (2023-03-04T05:05:17Z)
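For context on the ECOC entry above, a toy sketch of standard ECOC decoding: each binary classifier in the ensemble predicts one codeword bit, and the class whose codeword is nearest wins. The codebook and the L1 decoding rule here are illustrative assumptions, not the paper's design.

```python
# Toy Error-Correcting Output Codes (ECOC) decoding. The paper's
# architectural improvements and diversity promotion are not shown.
import numpy as np

# Hypothetical 4-class codebook: each row is a class codeword, each
# column is learned by one binary classifier in the ensemble.
codebook = np.array([
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1],
])

def ecoc_decode(bit_probs):
    """bit_probs: (n_bits,) probabilities from the binary classifiers.
    Return the class whose codeword is nearest in L1 distance."""
    dists = np.abs(codebook - bit_probs).sum(axis=1)
    return int(np.argmin(dists))

print(ecoc_decode(np.array([0.9, 0.2, 0.1, 0.8, 0.3, 0.1])))  # -> 1
```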
- Deep Negative Correlation Classification [82.45045814842595]
Existing deep ensemble methods naively train many different models and then aggregate their predictions.
We propose deep negative correlation classification (DNCC).
DNCC yields a deep classification ensemble in which each individual estimator is both accurate and negatively correlated with its peers; the classical penalty this idea builds on is shown after this entry.
arXiv Detail & Related papers (2022-12-14T07:35:20Z)
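As a reference point for the DNCC entry above, the classical negative correlation learning penalty (Liu & Yao, 1999) that negative-correlation ensembles build on; the paper's exact loss may differ.

```latex
% Classical negative correlation learning: member i minimizes its own
% error plus a penalty pushing its deviation from the ensemble mean
% against the other members' deviations (lambda controls the trade-off).
\[
  \bar{f}(x) = \frac{1}{M}\sum_{j=1}^{M} f_j(x), \qquad
  L_i = \frac{1}{2}\bigl(f_i(x) - y\bigr)^2
        + \lambda \bigl(f_i(x) - \bar{f}(x)\bigr)\sum_{j \neq i}\bigl(f_j(x) - \bar{f}(x)\bigr)
\]
```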
- Orthogonal Ensemble Networks for Biomedical Image Segmentation [10.011414604407681]
We introduce Orthogonal Ensemble Networks (OEN), a novel framework to explicitly enforce model diversity.
We benchmark the proposed framework in two challenging brain lesion segmentation tasks.
The experimental results show that our approach produces more robust and well-calibrated ensemble models; a minimal orthogonality-penalty sketch follows this entry.
arXiv Detail & Related papers (2021-05-22T23:44:55Z)
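A minimal sketch of an inter-model orthogonality penalty in the spirit of the OEN entry above, assuming the member models share an architecture so weight tensors can be paired; the paper's exact regularizer and where it is applied may differ.

```python
# Illustrative sketch, not the paper's regularizer: penalize alignment
# between corresponding weight tensors of every pair of member models.
import torch
import torch.nn.functional as F

def orthogonality_penalty(models):
    """Sum of squared cosine similarities between corresponding weight
    tensors of each model pair; add lambda * penalty to the loss."""
    penalty = torch.zeros(())
    for i in range(len(models)):
        for j in range(i + 1, len(models)):
            for wi, wj in zip(models[i].parameters(), models[j].parameters()):
                if wi.dim() < 2:  # skip biases and norm parameters
                    continue
                cos = F.cosine_similarity(wi.flatten(), wj.flatten(), dim=0)
                penalty = penalty + cos ** 2
    return penalty
```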
- Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
arXiv Detail & Related papers (2020-07-30T05:48:48Z)
- Neural Ensemble Search for Uncertainty Estimation and Dataset Shift [67.57720300323928]
Ensembles of neural networks achieve superior performance compared to stand-alone networks in terms of accuracy, uncertainty calibration and robustness to dataset shift.
We propose two methods for automatically constructing ensembles with varying architectures.
We show that the resulting ensembles outperform deep ensembles not only in terms of accuracy but also uncertainty calibration and robustness to dataset shift.
arXiv Detail & Related papers (2020-06-15T17:38:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.