On the Certified Robustness for Ensemble Models and Beyond
- URL: http://arxiv.org/abs/2107.10873v1
- Date: Thu, 22 Jul 2021 18:10:41 GMT
- Title: On the Certified Robustness for Ensemble Models and Beyond
- Authors: Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, Bo Li
- Abstract summary: Deep neural networks (DNNs) are vulnerable to adversarial examples, which aim to mislead them.
We analyze and provide certified robustness guarantees for ensemble ML models.
Inspired by the theoretical findings, we propose the lightweight Diversity Regularized Training (DRT) to train certifiably robust ensemble ML models.
- Score: 22.43134152931209
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent studies show that deep neural networks (DNNs) are vulnerable to
adversarial examples, which aim to mislead DNNs with small-magnitude
perturbations. To defend against such attacks, both empirical and theoretical
defense approaches have been extensively studied for a single ML model. In this
work, we analyze and provide certified robustness guarantees for ensemble ML
models, together with sufficient and necessary conditions of robustness for
different ensemble protocols. Although ensemble models are empirically shown to
be more robust than a single model, surprisingly, we find that in terms of
certified robustness, standard ensemble models achieve only marginal
improvement over a single model. Thus, to explore the conditions that guarantee
certifiably robust ensemble ML models, we first prove that diversified
gradients and a large confidence margin are sufficient and necessary conditions
for certifiably robust ensemble models under a model-smoothness assumption. We
then provide a bounded model-smoothness analysis based on the proposed
Ensemble-before-Smoothing strategy. We also prove that an ensemble
model can always achieve higher certified robustness than a single base model
under mild conditions. Inspired by the theoretical findings, we propose the
lightweight Diversity Regularized Training (DRT) to train certifiably robust
ensemble ML models. Extensive experiments show that our DRT-enhanced ensembles
consistently achieve higher certified robustness than existing single and
ensemble ML models, demonstrating state-of-the-art certified L2-robustness on
the MNIST, CIFAR-10, and ImageNet datasets.
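The two conditions above (diversified gradients and a large confidence margin) suggest a concrete training objective: keep the base models' input gradients dissimilar while enlarging each base model's confidence margin on Gaussian-augmented inputs. The following is a minimal, hypothetical PyTorch sketch of such a diversity-regularized loss in the spirit of DRT; the function name `drt_style_loss`, the hyperparameters `lam_gd`, `lam_cm`, and `sigma`, and the exact composition of the terms are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def drt_style_loss(models, x, y, lam_gd=0.1, lam_cm=0.1, sigma=0.25):
    """Sketch of a DRT-style objective: per-model cross-entropy on
    Gaussian-augmented inputs, plus (1) a gradient-diversity penalty that
    discourages aligned input gradients across base models and (2) a
    confidence-margin reward. All weights are illustrative assumptions."""
    # Gaussian augmentation, matching randomized-smoothing training.
    x_noisy = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)

    ce_losses, grads, margins = [], [], []
    for model in models:
        logits = model(x_noisy)
        ce = F.cross_entropy(logits, y)
        ce_losses.append(ce)
        # Input gradient of this model's loss, kept in the graph so the
        # diversity penalty itself is differentiable.
        g = torch.autograd.grad(ce, x_noisy, create_graph=True)[0]
        grads.append(g.flatten(1))
        # Margin between the true-class probability and the runner-up.
        probs = F.softmax(logits, dim=1)
        true_p = probs.gather(1, y.unsqueeze(1)).squeeze(1)
        runner_up = probs.scatter(1, y.unsqueeze(1), 0.0).max(dim=1).values
        margins.append(true_p - runner_up)

    loss = torch.stack(ce_losses).sum()
    # Penalize pairwise cosine similarity of input gradients (diversity).
    for i in range(len(models)):
        for j in range(i + 1, len(models)):
            cos = F.cosine_similarity(grads[i], grads[j], dim=1)
            loss = loss + lam_gd * cos.mean()
    # Reward large confidence margins (negative sign = maximize).
    loss = loss - lam_cm * torch.stack(margins).mean()
    return loss
```

At prediction and certification time, the Ensemble-before-Smoothing strategy described in the abstract aggregates the base models' outputs first and then applies randomized smoothing to the aggregated classifier, so certification reduces to the standard smoothing procedure (sketched after the related papers below).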
Related papers
- EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z)
- Benchmarking Zero-Shot Robustness of Multimodal Foundation Models: A Pilot Study [61.65123150513683]
Multimodal foundation models, such as CLIP, produce state-of-the-art zero-shot results.
It is reported that these models close the robustness gap by matching the performance of supervised models trained on ImageNet.
We show that CLIP leads to a significant robustness drop compared to supervised ImageNet models on our benchmark.
arXiv Detail & Related papers (2024-03-15T17:33:49Z)
- Latent Semantic Consensus For Deterministic Geometric Model Fitting [109.44565542031384]
We propose an effective method called Latent Semantic Consensus (LSC).
LSC formulates the model fitting problem into two latent semantic spaces based on data points and model hypotheses.
LSC is able to provide consistent and reliable solutions within only a few milliseconds for general multi-structural model fitting.
arXiv Detail & Related papers (2024-03-11T05:35:38Z)
- Multi-View Conformal Learning for Heterogeneous Sensor Fusion [0.12086712057375555]
We build and test multi-view and single-view conformal models for heterogeneous sensor fusion.
Our models provide theoretical marginal confidence guarantees since they are based on the conformal prediction framework.
Our results also show that multi-view models generate prediction sets with less uncertainty than single-view models.
arXiv Detail & Related papers (2024-02-19T17:30:09Z)
- Towards Calibrated Robust Fine-Tuning of Vision-Language Models [97.19901765814431]
This work proposes a robust fine-tuning method that improves both OOD accuracy and confidence calibration simultaneously in vision-language models.
We show that both OOD classification and OOD calibration errors share an upper bound consisting of two terms computed on ID data.
Based on this insight, we design a novel framework that conducts fine-tuning with a constrained multimodal contrastive loss enforcing a larger smallest singular value.
arXiv Detail & Related papers (2023-11-03T05:41:25Z)
- Certifying Ensembles: A General Certification Theory with S-Lipschitzness [128.2881318211724]
Ensembling has been shown to be beneficial for generalisation, uncertainty estimation, calibration, and mitigating the effects of concept drift.
In this work, we generalise Lipschitz continuity by introducing S-Lipschitz classifiers, which we use to analyse the theoretical robustness of ensembles.
arXiv Detail & Related papers (2023-04-25T17:50:45Z)
- Sparse MoEs meet Efficient Ensembles [49.313497379189315]
We study the interplay of two popular classes of such models: ensembles of neural networks and sparse mixtures of experts (sparse MoEs).
We present Efficient Ensemble of Experts (E^3), a scalable and simple ensemble of sparse MoEs that takes the best of both classes of models, while using up to 45% fewer FLOPs than a deep ensemble.
arXiv Detail & Related papers (2021-10-07T11:58:35Z)
- Enhancing Certified Robustness via Smoothed Weighted Ensembling [7.217295098686032]
We employ a Smoothed WEighted ENsembling (SWEEN) scheme to improve the performance of randomized smoothed classifiers (a certification sketch for such classifiers follows this list).
We show that, owing to the generality of ensembling, SWEEN can help achieve optimal certified robustness.
We also develop an adaptive prediction algorithm to reduce the prediction and certification cost of SWEEN models.
arXiv Detail & Related papers (2020-05-19T11:13:43Z)
- Revisiting Ensembles in an Adversarial Context: Improving Natural Accuracy [5.482532589225552]
There is still a significant gap in natural accuracy between robust and non-robust models.
We consider a number of ensemble methods designed to mitigate this performance difference.
We consider two schemes, one that combines predictions from several randomly robust models, and the other that fuses features from robust and standard models.
arXiv Detail & Related papers (2020-02-26T15:45:58Z)
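Several entries above, like the abstract's Ensemble-before-Smoothing strategy and SWEEN, operate on randomized smoothed classifiers. For reference, here is a minimal sketch of the standard Monte-Carlo L2 certification procedure from randomized smoothing (Cohen et al., 2019); `f` may be a single model or an ensemble whose outputs are aggregated before smoothing, the function name and defaults are illustrative assumptions, and the original two-stage sampling procedure is collapsed into one stage for brevity.

```python
import torch
from scipy.stats import beta, norm

def certify_smoothed(f, x, sigma=0.25, n=1000, alpha=0.001, num_classes=10):
    """Certify the smoothed classifier g(x) = argmax_c P(f(x + eps) = c),
    eps ~ N(0, sigma^2 I), by Monte-Carlo sampling. Returns (class, radius),
    or (None, 0.0) when the top-class probability cannot be bounded above 1/2."""
    with torch.no_grad():
        noise = sigma * torch.randn(n, *x.shape)         # n noisy copies of x
        preds = f(x.unsqueeze(0) + noise).argmax(dim=1)  # hard votes
    counts = torch.bincount(preds, minlength=num_classes)
    top = counts.argmax().item()
    k = counts[top].item()
    # One-sided Clopper-Pearson lower confidence bound on the top-class
    # probability under noise.
    p_lower = float(beta.ppf(alpha, k, n - k + 1))
    if p_lower <= 0.5:
        return None, 0.0                                 # abstain
    radius = sigma * float(norm.ppf(p_lower))            # certified L2 radius
    return top, radius
```

The certified radius grows monotonically with the lower bound on the top-class probability, which is why both the confidence-margin condition in the abstract and SWEEN's weighted aggregation aim to increase the smoothed model's confidence.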