Improved Robustness Against Adaptive Attacks With Ensembles and
Error-Correcting Output Codes
- URL: http://arxiv.org/abs/2303.02322v1
- Date: Sat, 4 Mar 2023 05:05:17 GMT
- Title: Improved Robustness Against Adaptive Attacks With Ensembles and
Error-Correcting Output Codes
- Authors: Thomas Philippon and Christian Gagné
- Abstract summary: We investigate the robustness of Error-Correcting Output Codes (ECOC) ensembles through architectural improvements and ensemble diversity promotion.
We perform a comprehensive robustness assessment against adaptive attacks and investigate the relationship between ensemble diversity and robustness.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural network ensembles have been studied extensively in the context of
adversarial robustness and most ensemble-based approaches remain vulnerable to
adaptive attacks. In this paper, we investigate the robustness of
Error-Correcting Output Codes (ECOC) ensembles through architectural
improvements and ensemble diversity promotion. We perform a comprehensive
robustness assessment against adaptive attacks and investigate the relationship
between ensemble diversity and robustness. Our results demonstrate the benefits
of ECOC ensembles for adversarial robustness compared to regular ensembles of
convolutional neural networks (CNNs) and show why the robustness of previous
implementations is limited. We also propose an adversarial training method
specific to ECOC ensembles that further improves robustness to adaptive
attacks.
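The abstract builds on the standard ECOC decoding scheme: each class is assigned a binary codeword, an ensemble of binary classifiers predicts one bit each, and the class whose codeword is nearest in Hamming distance to the predicted bit string wins. As a minimal sketch (not the authors' code; the codebook below is a hypothetical example), the error-correcting property looks like this:

```python
import numpy as np

# Hypothetical 4-class codebook with 6-bit codewords (rows = classes).
# Real ECOC designs choose codewords with large pairwise Hamming distance.
CODEBOOK = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0],
])

def ecoc_decode(bit_predictions: np.ndarray) -> int:
    """Return the class whose codeword is closest in Hamming distance
    to the ensemble's predicted bit string."""
    distances = (CODEBOOK != bit_predictions).sum(axis=1)
    return int(np.argmin(distances))

# Class 2's codeword with its last bit flipped (one binary classifier fooled):
noisy_bits = np.array([1, 0, 1, 0, 1, 1])
print(ecoc_decode(noisy_bits))  # still decodes to class 2
```

Because a few flipped bits can be absorbed by the nearest-codeword decoding, an attacker must simultaneously fool several ensemble members rather than a single output, which is the intuition behind using ECOC for adversarial robustness.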
Related papers
- Improving generalisation via anchor multivariate analysis [4.755199731453481]
We introduce a causal regularisation extension to anchor regression (AR) for improved out-of-distribution (OOD) generalisation.
We present anchor-compatible losses, aligning with the anchor framework to ensure robustness against distribution shifts.
We observe that simple regularisation enhances robustness in OOD settings.
arXiv Detail & Related papers (2024-03-04T09:21:10Z)
- Exploring Model Learning Heterogeneity for Boosting Ensemble Robustness [17.127312781074245]
Deep neural network ensembles hold the potential of improving generalization performance for complex learning tasks.
This paper presents formal analysis and empirical evaluation of heterogeneous deep ensembles with high ensemble diversity.
arXiv Detail & Related papers (2023-10-03T17:47:25Z)
- Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z)
- Deep Combinatorial Aggregation [58.78692706974121]
Deep ensemble is a simple and effective method that achieves state-of-the-art results for uncertainty-aware learning tasks.
In this work, we explore a generalization of deep ensemble called deep combinatorial aggregation (DCA).
DCA creates multiple instances of network components and aggregates their combinations to produce diversified model proposals and predictions.
arXiv Detail & Related papers (2022-10-12T17:35:03Z)
- Improving Covariance Conditioning of the SVD Meta-layer by Orthogonality [65.67315418971688]
Nearest Orthogonal Gradient (NOG) and Optimal Learning Rate (OLR) are proposed.
Experiments on visual recognition demonstrate that our methods can simultaneously improve the covariance conditioning and generalization.
arXiv Detail & Related papers (2022-07-05T15:39:29Z)
- Adversarial Vulnerability of Randomized Ensembles [12.082239973914326]
We show that randomized ensembles are more vulnerable to imperceptible adversarial perturbations than even standard AT models.
We propose a theoretically-sound and efficient adversarial attack algorithm (ARC) capable of compromising random ensembles even in cases where adaptive PGD fails to do so.
arXiv Detail & Related papers (2022-06-14T10:37:58Z)
- Building Robust Ensembles via Margin Boosting [98.56381714748096]
In adversarial robustness, a single model does not usually have enough power to defend against all possible adversarial attacks.
We develop an algorithm for learning an ensemble with maximum margin.
We show that our algorithm not only outperforms existing ensembling techniques, but also large models trained in an end-to-end fashion.
arXiv Detail & Related papers (2022-06-07T14:55:58Z)
- Jacobian Ensembles Improve Robustness Trade-offs to Adversarial Attacks [5.70772577110828]
We propose a novel approach, Jacobian Ensembles, to increase the robustness against UAPs.
Our results show that Jacobian Ensembles achieves previously unseen levels of accuracy and robustness.
arXiv Detail & Related papers (2022-04-19T08:04:38Z)
- A Regularized Implicit Policy for Offline Reinforcement Learning [54.7427227775581]
Offline reinforcement learning enables learning from a fixed dataset, without further interactions with the environment.
We propose a framework that supports learning a flexible yet well-regularized fully-implicit policy.
Experiments and ablation study on the D4RL dataset validate our framework and the effectiveness of our algorithmic designs.
arXiv Detail & Related papers (2022-02-19T20:22:04Z)
- Performance Evaluation of Adversarial Attacks: Discrepancies and Solutions [51.8695223602729]
Adversarial attack methods have been developed to challenge the robustness of machine learning models.
We propose a Piece-wise Sampling Curving (PSC) toolkit to effectively address the discrepancy.
PSC toolkit offers options for balancing the computational cost and evaluation effectiveness.
arXiv Detail & Related papers (2021-04-22T14:36:51Z)
- Improving Ensemble Robustness by Collaboratively Promoting and Demoting Adversarial Robustness [19.8818435601131]
Ensemble-based adversarial training is a principled approach to achieve robustness against adversarial attacks.
We propose in this work a simple yet effective strategy to collaborate among committee models of an ensemble model.
Our proposed framework provides the flexibility to reduce the adversarial transferability as well as to promote the diversity of ensemble members.
arXiv Detail & Related papers (2020-09-21T04:54:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.