Improving Ensemble Robustness by Collaboratively Promoting and Demoting
Adversarial Robustness
- URL: http://arxiv.org/abs/2009.09612v2
- Date: Fri, 4 Feb 2022 19:59:48 GMT
- Title: Improving Ensemble Robustness by Collaboratively Promoting and Demoting
Adversarial Robustness
- Authors: Anh Bui, Trung Le, He Zhao, Paul Montague, Olivier deVel, Tamas
Abraham, Dinh Phung
- Abstract summary: Ensemble-based adversarial training is a principled approach to achieving robustness against adversarial attacks.
In this work, we propose a simple yet effective strategy for collaboration among the committee models of an ensemble.
Our proposed framework provides the flexibility to reduce adversarial transferability as well as to promote the diversity of ensemble members.
- Score: 19.8818435601131
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensemble-based adversarial training is a principled approach to achieving
robustness against adversarial attacks. An important technique in this approach
is to control the transferability of adversarial examples among ensemble
members. In this work, we propose a simple yet effective strategy for collaboration
among the committee models of an ensemble. This is achieved via secure
and insecure sets defined for each member model on a given sample, which help
us to quantify and regularize the transferability. Consequently, our proposed
framework provides the flexibility to reduce adversarial transferability as
well as to promote the diversity of ensemble members, two crucial
factors for better robustness in our ensemble approach. We conduct extensive
and comprehensive experiments to demonstrate that our proposed method
outperforms state-of-the-art ensemble baselines while at the same time
detecting a wide range of adversarial examples with nearly perfect accuracy. Our
code is available at:
https://github.com/tuananhbui89/Crossing-Collaborative-Ensemble.
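As a rough, hedged illustration of the secure/insecure-set idea above (not the authors' released implementation; see the repository for that), the sketch below crafts an adversarial example against each committee member, splits the remaining members per sample into a secure set (still correct on the transferred example) and an insecure set (fooled by it), and weights the two loss terms separately. The names fgsm and collaborative_loss and the weights lam_sec and lam_ins are our own illustrative assumptions.

```python
# Minimal PyTorch sketch of secure/insecure-set regularization; all names
# and the one-step FGSM attack are illustrative assumptions, not the
# paper's released code.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    # One-step adversarial example crafted against a single member.
    x_in = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_in), y)
    grad, = torch.autograd.grad(loss, x_in)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def collaborative_loss(members, x, y, lam_sec=1.0, lam_ins=1.0):
    """For each member i, craft x_adv against it; every other member j is
    'secure' on samples it still classifies correctly and 'insecure' on
    samples it is fooled by. Keeping the secure set correct demotes
    transferability; correcting the insecure set promotes robustness."""
    total = torch.zeros((), device=x.device)
    for i, source in enumerate(members):
        x_adv = fgsm(source, x, y)
        for j, peer in enumerate(members):
            if j == i:
                continue
            logits = peer(x_adv)
            secure = logits.argmax(dim=1).eq(y)  # per-sample secure mask
            ce = F.cross_entropy(logits, y, reduction="none")
            total = total + (lam_sec * ce[secure].sum()
                             + lam_ins * ce[~secure].sum()) / len(y)
    return total / len(members)
```

In training, a term like this would be added to each member's standard adversarial training loss; the separate weights are what give the claimed flexibility to trade transferability reduction against member diversity.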
Related papers
- AgentVerse: Facilitating Multi-Agent Collaboration and Exploring
Emergent Behaviors [93.38830440346783]
We propose a multi-agent framework that can collaboratively adjust its composition as a greater-than-the-sum-of-its-parts system.
Our experiments demonstrate that the framework can effectively deploy multi-agent groups that outperform a single agent.
In view of these behaviors, we discuss some possible strategies to leverage positive ones and mitigate negative ones for improving the collaborative potential of multi-agent groups.
arXiv Detail & Related papers (2023-08-21T16:47:11Z)
- Improved Robustness Against Adaptive Attacks With Ensembles and Error-Correcting Output Codes [0.0]
We investigate the robustness of Error-Correcting Output Codes (ECOC) ensembles through architectural improvements and ensemble diversity promotion.
We perform a comprehensive robustness assessment against adaptive attacks and investigate the relationship between ensemble diversity and robustness.
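For readers unfamiliar with the term, here is a generic sketch of classical ECOC decoding (a standard construction, not this paper's specific architecture): each class is assigned a binary codeword, an ensemble of binary classifiers predicts one bit each, and the class whose codeword is nearest to the predicted bit vector wins.

```python
# Generic ECOC decoding sketch; the codebook and function name are
# illustrative, not taken from the paper.
import numpy as np

def ecoc_decode(bit_scores, codebook):
    """bit_scores: (batch, n_bits) soft outputs in [0, 1] from the bit
    classifiers; codebook: (n_classes, n_bits) in {0, 1}. Returns the
    index of the nearest codeword per sample (L1 distance, a soft
    generalization of Hamming decoding)."""
    # (batch, n_classes): distance from each sample's bits to each codeword
    dists = np.abs(bit_scores[:, None, :] - codebook[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)

# Example: 4 classes encoded with 6 bits each.
codebook = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 1, 1, 0, 0, 1],
    [1, 0, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0],
])
bits = np.array([[0.1, 0.2, 0.1, 0.9, 0.8, 0.7]])  # close to class 0
print(ecoc_decode(bits, codebook))  # -> [0]
```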
arXiv Detail & Related papers (2023-03-04T05:05:17Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Randomized Kaczmarz in Adversarial Distributed Setting [15.23454580321625]
We propose an adversary-tolerant iterative approach for convex optimization problems.
Our method ensures convergence and is capable of adapting to adversarial distributions.
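For context, below is the classical randomized Kaczmarz iteration named in the title, which projects the iterate onto one randomly sampled row constraint of Ax = b at a time; the paper's adversary-tolerant variant builds on top of this and is not reproduced here.

```python
# Classical randomized Kaczmarz for a consistent linear system Ax = b.
import numpy as np

def randomized_kaczmarz(A, b, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = (A ** 2).sum(axis=1)
    probs = row_norms / row_norms.sum()  # sample rows by squared norm
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Project x onto the hyperplane <a_i, x> = b_i.
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Example on a small consistent system.
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true
print(randomized_kaczmarz(A, b, iters=500))  # approx. [1., 2.]
```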
arXiv Detail & Related papers (2023-02-24T01:26:56Z)
- Learning Transferable Adversarial Robust Representations via Multi-view Consistency [57.73073964318167]
We propose a novel meta-adversarial multi-view representation learning framework with dual encoders.
We demonstrate the effectiveness of our framework on few-shot learning tasks from unseen domains.
arXiv Detail & Related papers (2022-10-19T11:48:01Z)
- Resisting Adversarial Attacks in Deep Neural Networks using Diverse Decision Boundaries [12.312877365123267]
Deep learning systems are vulnerable to crafted adversarial examples, which may be imperceptible to the human eye, but can lead the model to misclassify.
We develop a new ensemble-based solution that constructs defender models with diverse decision boundaries with respect to the original model.
We present extensive experiments using standard image classification datasets, namely MNIST, CIFAR-10 and CIFAR-100, against state-of-the-art adversarial attacks.
arXiv Detail & Related papers (2022-08-18T08:19:26Z)
- Revisiting GANs by Best-Response Constraint: Perspective, Methodology, and Application [49.66088514485446]
Best-Response Constraint (BRC) is a general learning framework to explicitly formulate the potential dependency of the generator on the discriminator.
We show that even with different motivations and formulations, a variety of existing GANs can all be uniformly improved by our flexible BRC methodology.
arXiv Detail & Related papers (2022-05-20T12:42:41Z)
- A Regularized Implicit Policy for Offline Reinforcement Learning [54.7427227775581]
Offline reinforcement learning enables learning from a fixed dataset, without further interactions with the environment.
We propose a framework that supports learning a flexible yet well-regularized fully-implicit policy.
Experiments and ablation study on the D4RL dataset validate our framework and the effectiveness of our algorithmic designs.
arXiv Detail & Related papers (2022-02-19T20:22:04Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
- Evaluating Ensemble Robustness Against Adversarial Attacks [0.0]
Adversarial examples, which are slightly perturbed inputs generated with the aim of fooling a neural network, are known to transfer between models.
This concept of transferability poses grave security concerns as it leads to the possibility of attacking models in a black box setting.
We introduce a gradient-based measure of how effectively an ensemble's constituent models collaborate to reduce the space of adversarial examples targeting the ensemble itself.
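The summary does not spell the measure out, so here is one plausible, hedged sketch of a gradient-based collaboration measure: the mean pairwise cosine similarity of the members' input gradients, where low alignment suggests the members share less of the adversarial subspace. The function name gradient_alignment is our own; the paper's actual measure may differ in detail.

```python
# Hypothetical gradient-alignment measure for an ensemble (PyTorch).
import torch
import torch.nn.functional as F

def gradient_alignment(members, x, y):
    """Mean pairwise cosine similarity between the per-member input
    gradients of the cross-entropy loss, computed per sample."""
    grads = []
    for m in members:
        x_in = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(m(x_in), y)
        g, = torch.autograd.grad(loss, x_in)
        grads.append(g.flatten(start_dim=1))  # (batch, input_dim)
    sims = []
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            sims.append(F.cosine_similarity(grads[i], grads[j], dim=1).mean())
    return torch.stack(sims).mean()
```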
arXiv Detail & Related papers (2020-05-12T13:20:54Z)
- Certifying Joint Adversarial Robustness for Model Ensembles [10.203602318836445]
Deep Neural Networks (DNNs) are often vulnerable to adversarial examples.
A proposed defense deploys an ensemble of models with the hope that, although the individual models may be vulnerable, an adversary will not be able to find an adversarial example that succeeds against the ensemble.
We consider the joint vulnerability of an ensemble of models, and propose a novel technique for certifying the joint robustness of ensembles.
arXiv Detail & Related papers (2020-04-21T19:38:31Z)