RAILS: A Robust Adversarial Immune-inspired Learning System
- URL: http://arxiv.org/abs/2012.10485v1
- Date: Fri, 18 Dec 2020 19:47:12 GMT
- Title: RAILS: A Robust Adversarial Immune-inspired Learning System
- Authors: Ren Wang, Tianqi Chen, Stephen Lindsly, Alnawaz Rehemtulla, Alfred
Hero, Indika Rajapakse
- Abstract summary: We propose a new adversarial defense framework, called the Robust Adversarial Immune-inspired Learning System (RAILS).
RAILS incorporates an Adaptive Immune System Emulation (AISE), which emulates in silico the biological mechanisms that are used to defend the host against attacks by pathogens.
We show that the RAILS learning curve exhibits similar diversity-selection learning phases as observed in our in vitro biological experiments.
- Score: 15.653578249331982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks against deep neural networks are continuously evolving.
Without effective defenses, they can lead to catastrophic failure. The
long-standing and arguably most powerful natural defense system is the
mammalian immune system, which has successfully defended against attacks by
novel pathogens for millions of years. In this paper, we propose a new
adversarial defense framework, called the Robust Adversarial Immune-inspired
Learning System (RAILS). RAILS incorporates an Adaptive Immune System Emulation
(AISE), which emulates in silico the biological mechanisms that are used to
defend the host against attacks by pathogens. We use RAILS to harden Deep
k-Nearest Neighbor (DkNN) architectures against evasion attacks. Evolutionary
programming is used to simulate processes in the natural immune system: B-cell
flocking, clonal expansion, and affinity maturation. We show that the RAILS
learning curve exhibits similar diversity-selection learning phases as observed
in our in vitro biological experiments. When applied to adversarial image
classification on three different datasets, RAILS delivers an additional
5.62%/12.56%/4.74% robustness improvement as compared to applying DkNN alone,
without appreciable loss of accuracy on clean data.
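To make the pipeline concrete, the sketch below illustrates the kind of evolutionary loop the abstract describes: the k nearest training points act as a naive B-cell population (flocking), each member is copied and mutated (clonal expansion), and the mutants with highest affinity to the query survive (affinity maturation), after which the matured population votes on the label. This is a minimal sketch assuming a simple Euclidean affinity, Gaussian mutations, and hypothetical function names and parameter values; it does not reproduce the authors' implementation.

```python
# Hedged sketch of an immune-inspired defense loop in the spirit of RAILS.
# All names (affinity, rails_predict) and parameter values are illustrative
# assumptions, not the authors' reference implementation.
import numpy as np

def affinity(candidates, query):
    """Affinity = negative Euclidean distance to the query in feature space."""
    return -np.linalg.norm(candidates - query, axis=1)

def rails_predict(query, train_feats, train_labels, k=10,
                  n_generations=5, expansion=8, mutation_scale=0.05,
                  rng=None):
    """Predict a label for `query` by emulating flocking, clonal expansion,
    and affinity maturation over the k nearest training points."""
    rng = np.random.default_rng(rng)
    # Flocking: recruit the k nearest training examples as the naive population.
    idx = np.argsort(-affinity(train_feats, query))[:k]
    population, labels = train_feats[idx], train_labels[idx]
    for _ in range(n_generations):
        # Clonal expansion: each member produces `expansion` mutated copies.
        clones = np.repeat(population, expansion, axis=0)
        clone_labels = np.repeat(labels, expansion)
        clones = clones + rng.normal(scale=mutation_scale, size=clones.shape)
        # Affinity maturation: keep only the k clones closest to the query.
        keep = np.argsort(-affinity(clones, query))[:k]
        population, labels = clones[keep], clone_labels[keep]
    # Plurality vote of the matured population gives the robust prediction.
    values, counts = np.unique(labels, return_counts=True)
    return values[np.argmax(counts)]

# Toy usage on random 2-D features from two clusters.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
labels = np.array([0] * 50 + [1] * 50)
print(rails_predict(np.array([3.5, 3.5]), feats, labels, rng=1))
```

In the full RAILS framework the affinities would presumably be computed on the deep feature representations used by the DkNN at each layer rather than on raw inputs, so the matured population hardens the nearest-neighbor vote against evasion attacks.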
Related papers
- Opponent Shaping for Antibody Development [49.26728828005039]
Anti-viral therapies are typically designed to target only the current strains of a virus.
However, therapy-induced selective pressures act on viruses to drive the emergence of mutated strains, against which initial therapies have reduced efficacy.
We build on a computational model of binding between antibodies and viral antigens to implement a genetic simulation of viral evolutionary escape.
arXiv Detail & Related papers (2024-09-16T14:56:27Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first that is robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Immune Defense: A Novel Adversarial Defense Mechanism for Preventing the Generation of Adversarial Examples [32.649613813876954]
The vulnerability of Deep Neural Networks (DNNs) to adversarial examples has been confirmed.
We propose a novel adversarial defense mechanism, which is referred to as immune defense.
This mechanism applies carefully designed quasi-imperceptible perturbations to the raw images to prevent the generation of adversarial examples.
arXiv Detail & Related papers (2023-03-08T10:47:17Z)
- Graph Adversarial Immunization for Certifiable Robustness [63.58739705845775]
Graph neural networks (GNNs) are vulnerable to adversarial attacks.
Existing defenses focus on developing adversarial training or model modification.
We propose and formulate graph adversarial immunization, i.e., vaccinating part of graph structure.
arXiv Detail & Related papers (2023-02-16T03:18:43Z)
- Adversarial Defense via Neural Oscillation inspired Gradient Masking [0.0]
Spiking neural networks (SNNs) attract great attention due to their low power consumption, low latency, and biological plausibility.
We propose a novel neural model that incorporates the bio-inspired oscillation mechanism to enhance the security of SNNs.
arXiv Detail & Related papers (2022-11-04T02:13:19Z)
- Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution [83.84968082791444]
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
arXiv Detail & Related papers (2021-08-29T08:11:36Z)
- RAILS: A Robust Adversarial Immune-inspired Learning System [14.772880825645819]
We develop a novel adversarial defense framework inspired by the adaptive immune system: the Robust Adversarial Immune-inspired Learning System (RAILS).
RAILS displays a tradeoff between robustness (diversity) and accuracy (specificity).
For the PGD attack, RAILS is found to improve robustness over existing methods by at least 5.62%, 12.5%, and 10.32%, respectively, without appreciable loss of standard accuracy.
arXiv Detail & Related papers (2021-06-27T17:57:45Z)
- Immuno-mimetic Deep Neural Networks (Immuno-Net) [15.653578249331982]
We introduce a new type of biomimetic model, one that borrows concepts from the immune system.
This immuno-mimetic model leads to a new computational biology framework for robustification of deep neural networks.
We show that Immuno-net RAILS improves the adversarial accuracy of a baseline method by as much as 12.5%.
arXiv Detail & Related papers (2021-06-27T16:45:23Z)
- Adversarial defense for automatic speaker verification by cascaded self-supervised learning models [101.42920161993455]
A growing number of malicious attackers attempt to launch adversarial attacks against automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
arXiv Detail & Related papers (2021-02-14T01:56:43Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)