Against All Odds: Winning the Defense Challenge in an Evasion
Competition with Diversification
- URL: http://arxiv.org/abs/2010.09569v1
- Date: Mon, 19 Oct 2020 14:53:06 GMT
- Title: Against All Odds: Winning the Defense Challenge in an Evasion
Competition with Diversification
- Authors: Erwin Quiring, Lukas Pirch, Michael Reimsbach, Daniel Arp, Konrad
Rieck
- Abstract summary: In this paper, we outline our learning-based system PEberus, which took first place in the defender challenge of the Microsoft Evasion Competition.
Our system combines multiple, diverse defenses: we address the semantic gap, use various classification models, and apply a stateful defense.
- Score: 13.236009846517662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning-based systems for malware detection operate in a hostile
environment. Consequently, adversaries will also target the learning system and
use evasion attacks to bypass the detection of malware. In this paper, we
outline our learning-based system PEberus, which took first place in the
defender challenge of the Microsoft Evasion Competition, resisting a variety of
attacks from independent attackers. Our system combines multiple, diverse
defenses: we address the semantic gap, use various classification models, and
apply a stateful defense. This competition gives us the unique opportunity to
examine evasion attacks under a realistic scenario. It also highlights that
existing machine learning methods can be hardened against attacks by thoroughly
analyzing the attack surface and implementing concepts from adversarial
learning. Our defense can serve as an additional baseline in the future to
strengthen the research on secure learning.
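The combination described in the abstract (semantic-gap heuristics, a set of diverse classifiers, and a stateful memory of suspicious queries) can be pictured with a short Python sketch. This is purely illustrative and not the authors' implementation; the class name, the oversized-file heuristic, and the prefix-hash fingerprint are all assumptions chosen to show how the three layers could compose.

```python
import hashlib
from collections import deque

class DiversifiedDefense:
    """Illustrative composition (not the authors' code) of the three layers
    named in the abstract: semantic-gap heuristics, a vote over diverse
    classifiers, and a stateful memory of near-duplicate submissions."""

    def __init__(self, classifiers, history_size=1000):
        self.classifiers = classifiers          # diverse, independently trained models
        self.seen = deque(maxlen=history_size)  # fingerprints of recent queries

    def semantic_gap_check(self, pe_bytes: bytes) -> bool:
        # Hypothetical heuristic: reject inputs that abuse the gap between
        # what the feature extractor sees and what actually executes,
        # here approximated by an implausibly large file.
        return len(pe_bytes) > 50 * 1024 * 1024

    def fingerprint(self, pe_bytes: bytes) -> str:
        # Coarse prefix hash so trivially perturbed resubmissions collide.
        return hashlib.sha256(pe_bytes[:4096]).hexdigest()

    def predict(self, pe_bytes: bytes) -> bool:
        """Return True when the file is flagged as malicious."""
        if self.semantic_gap_check(pe_bytes):
            return True
        fp = self.fingerprint(pe_bytes)
        if fp in self.seen:                     # stateful defense: repeated probing
            return True
        self.seen.append(fp)
        votes = sum(clf(pe_bytes) for clf in self.classifiers)
        return votes >= (len(self.classifiers) + 1) // 2   # majority vote

# Stand-in classifiers; real ones would be trained malware detectors.
clfs = [lambda b: len(b) % 2 == 0, lambda b: b[:2] != b"MZ", lambda b: False]
detector = DiversifiedDefense(clfs)
print(detector.predict(b"MZ" + b"\x00" * 100))  # False on this toy input
```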
Related papers
- A Novel Approach to Guard from Adversarial Attacks using Stable Diffusion [0.0]
Our proposal suggests a different approach to the AI Guardian framework.
Instead of including adversarial examples in the training process, we propose training the AI system without them.
This aims to create a system that is inherently resilient to a wider range of attacks.
arXiv Detail & Related papers (2024-05-03T04:08:15Z)
- On the Difficulty of Defending Contrastive Learning against Backdoor Attacks [58.824074124014224]
We show how contrastive backdoor attacks operate through distinctive mechanisms.
Our findings highlight the need for defenses tailored to the specificities of contrastive backdoor attacks.
arXiv Detail & Related papers (2023-12-14T15:54:52Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals a threat in this practical scenario: backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning [10.368343314144553]
We provide a game-theoretic framework for ensemble adversarial attacks and defenses.
We propose three new attack algorithms, specifically designed to target defenses with randomized transformations, multi-model voting schemes, and adversarial detector architectures.
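To make the game-theoretic framing concrete, the following Python sketch approximates a mixed Nash equilibrium of a small zero-sum defender/attacker game via fictitious play. It is not the paper's Game-Theoretic Mixed Experts algorithm; the payoff matrix, the defense and attack labels, and the solver choice are invented for illustration.

```python
import numpy as np

# Hypothetical payoff matrix: rows are defender configurations, columns are
# attack algorithms; each entry is the defender's accuracy under that pairing.
# All values are invented for illustration.
payoff = np.array([
    [0.62, 0.31, 0.48],   # randomized-transformation defense
    [0.55, 0.57, 0.29],   # multi-model voting defense
    [0.41, 0.52, 0.66],   # adversarial-detector defense
])

def fictitious_play(A, iters=20000):
    """Approximate a mixed Nash equilibrium of the zero-sum game with
    payoff matrix A by letting each player best-respond to the opponent's
    empirical mixture of past plays."""
    m, n = A.shape
    row_counts, col_counts = np.zeros(m), np.zeros(n)
    row_counts[0] = col_counts[0] = 1.0  # arbitrary opening moves
    for _ in range(iters):
        # Defender best-responds to the attacker's empirical mixture.
        row_counts[np.argmax(A @ (col_counts / col_counts.sum()))] += 1
        # Attacker best-responds, minimizing the defender's accuracy.
        col_counts[np.argmin((row_counts / row_counts.sum()) @ A)] += 1
    return row_counts / row_counts.sum(), col_counts / col_counts.sum()

defense_mix, attack_mix = fictitious_play(payoff)
print("defender mixture over defenses:", defense_mix.round(3))
print("attacker mixture over attacks:", attack_mix.round(3))
```

In such a game the defender's equilibrium strategy typically randomizes over several defenses rather than committing to one, which is the intuition behind combinational defenses.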
arXiv Detail & Related papers (2022-11-26T21:35:01Z)
- Ares: A System-Oriented Wargame Framework for Adversarial ML [3.197282271064602]
Ares is an evaluation framework for adversarial ML that allows researchers to explore attacks and defenses in a realistic wargame-like environment.
Ares frames the conflict between the attacker and defender as two agents in a reinforcement learning environment with opposing objectives.
This allows the introduction of system-level evaluation metrics such as time to failure and evaluation of complex strategies.
arXiv Detail & Related papers (2022-10-24T04:55:18Z)
- Survey on Federated Learning Threats: concepts, taxonomy on attacks and defences, experimental study and challenges [10.177219272933781]
Federated learning is a machine learning paradigm that emerges as a solution to the privacy-preservation demands in artificial intelligence.
Like machine learning in general, federated learning is threatened by adversarial attacks against the integrity of the learning model and the privacy of the data; addressing them requires a distributed approach covering both local and global learning.
arXiv Detail & Related papers (2022-01-20T12:23:03Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Adversarial defense for automatic speaker verification by cascaded self-supervised learning models [101.42920161993455]
More and more malicious attackers attempt to launch adversarial attacks against automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
arXiv Detail & Related papers (2021-02-14T01:56:43Z)
- Block Switching: A Stochastic Approach for Deep Learning Security [75.92824098268471]
Recent studies of adversarial attacks have revealed the vulnerability of modern deep learning models.
In this paper, we introduce Block Switching (BS), a defense strategy against adversarial attacks based on stochasticity.
arXiv Detail & Related papers (2020-02-18T23:14:25Z)
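As a rough illustration of the stochastic idea in the Block Switching entry above, here is a minimal PyTorch sketch (not the paper's code) that keeps several parallel lower "blocks" with identical shapes and routes each forward pass through one picked at random; all layer sizes and names are assumptions.

```python
import random
import torch
import torch.nn as nn

class BlockSwitchingNet(nn.Module):
    """Illustrative sketch: the lower part of the network exists in several
    independently parameterized variants ("blocks"); at inference time one
    block is chosen uniformly at random, so the effective model an attacker
    probes changes from query to query."""

    def __init__(self, num_blocks: int = 4, num_classes: int = 10):
        super().__init__()
        # Parallel lower blocks with identical output shapes but
        # independent weights (the paper trains such channels separately).
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            for _ in range(num_blocks)
        )
        # Shared upper layers mapping block features to class scores.
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        block = random.choice(self.blocks)  # stochastic channel selection
        return self.head(block(x))

model = BlockSwitchingNet()
logits = model(torch.randn(1, 3, 32, 32))  # each call may use a different block
print(logits.shape)  # torch.Size([1, 10])
```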