Attack Tree Analysis for Adversarial Evasion Attacks
- URL: http://arxiv.org/abs/2312.16957v1
- Date: Thu, 28 Dec 2023 11:02:37 GMT
- Title: Attack Tree Analysis for Adversarial Evasion Attacks
- Authors: Yuki Yamaguchi and Toshiaki Aoki
- Abstract summary: It is necessary to analyze the risk of ML-specific attacks when introducing ML-based systems.
In this study, we propose a quantitative evaluation method for analyzing the risk of evasion attacks using attack trees.
- Score: 1.0442919217572477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the evolution of deep learning has promoted the application of
machine learning (ML) to various systems. However, there are ML systems, such
as autonomous vehicles, that cause critical damage when they misclassify.
Moreover, there are ML-specific attacks, called adversarial attacks, that exploit
the characteristics of ML systems. For example, one type of adversarial attack
is an evasion attack, which uses minute perturbations called "adversarial
examples" to intentionally cause classifiers to misclassify. Therefore, it is
necessary to analyze the risk of ML-specific attacks when introducing ML-based
systems. In
this study, we propose a quantitative evaluation method for analyzing the risk
of evasion attacks using attack trees. The proposed method consists of the
extension of the conventional attack tree to analyze evasion attacks and the
systematic construction method of the extension. In the extension of the
conventional attack tree, we introduce ML and conventional attack nodes to
represent various characteristics of evasion attacks. In the systematic
construction process, we propose a procedure to construct the attack tree. The
procedure consists of three steps: (1) organizing information about attack
methods in the literature into a matrix, (2) identifying evasion attack scenarios
from methods in the matrix, and (3) constructing the attack tree from the
identified scenarios using a pattern. Finally, we conducted experiments on
three ML image recognition systems to demonstrate the versatility and
effectiveness of our proposed method.
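A minimal sketch of how such an extended attack tree and its quantitative evaluation could be represented is shown below. The node attributes, gate semantics, and aggregation rule here are illustrative assumptions, not the paper's exact formulation:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AttackNode:
    """A node in the extended attack tree.

    kind      : "ML" for ML-specific attack steps (e.g. crafting adversarial
                examples) or "CONV" for conventional attack steps
                (e.g. gaining query access to the model).
    gate      : "AND"/"OR" for intermediate nodes, None for leaves.
    likelihood: assumed success probability for leaf nodes (illustrative
                attribute; the paper's actual node attributes may differ).
    """
    name: str
    kind: str = "CONV"
    gate: Optional[str] = None
    likelihood: float = 0.0
    children: List["AttackNode"] = field(default_factory=list)

    def evaluate(self) -> float:
        """Aggregate leaf likelihoods up the tree (illustrative rule)."""
        if not self.children:
            return self.likelihood
        child_values = [c.evaluate() for c in self.children]
        if self.gate == "AND":            # all sub-steps must succeed
            result = 1.0
            for v in child_values:
                result *= v
            return result
        result = 1.0                      # OR gate: any sub-step suffices
        for v in child_values:
            result *= (1.0 - v)
        return 1.0 - result

# Example: one evasion-attack scenario against an image classifier.
root = AttackNode(
    name="Evade traffic-sign classifier", gate="AND", children=[
        AttackNode(name="Obtain model access", gate="OR", children=[
            AttackNode(name="Query public API", kind="CONV", likelihood=0.8),
            AttackNode(name="Steal model file", kind="CONV", likelihood=0.2),
        ]),
        AttackNode(name="Craft adversarial example",
                   kind="ML", likelihood=0.6),
    ])

print(f"Estimated attack likelihood: {root.evaluate():.3f}")
```

The leaf nodes would be derived from steps (1)-(3) of the proposed procedure, with ML nodes for evasion-specific steps and conventional nodes for the rest.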
Related papers
- AutoJailbreak: Exploring Jailbreak Attacks and Defenses through a Dependency Lens [83.08119913279488]
We present a systematic analysis of the dependency relationships in jailbreak attack and defense techniques.
We propose three comprehensive, automated, and logical frameworks.
We show that the proposed ensemble jailbreak attack and defense framework significantly outperforms existing research.
arXiv Detail & Related papers (2024-06-06T07:24:41Z) - No Two Devils Alike: Unveiling Distinct Mechanisms of Fine-tuning Attacks [13.610008743851157]
We analyze the two most representative types of attack approaches: Explicit Harmful Attack (EHA) and Identity-Shifting Attack (ISA).
Unlike ISA, EHA tends to aggressively target the harmful recognition stage. While both EHA and ISA disrupt the latter two stages, the extent and mechanisms of their attacks differ significantly.
arXiv Detail & Related papers (2024-05-25T13:38:40Z) - Defense Against Model Extraction Attacks on Recommender Systems [53.127820987326295]
We introduce Gradient-based Ranking Optimization (GRO) to defend against model extraction attacks on recommender systems.
GRO aims to minimize the loss of the protected target model while maximizing the loss of the attacker's surrogate model.
Results show GRO's superior effectiveness in defending against model extraction attacks.
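A minimal sketch of this kind of min-max objective is given below (assuming PyTorch; the models, losses, and the trade-off weight alpha are illustrative stand-ins, not GRO's actual formulation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative stand-ins for the protected recommender and the attacker's
# surrogate; GRO's actual architectures and losses are not given in the summary.
target_model = nn.Linear(16, 8)
surrogate_model = nn.Linear(16, 8)
optimizer = torch.optim.SGD(target_model.parameters(), lr=0.01)
alpha = 0.5  # assumed trade-off between recommendation quality and protection

features = torch.randn(32, 16)
labels = torch.randint(0, 8, (32,))

# Utility loss: keep the protected model accurate.
target_scores = target_model(features)
utility_loss = F.cross_entropy(target_scores, labels)

# Imitation loss: how well the surrogate reproduces the target's outputs.
# GRO-style objective: minimize the target's loss, maximize the surrogate's.
imitation_loss = F.mse_loss(surrogate_model(features), target_scores)
combined = utility_loss - alpha * imitation_loss

optimizer.zero_grad()
combined.backward()
optimizer.step()
```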
arXiv Detail & Related papers (2023-10-25T03:30:42Z) - Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
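A toy illustration of parsing victim-model attributes from adversarial perturbations is sketched below (scikit-learn on synthetic data; the MPN itself is a learned network, and the binary attribute here is a made-up placeholder):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: each row is a flattened perturbation (x_adv - x),
# labeled with an attribute of the victim model that generated it
# (e.g. 0 = "uses batch norm", 1 = "uses layer norm").
n_samples, dim = 2000, 64
vm_attribute = rng.integers(0, 2, size=n_samples)
perturbations = rng.normal(size=(n_samples, dim))
# Inject a weak attribute-dependent signal, mimicking the premise that
# perturbations carry information about the victim model.
perturbations[:, 0] += 0.7 * vm_attribute

X_train, X_test, y_train, y_test = train_test_split(
    perturbations, vm_attribute, test_size=0.3, random_state=0)

parser = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Attribute-recovery accuracy: {parser.score(X_test, y_test):.2f}")
```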
arXiv Detail & Related papers (2023-03-13T21:21:49Z) - Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective [69.25513235556635]
Adversarial machine learning (AML) studies the adversarial phenomenon of machine learning, which may cause models to make predictions that are inconsistent with human judgment or otherwise unexpected.
Some paradigms have been recently developed to explore this adversarial phenomenon occurring at different stages of a machine learning system.
We propose a unified mathematical framework to cover existing attack paradigms.
arXiv Detail & Related papers (2023-02-19T02:12:21Z) - Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z) - Threat Detection for General Social Engineering Attack Using Machine Learning Techniques [7.553860996595933]
This paper explores the threat detection for general Social Engineering (SE) attack using Machine Learning (ML) techniques.
The experimental results and analyses show that: 1) ML techniques are feasible for detecting general SE attacks, and some ML models are quite effective; 2) ML-based SE threat detection is complementary to KG-based approaches.
arXiv Detail & Related papers (2022-03-15T14:18:22Z) - Zero-shot learning approach to adaptive Cybersecurity using Explainable AI [0.5076419064097734]
We present a novel approach to handle the alarm flooding problem faced by cybersecurity systems such as security information and event management (SIEM) and intrusion detection systems (IDS).
We apply a zero-shot learning method to machine learning (ML) by leveraging explanations for predictions of anomalies generated by a ML model.
In this approach, without any prior knowledge of the attack, we try to identify it, decipher the features that contribute to the classification, and bucketize the attack into a specific category.
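One way to picture this pipeline is sketched below (scikit-learn on synthetic data; the per-sample "explanation" here is a deliberately simplified importance-weighted deviation, not the paper's actual XAI method):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Train a detector on normal traffic vs. known anomalies (synthetic data).
X_normal = rng.normal(0.0, 1.0, size=(500, 10))
X_known_anomaly = rng.normal(2.0, 1.0, size=(100, 10))
X_train = np.vstack([X_normal, X_known_anomaly])
y_train = np.array([0] * 500 + [1] * 100)
detector = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# New alarms from an unseen attack type (no labels available).
X_new = rng.normal(3.0, 1.0, size=(50, 10))

# Simplified per-sample explanation: feature importance times deviation from
# the normal profile.  Clustering these explanation vectors "bucketizes"
# alarms into candidate attack categories without prior knowledge of the attack.
explanations = detector.feature_importances_ * np.abs(X_new - X_normal.mean(axis=0))
buckets = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(explanations)
print("Alarm buckets:", np.bincount(buckets))
```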
arXiv Detail & Related papers (2021-06-21T06:29:13Z) - Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks [0.7883722807601676]
Even production systems, such as self-driving cars and ML-as-a-service offerings, are susceptible to adversarial inputs.
Can perturbed inputs be attributed to the methods used to generate the attack?
We introduce the concept of adversarial attack attribution and create a simple supervised learning experimental framework to examine the feasibility of discovering attributable signals in adversarial attacks.
arXiv Detail & Related papers (2021-01-08T08:16:41Z) - Composite Adversarial Attacks [57.293211764569996]
Adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
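A skeleton of what searching for a strong attack composition might look like is shown below (plain Python with placeholder attack functions and a toy evaluator; CAA's actual search strategy and attack pool are not specified in this summary):

```python
from itertools import permutations

# Placeholder attack steps: each takes an input and returns a perturbed input.
def fgsm_like(x):   return [v + 0.1 for v in x]
def noise_like(x):  return [v * 1.05 for v in x]
def rotate_like(x): return list(reversed(x))

ATTACK_POOL = {"fgsm": fgsm_like, "noise": noise_like, "rotate": rotate_like}

def evasion_rate(attack_sequence, samples):
    """Placeholder evaluator: fraction of samples 'evading' a toy defense."""
    evaded = 0
    for x in samples:
        for attack in attack_sequence:
            x = attack(x)
        evaded += int(sum(x) > 1.5)  # stand-in for a defended classifier flipping
    return evaded / len(samples)

samples = [[0.2, 0.3, 0.4], [0.5, 0.1, 0.2], [0.3, 0.3, 0.3]]

# Exhaustively search orderings of up to two attacks and keep the best one.
best = max(
    (seq for r in (1, 2) for seq in permutations(ATTACK_POOL.values(), r)),
    key=lambda seq: evasion_rate(seq, samples),
)
print("Best composite:", [f.__name__ for f in best],
      "rate:", evasion_rate(best, samples))
```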
arXiv Detail & Related papers (2020-12-10T03:21:16Z) - Evaluating and Improving Adversarial Robustness of Machine Learning-Based Network Intrusion Detectors [21.86766733460335]
We conduct the first systematic study of gray/black-box traffic-space adversarial attacks to evaluate the robustness of ML-based NIDSs.
Our work outperforms previous ones in the following aspects.
We also propose a defense scheme against adversarial attacks to improve system robustness.
arXiv Detail & Related papers (2020-05-15T13:06:00Z)