When Side-Channel Attacks Break the Black-Box Property of Embedded
Artificial Intelligence
- URL: http://arxiv.org/abs/2311.14005v1
- Date: Thu, 23 Nov 2023 13:41:22 GMT
- Title: When Side-Channel Attacks Break the Black-Box Property of Embedded
Artificial Intelligence
- Authors: Benoit Coqueret, Mathieu Carbone, Olivier Sentieys, Gabriel Zaid
- Abstract summary: Deep neural networks (DNNs) are subject to malicious examples designed to fool the network while remaining undetectable to a human observer.
We propose an architecture-agnostic attack that overcomes this constraint by extracting the logits.
Our method combines hardware and software attacks by performing a side-channel attack that exploits electromagnetic leakages.
- Score: 0.8192907805418583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence, and specifically deep neural networks (DNNs), has
rapidly emerged in the past decade as the standard for several tasks from
specific advertising to object detection. The performance offered has led DNN
algorithms to become a part of critical embedded systems, requiring both
efficiency and reliability. In particular, DNNs are subject to malicious
examples designed to fool the network while remaining undetectable to a human
observer: adversarial examples. While previous studies propose frameworks to
implement such attacks in black-box settings, they often rely on the
hypothesis that the attacker has access to the logits of the neural network,
breaking the assumption of a traditional black box. In this paper, we
investigate a real black-box scenario where the attacker has no access to
the logits. In particular, we propose an architecture-agnostic attack which
overcomes this constraint by extracting the logits. Our method combines hardware
and software attacks, by performing a side-channel attack that exploits
electromagnetic leakages to extract the logits for a given input, allowing an
attacker to estimate the gradients and produce state-of-the-art adversarial
examples to fool the targeted neural network. Through this example of
adversarial attack, we demonstrate the effectiveness of logit extraction via a
side-channel attack as a first step toward more general attack frameworks that
require either the logits or the confidence scores.
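The paper does not include code; the sketch below only illustrates, under stated assumptions, how per-query logits recovered through a side channel could drive a gradient-based black-box attack. The function query_logits is a hypothetical placeholder for the electromagnetic read-out, the 10-class output and image size are arbitrary, and the symmetric finite-difference (NES-style) gradient estimator and FGSM step are standard techniques, not necessarily the authors' exact pipeline.

```python
# Minimal sketch (not the authors' code): zeroth-order gradient estimation
# from per-query logits, followed by one FGSM-style perturbation step.
import numpy as np


def query_logits(x: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the side-channel read-out: in the paper's
    setting, these logits would be recovered from electromagnetic leakage."""
    # Dummy 10-class logits so the sketch runs end to end.
    rng = np.random.default_rng(int(np.abs(x).sum() * 1e6) % (2**32))
    return rng.normal(size=10)


def cross_entropy_from_logits(logits: np.ndarray, label: int) -> float:
    """Numerically stable cross-entropy computed directly from raw logits."""
    z = logits - logits.max()
    return float(np.log(np.exp(z).sum()) - z[label])


def estimate_gradient(x, label, sigma=1e-3, n_samples=50):
    """NES-style zeroth-order estimate of the input gradient of the loss,
    using symmetric finite differences along random Gaussian directions."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.normal(size=x.shape)
        loss_plus = cross_entropy_from_logits(query_logits(x + sigma * u), label)
        loss_minus = cross_entropy_from_logits(query_logits(x - sigma * u), label)
        grad += (loss_plus - loss_minus) / (2.0 * sigma) * u
    return grad / n_samples


def fgsm_step(x, label, epsilon=0.03):
    """One untargeted FGSM-style step driven by the estimated gradient."""
    x_adv = x + epsilon * np.sign(estimate_gradient(x, label))
    return np.clip(x_adv, 0.0, 1.0)  # keep the perturbed input in a valid range


if __name__ == "__main__":
    x = np.random.rand(28, 28)      # toy grayscale input
    x_adv = fgsm_step(x, label=3)   # push the input away from class 3
    print("max perturbation:", np.abs(x_adv - x).max())
```

Note that each gradient estimate in this sketch costs about 2 * n_samples queries, and hence that many side-channel acquisitions, which is why reliable per-query logit extraction is the enabling step the paper focuses on.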
Related papers
- Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion
Detection Systems [0.7829352305480285]
A growing number of researchers have recently investigated the feasibility of such attacks against machine learning-based security systems.
This study investigates the actual feasibility of adversarial attacks, specifically evasion attacks, against network-based intrusion detection systems.
Our goal is to create adversarial botnet traffic that can avoid detection while still performing all of its intended malicious functionality.
arXiv Detail & Related papers (2023-03-12T14:01:00Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- An anomaly detection approach for backdoored neural networks: face
recognition as a case study [77.92020418343022]
We propose a novel backdoored network detection method based on the principle of anomaly detection.
We test our method on a novel dataset of backdoored networks and report detectability results with perfect scores.
arXiv Detail & Related papers (2022-08-22T12:14:13Z)
- An integrated Auto Encoder-Block Switching defense approach to prevent
adversarial attacks [0.0]
The vulnerability of state-of-the-art Neural Networks to adversarial input samples has increased drastically.
This article proposes a defense algorithm that utilizes the combination of an auto-encoder and block-switching architecture.
arXiv Detail & Related papers (2022-03-11T10:58:24Z)
- AEVA: Black-box Backdoor Detection Using Adversarial Extreme Value
Analysis [23.184335982913325]
We address the black-box hard-label backdoor detection problem.
We show that the objective of backdoor detection is bounded by an adversarial objective.
We propose the adversarial extreme value analysis to detect backdoors in black-box neural networks.
arXiv Detail & Related papers (2021-10-28T04:36:48Z)
- Subnet Replacement: Deployment-stage backdoor attack against deep neural
networks in gray-box setting [3.69409109715429]
We study the realistic potential of conducting backdoor attacks against deep neural networks (DNNs) during the deployment stage.
We propose the Subnet Replacement Attack (SRA), which is capable of embedding a backdoor into DNNs by directly modifying a limited number of model parameters.
arXiv Detail & Related papers (2021-07-15T10:47:13Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Black-box Detection of Backdoor Attacks with Limited Information and
Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- Improving Query Efficiency of Black-box Adversarial Attack [75.71530208862319]
We propose a Neural Process-based black-box adversarial attack (NP-Attack).
NP-Attack can greatly decrease the query count under the black-box setting.
arXiv Detail & Related papers (2020-09-24T06:22:56Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.