Adversarial Attacks on Deep Learning Systems for User Identification
based on Motion Sensors
- URL: http://arxiv.org/abs/2009.01109v2
- Date: Thu, 5 Nov 2020 20:11:32 GMT
- Title: Adversarial Attacks on Deep Learning Systems for User Identification
based on Motion Sensors
- Authors: Cezara Benegui, Radu Tudor Ionescu
- Abstract summary: This study focuses on deep learning methods for implicit authentication based on motion sensor signals.
In this scenario, attackers could craft adversarial examples with the aim of gaining unauthorized access.
To our knowledge, this is the first study that quantifies the impact of adversarial attacks on machine learning models used for user identification based on motion sensors.
- Score: 24.182791316595576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For the time being, mobile devices employ explicit authentication mechanisms,
namely, unlock patterns, PINs or biometric-based systems such as fingerprint or
face recognition. While these systems are prone to well-known attacks, the
introduction of an implicit and unobtrusive authentication layer can greatly
enhance security. In this study, we focus on deep learning methods for implicit
authentication based on motion sensor signals. In this scenario, attackers
could craft adversarial examples with the aim of gaining unauthorized access
and even preventing a legitimate user from accessing his or her mobile device. To our
knowledge, this is the first study that aims at quantifying the impact of
adversarial attacks on machine learning models used for user identification
based on motion sensors. To accomplish our goal, we study multiple methods for
generating adversarial examples. We propose three research questions regarding
the impact and the universality of adversarial examples, and we conduct
experiments to answer them. Our empirical results
demonstrate that certain adversarial example generation methods are specific to
the attacked classification model, while others tend to be generic. We thus
conclude that deep neural networks trained for user identification tasks based
on motion sensors are subject to a high percentage of misclassification when
given adversarial input.
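As an illustration of how such adversarial examples can be crafted, below is a minimal sketch of the fast gradient sign method (FGSM) applied to a motion-sensor user-identification model. FGSM is only one standard generation method and is not necessarily among the exact methods studied in the paper; the PyTorch model, the input shape (sensor channels by time steps) and the epsilon budget are illustrative assumptions.

```python
# Hypothetical sketch: FGSM-style adversarial example against a motion-sensor
# user-identification network. Model and tensor shapes are assumptions.
import torch
import torch.nn.functional as F

def fgsm_example(model, signal, user_id, epsilon=0.05):
    """Craft an adversarial motion-sensor window.

    model:   a PyTorch classifier mapping sensor windows to user logits
    signal:  tensor of shape (1, channels, timesteps), e.g. 6 accelerometer/gyroscope axes
    user_id: tensor of shape (1,) holding the true user label
    epsilon: perturbation budget per sensor reading
    """
    signal = signal.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(signal), user_id)
    loss.backward()
    # Move each reading by +/- epsilon in the direction that increases the loss,
    # pushing the classifier toward misidentifying the user.
    return (signal + epsilon * signal.grad.sign()).detach()
```

A gradient-based attack like this one is specific to the attacked classifier; the generic behaviour mentioned in the abstract instead refers to perturbations that transfer across inputs or models.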
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- A Human-in-the-Middle Attack against Object Detection Systems [4.764637544913963]
We propose a novel hardware attack inspired by Man-in-the-Middle attacks in cryptography.
This attack generates a Universal Adversarial Perturbation (UAP) and injects the perturbation between the USB camera and the detection system.
These findings raise serious concerns for applications of deep learning models in safety-critical systems, such as autonomous driving.
arXiv Detail & Related papers (2022-08-15T13:21:41Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition [37.387265457439476]
We propose a novel learning framework, RobustSense, to defend against common adversarial attacks.
Our method works well on wireless human activity recognition and person identification systems.
arXiv Detail & Related papers (2022-04-04T15:06:03Z)
- Detect & Reject for Transferability of Black-box Adversarial Attacks Against Network Intrusion Detection Systems [0.0]
We investigate the transferability of adversarial network traffic against machine learning-based intrusion detection systems.
We examine Detect & Reject as a defensive mechanism to limit the effect of the transferability property of adversarial network traffic against machine learning-based intrusion detection systems.
arXiv Detail & Related papers (2021-12-22T17:54:54Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot Classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication [6.961253535504979]
We use a deep convolutional generative adversarial network (DC-GAN) to create adversarial samples.
We show that our deep learning model is surprisingly robust to such an attack scenario.
arXiv Detail & Related papers (2021-10-03T00:15:50Z)
- Adversarially Robust One-class Novelty Detection [83.1570537254877]
We show that existing novelty detectors are susceptible to adversarial examples.
We propose a defense strategy that manipulates the latent space of novelty detectors to improve the robustness against adversarial examples.
arXiv Detail & Related papers (2021-08-25T10:41:29Z)