Evaluating Deep Learning Models and Adversarial Attacks on
Accelerometer-Based Gesture Authentication
- URL: http://arxiv.org/abs/2110.14597v1
- Date: Sun, 3 Oct 2021 00:15:50 GMT
- Title: Evaluating Deep Learning Models and Adversarial Attacks on
Accelerometer-Based Gesture Authentication
- Authors: Elliu Huang and Fabio Di Troia and Mark Stamp
- Abstract summary: We use a deep convolutional generative adversarial network (DC-GAN) to create adversarial samples.
We show that our deep learning model is surprisingly robust to such an attack scenario.
- Score: 6.961253535504979
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gesture-based authentication has emerged as a non-intrusive, effective means
of authenticating users on mobile devices. Typically, such authentication
techniques have relied on classical machine learning techniques, but recently,
deep learning techniques have been applied to this problem. Although prior
research has shown that deep learning models are vulnerable to adversarial
attacks, relatively little research has been done in the adversarial domain for
behavioral biometrics. In this research, we collect tri-axial accelerometer
gesture data (TAGD) from 46 users and perform classification experiments with
both classical machine learning and deep learning models. Specifically, we
train and test support vector machines (SVM) and convolutional neural networks
(CNN). We then consider a realistic adversarial attack, where we assume the
attacker has access to real users' TAGD data, but not the authentication model.
We use a deep convolutional generative adversarial network (DC-GAN) to create
adversarial samples, and we show that our deep learning model is surprisingly
robust to such an attack scenario.
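The abstract names two concrete components: a CNN authenticator trained on TAGD sequences and a DC-GAN whose generator synthesizes fake gestures for the attack. Below is a minimal PyTorch sketch of both pieces; the sequence length, channel widths, and latent dimension are illustrative assumptions, not the authors' settings.
```python
# Sketch of the two components described in the abstract: a 1D CNN that
# classifies tri-axial accelerometer gesture data (TAGD), and a DC-GAN-style
# generator that synthesizes fake TAGD sequences. Hyperparameters are
# illustrative guesses, not the paper's configuration.
import torch
import torch.nn as nn

SEQ_LEN = 128   # assumed number of accelerometer samples per gesture
N_USERS = 46    # users in the paper's dataset
LATENT = 100    # assumed DC-GAN latent dimension

class GestureCNN(nn.Module):
    """Classifies a (batch, 3, SEQ_LEN) tri-axial gesture as one of N_USERS."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Linear(64 * (SEQ_LEN // 4), N_USERS)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class TAGDGenerator(nn.Module):
    """DC-GAN-style generator: latent vector -> fake (3, SEQ_LEN) gesture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(LATENT, 128, kernel_size=8),        # length 1 -> 8
            nn.BatchNorm1d(128), nn.ReLU(),
            nn.ConvTranspose1d(128, 64, kernel_size=4, stride=4),  # 8 -> 32
            nn.BatchNorm1d(64), nn.ReLU(),
            nn.ConvTranspose1d(64, 3, kernel_size=4, stride=4),    # 32 -> 128
            nn.Tanh(),                                             # accel scaled to [-1, 1]
        )

    def forward(self, z):                 # z: (batch, LATENT, 1)
        return self.net(z)

fake = TAGDGenerator()(torch.randn(4, LATENT, 1))   # (4, 3, 128) fake gestures
scores = GestureCNN()(fake)                         # (4, 46) user logits
```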
Related papers
- Undermining Image and Text Classification Algorithms Using Adversarial Attacks [0.0]
Our study addresses this gap by training various machine learning models and using GANs and SMOTE to generate additional data points aimed at attacking text classification models.
Our experiments reveal a significant vulnerability in classification models: we observe a 20% decrease in accuracy for the top-performing text classification models post-attack, along with a 30% decrease in facial recognition accuracy.
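As a concrete illustration of the SMOTE half of that pipeline, the sketch below interpolates new points between real feature vectors and their nearest neighbours; the feature space and the attacked classifier are assumptions outside the summary.
```python
# Hedged sketch of SMOTE-style interpolation: each synthetic point lies on
# the segment between a real feature vector and one of its k nearest
# neighbours. `X` is any (n_samples, n_features) array.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_points(X, n_new, k=5, rng=np.random.default_rng(0)):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                    # idx[:, 0] is the point itself
    base = rng.integers(0, len(X), size=n_new)   # random seed points
    nbr = idx[base, rng.integers(1, k + 1, size=n_new)]
    lam = rng.random((n_new, 1))                 # interpolation factors in [0, 1)
    return X[base] + lam * (X[nbr] - X[base])

X = np.random.rand(100, 32)
synthetic = smote_points(X, n_new=200)           # candidate attack inputs
```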
arXiv Detail & Related papers (2024-11-03T18:44:28Z)
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
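Per the title, the "previously unused features" are rotation-invariant local binary patterns. Below is a loose numpy sketch of rotation-invariant LBP codes (square 8-neighbour approximation); the time-aware deep model that consumes them is out of scope here.
```python
# Rotation-invariant LBP: each pixel's 8-neighbour comparison pattern is
# reduced to the minimum over all circular bit-rotations, so rotating the
# face patch leaves the code unchanged.
import numpy as np

def rotation_invariant_lbp(img):
    c = img[1:-1, 1:-1]                      # interior pixels
    nbrs = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
            img[1:-1, 2:],   img[2:, 2:],    img[2:, 1:-1],
            img[2:, 0:-2],   img[1:-1, 0:-2]]   # circular neighbour order
    bits = [(n >= c).astype(np.uint16) for n in nbrs]
    code = sum(b << i for i, b in enumerate(bits))
    # minimum over the 8 circular bit-rotations gives rotation invariance
    rots = [((code >> r) | (code << (8 - r))) & 0xFF for r in range(8)]
    return np.minimum.reduce(rots).astype(np.uint8)

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint16)
codes = rotation_invariant_lbp(frame)        # (62, 62) LBP code map
```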
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Adversarial Attacks and Dimensionality in Text Classifiers [3.4179091429029382]
Adversarial attacks on machine learning algorithms have been a key deterrent to the adoption of AI in many real-world use cases.
We study adversarial examples in the field of natural language processing, specifically text classification tasks.
arXiv Detail & Related papers (2024-04-03T11:49:43Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then finetunes or prunes the model with the introduced mask to forget them.
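The one-line summary leaves the mechanics open; one plausible, assumption-laden reading is sketched below: flag the highest-loss in-distribution training samples with a mask and finetune on the remainder so the memorized outliers are forgotten. This is an illustration, not the paper's exact procedure.
```python
# Loose sketch: mask the atypical (highest-loss) ID training samples, then
# finetune only on the typical ones. The loss-quantile criterion is an
# assumption, not the paper's actual masking rule.
import torch
import torch.nn.functional as F

def atypical_mask(model, X, y, quantile=0.95):
    """Boolean mask marking the highest-loss (atypical) training samples."""
    with torch.no_grad():
        losses = F.cross_entropy(model(X), y, reduction="none")
    return losses > torch.quantile(losses, quantile)

def forget_finetune(model, X, y, steps=100, lr=1e-4):
    mask = atypical_mask(model, X, y)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):              # finetune only on typical samples
        loss = F.cross_entropy(model(X[~mask]), y[~mask])
        opt.zero_grad(); loss.backward(); opt.step()
    return model
```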
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
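A hedged sketch of what such an MPN could look like: a small multi-head network that reads the perturbation x_adv - x and predicts one victim-model attribute per head. The attribute count, value count, and architecture are illustrative guesses, not the paper's design.
```python
# Model parsing network sketch: adversarial perturbation in, one softmax
# head per victim-model attribute out.
import torch
import torch.nn as nn

N_ATTRS = 4   # e.g. activation type, depth, kernel size, batch-norm flag (assumed)
N_VALS = 3    # assumed number of discrete choices per attribute

class ModelParsingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(32 * 16, 128), nn.ReLU(),
        )
        # one classification head per victim-model attribute
        self.heads = nn.ModuleList([nn.Linear(128, N_VALS) for _ in range(N_ATTRS)])

    def forward(self, perturbation):     # (batch, 3, H, W) = x_adv - x
        h = self.backbone(perturbation)
        return [head(h) for head in self.heads]

logits = ModelParsingNet()(torch.randn(8, 3, 32, 32))  # 4 heads of (8, 3) logits
```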
arXiv Detail & Related papers (2023-03-13T21:21:49Z)
- Metamorphic Testing-based Adversarial Attack to Fool Deepfake Detectors [2.0649235321315285]
There is a dire need for deepfake detection technology to help spot deepfake media.
Current deepfake detection models are able to achieve outstanding accuracy (>90%).
This study identifies makeup application as an adversarial attack that could fool deepfake detectors.
arXiv Detail & Related papers (2022-04-19T02:24:30Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas, but a deep learning model can be very difficult to debug when it fails.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific type of attack, called adversarial attacks.
A hacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the foundation of such algorithms.
We present two effective defense techniques, Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model.
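The denoising-autoencoder defense can be sketched as follows: train the DAE to reconstruct clean images from noisy ones, then route inputs through it before the classifier so small adversarial perturbations are washed out. The architecture and noise level here are assumptions, not the paper's configuration.
```python
# Denoising autoencoder as a pre-processing defense (sized for 1x28x28 input).
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def train_step(dae, opt, clean, sigma=0.1):
    noisy = (clean + sigma * torch.randn_like(clean)).clamp(0, 1)
    loss = nn.functional.mse_loss(dae(noisy), clean)  # reconstruct the clean image
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# at test time: logits = classifier(dae(x_adversarial))
```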
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
- Adversarial Attacks on Deep Learning Systems for User Identification based on Motion Sensors [24.182791316595576]
This study focuses on deep learning methods for explicit authentication based on motion sensor signals.
In this scenario, attackers could craft adversarial examples with the aim of gaining unauthorized access.
To our knowledge, this is the first study that aims to quantify the impact of adversarial attacks on machine learning models for user identification based on motion sensors.
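Such attacks are typically instantiated with gradient-based methods; below is a minimal FGSM sketch against a motion-sensor classifier (white-box for brevity, unlike the realistic black-box setting of the main paper above).
```python
# Fast Gradient Sign Method: one-step perturbation along the sign of the
# loss gradient with respect to the input signal.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.05):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# x_adv = fgsm(model, signals, user_labels)  # signals: (batch, 3, seq_len)
```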
arXiv Detail & Related papers (2020-09-02T14:35:05Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation-learning power of CNNs and learns better features for the fPAD task.
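One common way to realize anomaly-detection-based detection on top of CNN features is sketched below; the one-class SVM and the stand-in embeddings are assumptions, not necessarily the paper's choices.
```python
# Fit a one-class model on embeddings of bona-fide faces only; spoof
# attacks should fall outside the learned region.
import numpy as np
from sklearn.svm import OneClassSVM

def fit_pad_detector(bona_fide_embeddings):
    return OneClassSVM(kernel="rbf", nu=0.05).fit(bona_fide_embeddings)

emb = np.random.rand(500, 128)        # embeddings of genuine faces (stand-in)
detector = fit_pad_detector(emb)
is_genuine = detector.predict(np.random.rand(10, 128)) == 1  # -1 flags spoof
```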
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
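A hedged sketch of the label-free attack idea: perturb a sample so its embedding drifts away from its own augmented view, confusing its instance-level identity. Cosine similarity stands in for the full contrastive objective here, and the step schedule is an assumption.
```python
# Multi-step attack that minimizes similarity between a sample and its own
# augmented view. `encoder` must return (batch, d) embeddings.
import torch
import torch.nn.functional as F

def instance_confusion_attack(encoder, x, x_aug, eps=8 / 255, steps=5):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        sim = F.cosine_similarity(encoder(x + delta), encoder(x_aug)).mean()
        sim.backward()
        with torch.no_grad():
            delta -= (eps / steps) * delta.grad.sign()  # push away from own view
            delta.clamp_(-eps, eps)                     # keep perturbation small
        delta.grad.zero_()
    return (x + delta).detach()
```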
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.