Dictionary Attack on IMU-based Gait Authentication
- URL: http://arxiv.org/abs/2309.11766v2
- Date: Sun, 31 Dec 2023 06:42:28 GMT
- Title: Dictionary Attack on IMU-based Gait Authentication
- Authors: Rajesh Kumar and Can Isik and Chilukuri K. Mohan
- Abstract summary: We present a novel adversarial model for authentication systems that use gait patterns recorded by the inertial measurement unit (IMU) built into smartphones.
The attack idea is inspired by and named after the concept of a dictionary attack on knowledge-based (PIN or password) authentication systems.
- Score: 2.204806197679781
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel adversarial model for authentication systems that use gait patterns recorded by the inertial measurement unit (IMU) built into smartphones. The attack idea is inspired by and named after the concept of a dictionary attack on knowledge-based (PIN or password) authentication systems. In particular, this work investigates whether it is possible to build a dictionary of IMUGait patterns and use it to launch an attack, or to find an imitator who can actively reproduce IMUGait patterns that match the target's IMUGait pattern. Nine physically and demographically diverse individuals walked at various levels of four predefined controllable and adaptable gait factors (speed, step length, step width, and thigh-lift), producing 178 unique IMUGait patterns. Each pattern was used to attack a wide variety of user authentication models. A deeper analysis of error rates (before and after the attack) challenges the belief that authentication systems based on IMUGait patterns are the most difficult to spoof; further research is needed on adversarial models and associated countermeasures.
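To make the attack model concrete, here is a minimal sketch of the dictionary idea in Python, assuming a toy distance-based verifier over fixed-length IMU feature vectors; the names (`accepts`, `target_template`, `dictionary`) and the threshold are hypothetical, and the paper's actual pipeline trains per-user authentication models on real IMUGait data rather than thresholding a fixed distance.

```python
# Minimal sketch of a dictionary attack on a gait verifier.
# All data and names here are hypothetical stand-ins, not the paper's pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Each row is a fixed-length feature vector extracted from one IMUGait
# segment (e.g., accelerometer/gyroscope statistics over a gait cycle).
target_template = rng.normal(size=64)    # enrolled user's gait template
dictionary = rng.normal(size=(178, 64))  # 178 candidate IMUGait patterns

def accepts(probe: np.ndarray, template: np.ndarray, threshold: float = 0.5) -> bool:
    """Toy distance-based verifier: accept if the probe is close to the template."""
    distance = np.linalg.norm(probe - template) / np.sqrt(template.size)
    return distance < threshold

# Dictionary attack: try every pattern against the target's verifier;
# a single acceptance means the account can be spoofed.
hits = [i for i, probe in enumerate(dictionary) if accepts(probe, target_template)]
print(f"{len(hits)} of {len(dictionary)} dictionary patterns were accepted")
```

In the paper's setting, the 178 dictionary entries come from imitators walking at controlled levels of speed, step length, step width, and thigh-lift, and acceptance is decided by the target's trained authentication model; the rise in false accepts after the attack is what drives the error-rate analysis.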
Related papers
- MAYA: Addressing Inconsistencies in Generative Password Guessing through a Unified Benchmark [0.35998666903987897]
We introduce MAYA, a unified, customizable, plug-and-play password benchmarking framework.
MAYA provides a standardized approach for evaluating generative password-guessing models.
We find sequential models consistently outperform other generative architectures and traditional password-guessing tools.
arXiv Detail & Related papers (2025-04-23T12:16:59Z)
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks [63.269788236474234]
We propose to use model pairs on open-set classification tasks for detecting backdoors.
We show that this score can indicate the presence of a backdoor even when the paired models have different architectures.
This technique enables the detection of backdoors on models designed for open-set classification tasks, a setting little studied in the literature.
arXiv Detail & Related papers (2024-02-28T21:29:16Z)
- PRAT: PRofiling Adversarial aTtacks [52.693011665938734]
We introduce the novel problem of PRofiling Adversarial aTtacks (PRAT).
Given an adversarial example, the objective of PRAT is to identify the attack used to generate it.
We use the accompanying Adversarial Identification Dataset (AID) to devise a novel framework for the PRAT objective.
arXiv Detail & Related papers (2023-09-20T07:42:51Z)
- ChatGPT as an Attack Tool: Stealthy Textual Backdoor Attack via Blackbox Generative Model Trigger [11.622811907571132]
Textual backdoor attacks pose a practical threat to existing systems.
With cutting-edge generative models such as GPT-4 pushing rewriting to extraordinary levels, such attacks are becoming even harder to detect.
We conduct a comprehensive investigation of the role of black-box generative models as a backdoor attack tool, highlighting the importance of researching relative defense strategies.
arXiv Detail & Related papers (2023-04-27T19:26:25Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design an untargeted, poison-only backdoor attack based on the task's characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Detecting Backdoors in Deep Text Classifiers [43.36440869257781]
We present the first robust defence mechanism that generalizes to several backdoor attacks against text classification models.
Our technique is highly accurate at defending against state-of-the-art backdoor attacks, including data poisoning and weight poisoning.
arXiv Detail & Related papers (2022-10-11T07:48:03Z)
- Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems [80.3811072650087]
We show that it is possible to subtly modify claim-salient snippets in the evidence and generate diverse and claim-aligned evidence.
The attacks are also robust against post-hoc modifications of the claim.
These attacks can have harmful implications for inspectable and human-in-the-loop usage scenarios.
arXiv Detail & Related papers (2022-09-07T13:39:24Z)
- Backdoor Attacks on Crowd Counting [63.90533357815404]
Crowd counting is a regression task that estimates the number of people in a scene image.
In this paper, we investigate the vulnerability of deep learning based crowd counting models to backdoor attacks.
arXiv Detail & Related papers (2022-07-12T16:17:01Z)
- Evaluating Deep Learning Models and Adversarial Attacks on Accelerometer-Based Gesture Authentication [6.961253535504979]
We use a deep convolutional generative adversarial network (DC-GAN) to create adversarial samples.
We show that our deep learning model is surprisingly robust to such an attack scenario.
arXiv Detail & Related papers (2021-10-03T00:15:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.