A Framework for Behavioral Biometric Authentication using Deep Metric
Learning on Mobile Devices
- URL: http://arxiv.org/abs/2005.12901v2
- Date: Mon, 17 Aug 2020 16:39:08 GMT
- Title: A Framework for Behavioral Biometric Authentication using Deep Metric
Learning on Mobile Devices
- Authors: Cong Wang, Yanru Xiao, Xing Gao, Li Li, Jun Wang
- Abstract summary: We present a new framework to incorporate training on battery-powered mobile devices, so private data never leaves the device and training can be flexibly scheduled to adapt to behavioral patterns at runtime.
Experiments demonstrate authentication accuracy over 95% on three public datasets, a 15% gain over multi-class classification with less data, and robustness against brute-force and side-channel attacks with defense success rates of 99% and 90%, respectively.
Our results indicate that training consumes less energy than watching videos and slightly more energy than playing games.
- Score: 17.905483523678964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile authentication using behavioral biometrics has been an active area of
research. Existing research relies on building machine learning classifiers to
recognize an individual's unique patterns. However, these classifiers are not
powerful enough to learn sufficiently discriminative features. When implemented
on mobile devices, they face new challenges from behavioral dynamics, data
privacy, and side-channel leaks. To address these challenges, we present a new
framework to incorporate training on battery-powered mobile devices, so private
data never leaves the device and training can be flexibly scheduled to adapt to
behavioral patterns at runtime. We re-formulate the classification problem
into deep metric learning to improve the discriminative power and design an
effective countermeasure to thwart side-channel leaks by embedding a noise
signature in the sensing signals without sacrificing too much usability. The
experiments demonstrate authentication accuracy over 95% on three public
datasets, a 15% gain over multi-class classification with less data, and
robustness against brute-force and side-channel attacks with defense success
rates of 99% and 90%, respectively. We show the feasibility of training with mobile CPUs,
where training 100 epochs takes less than 10 mins and can be boosted 3-5 times
with feature transfer. Finally, we profile memory, energy and computational
overhead. Our results indicate that training consumes less energy than
watching videos and slightly more energy than playing games.
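The deep metric learning reformulation can be illustrated with a minimal sketch: instead of a per-user softmax classifier, an embedding network is trained with a triplet margin loss so that samples from the same user cluster together, and authentication becomes a distance check against enrolled templates. The function names, the margin of 0.2, and the threshold of 0.5 below are illustrative assumptions, not the paper's actual architecture or hyperparameters.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss: pull same-user embeddings together and push
    different-user embeddings at least `margin` farther apart."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to same-user sample
    d_neg = np.linalg.norm(anchor - negative)  # distance to impostor sample
    return max(d_pos - d_neg + margin, 0.0)

def authenticate(embedding, enrolled, threshold=0.5):
    """Accept a new sample if its embedding lies close to any enrolled
    template of the legitimate user."""
    dists = [np.linalg.norm(embedding - t) for t in enrolled]
    return min(dists) < threshold
```

For example, an embedding at distance 0.05 from an enrolled template is accepted, while one at distance 2.0 is rejected; the loss is zero once the impostor is already `margin` farther away than the genuine sample.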
Related papers
- Segue: Side-information Guided Generative Unlearnable Examples for
Facial Privacy Protection in Real World [64.4289385463226]
We propose Segue: Side-information guided generative unlearnable examples.
To improve transferability, we introduce side information such as true labels and pseudo labels.
It can resist JPEG compression, adversarial training, and some standard data augmentations.
arXiv Detail & Related papers (2023-10-24T06:22:37Z)
- Your Identity is Your Behavior -- Continuous User Authentication based
on Machine Learning and Touch Dynamics [0.0]
This research used a dataset of touch dynamics collected from 40 subjects using the LG V30+.
The participants played four mobile games, including Diep.io, Slither, and Minecraft, for 10 minutes per game.
The results showed that all three machine learning algorithms evaluated were able to effectively classify users based on their individual touch dynamics.
arXiv Detail & Related papers (2023-04-24T13:45:25Z)
- Peeling the Onion: Hierarchical Reduction of Data Redundancy for
Efficient Vision Transformer Training [110.79400526706081]
Vision transformers (ViTs) have recently obtained success in many applications, but their intensive computation and heavy memory usage limit their generalization.
Previous compression algorithms usually start from the pre-trained dense models and only focus on efficient inference.
This paper proposes an end-to-end efficient training framework from three sparse perspectives, dubbed Tri-Level E-ViT.
arXiv Detail & Related papers (2022-11-19T21:15:47Z)
- Non-Contrastive Learning-based Behavioural Biometrics for Smart IoT
Devices [0.9005431161010408]
Behavioural biometrics are being explored as a viable alternative to overcome the limitations of traditional authentication methods.
Recent behavioural biometric solutions use deep learning models that require large amounts of annotated training data.
We propose using SimSiam-based non-contrastive self-supervised learning to improve the label efficiency of behavioural biometric systems.
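SimSiam's non-contrastive objective can be sketched as a symmetrized negative cosine similarity between two augmented views, with a stop-gradient on one side. This is a minimal numpy illustration of the loss computation only; in the actual method, `z1`/`z2` come from an encoder-plus-projector (with gradients blocked) and `p1`/`p2` from a small predictor head.

```python
import numpy as np

def neg_cosine(p, z):
    """Negative cosine similarity between a prediction p and a target z.
    In SimSiam, z is detached (stop-gradient), so no gradient flows through it."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)
    return -float(np.dot(p, z))

def simsiam_loss(p1, z1, p2, z2):
    """Symmetrized SimSiam loss over two augmented views: each view's
    predictor output is matched to the other view's (detached) projection."""
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)
```

The loss reaches its minimum of -1 when predictions and targets are perfectly aligned, which is what lets training proceed without any negative pairs.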
arXiv Detail & Related papers (2022-10-24T05:56:32Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
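The "relaxed loss with a more achievable learning target" can be caricatured in one line: rather than driving the training loss toward zero (which widens the train/test loss gap that membership inference attacks exploit), penalize deviation from a target level `alpha`. This is a deliberately simplified illustration of the idea, not the paper's actual objective, and `alpha=0.5` is an arbitrary placeholder.

```python
import math

def cross_entropy(probs, label):
    """Standard cross-entropy for a single sample's predicted probabilities."""
    return -math.log(probs[label])

def relaxed_loss(probs, label, alpha=0.5):
    """Sketch of the relaxation: keep the training loss near a nonzero
    target alpha instead of minimizing it all the way to zero."""
    return abs(cross_entropy(probs, label) - alpha)
```

A perfectly confident prediction no longer gives zero loss, so the model is discouraged from memorizing training members.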
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- Smart App Attack: Hacking Deep Learning Models in Android Apps [16.663345577900813]
We introduce a grey-box adversarial attack framework to hack on-device models.
We evaluate the attack effectiveness and generality in terms of four different settings.
Among 53 apps adopting transfer learning, we find that 71.7% of them can be successfully attacked.
arXiv Detail & Related papers (2022-04-23T14:01:59Z)
- Self-supervised Transformer for Deepfake Detection [112.81127845409002]
Deepfake techniques deployed in real-world scenarios demand stronger generalization abilities from face forgery detectors.
Inspired by transfer learning, neural networks pre-trained on other large-scale face-related tasks may provide useful features for deepfake detection.
In this paper, we propose a self-supervised transformer based audio-visual contrastive learning method.
arXiv Detail & Related papers (2022-03-02T17:44:40Z)
- Exploring System Performance of Continual Learning for Mobile and
Embedded Sensing Applications [19.334890205028568]
We conduct the first comprehensive empirical study that quantifies the performance of three predominant continual learning schemes.
We implement an end-to-end continual learning framework on edge devices.
We demonstrate for the first time that it is feasible and practical to run continual learning on-device with a limited memory budget.
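A common way to bound memory for on-device continual learning is a fixed-size rehearsal buffer filled by reservoir sampling, so every sample seen so far has an equal chance of being retained. The sketch below is a generic illustration of that mechanism, not the specific schemes benchmarked in the paper.

```python
import random

class ReplayBuffer:
    """Fixed-capacity memory of past samples maintained via reservoir
    sampling, keeping a uniform sample of the stream seen so far."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.seen = 0          # total samples observed
        self.items = []        # retained samples, len <= capacity
        self.rng = random.Random(seed)

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)   # fill phase
        else:
            j = self.rng.randrange(self.seen)  # uniform over all seen
            if j < self.capacity:
                self.items[j] = item  # replace with probability capacity/seen
```

During continual training, each new batch is mixed with a few samples drawn from the buffer to mitigate catastrophic forgetting under a fixed memory budget.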
arXiv Detail & Related papers (2021-10-25T22:06:26Z)
- Federated Self-Training for Semi-Supervised Audio Recognition [0.23633885460047763]
In this work, we study the problem of semi-supervised learning of audio models via self-training.
We propose FedSTAR to exploit large-scale on-device unlabeled data to improve the generalization of audio recognition models.
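The two building blocks of federated self-training can be sketched simply: each device pseudo-labels its confident unlabeled samples, and the server averages the resulting client models (FedAvg-style). The function names and the 0.9 confidence threshold are illustrative assumptions, not FedSTAR's exact procedure.

```python
def pseudo_label(probs, threshold=0.9):
    """Keep an unlabeled sample only if the model's top predicted
    probability exceeds the confidence threshold; otherwise discard it."""
    conf = max(probs)
    return probs.index(conf) if conf >= threshold else None

def fedavg(client_weights):
    """Element-wise average of model parameters across clients,
    as in FedAvg with equal client weighting."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]
```

On-device data never leaves the client: only the locally updated parameters are sent to the server for averaging.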
arXiv Detail & Related papers (2021-07-14T17:40:10Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To keep the enlarged training set tractable, we propose applying a dataset distillation strategy to compress the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.