Mobile Behavioral Biometrics for Passive Authentication
- URL: http://arxiv.org/abs/2203.07300v1
- Date: Mon, 14 Mar 2022 17:05:59 GMT
- Title: Mobile Behavioral Biometrics for Passive Authentication
- Authors: Giuseppe Stragapede, Ruben Vera-Rodriguez, Ruben Tolosana, Aythami
Morales, Alejandro Acien, Gael Le Lan
- Abstract summary: This work carries out a comparative analysis of unimodal and multimodal behavioral biometric traits.
Experiments are performed over HuMIdb, one of the largest and most comprehensive freely available mobile user interaction databases.
In our experiments, the most discriminative background sensor is the magnetometer, whereas among touch tasks the best results are achieved with keystroke.
- Score: 65.94403066225384
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Current mobile user authentication systems based on PIN codes, fingerprint,
and face recognition have several shortcomings. Such limitations have been
addressed in the literature by exploring the feasibility of passive
authentication on mobile devices through behavioral biometrics. In this line of
research, this work carries out a comparative analysis of unimodal and
multimodal behavioral biometric traits acquired while the subjects perform
different activities on the phone such as typing, scrolling, drawing a number,
and tapping on the screen, considering the touchscreen and the simultaneous
background sensor data (accelerometer, gravity sensor, gyroscope, linear
accelerometer, and magnetometer). Our experiments are performed over HuMIdb,
one of the largest and most comprehensive freely available mobile user
interaction databases to date. A separate Recurrent Neural Network (RNN) with
triplet loss is implemented for each single modality. Then, the weighted fusion
of the different modalities is carried out at score level. In our experiments,
the most discriminative background sensor is the magnetometer, whereas among
touch tasks the best results are achieved with keystroke in a fixed-text
scenario. In all cases, the fusion of modalities is very beneficial, leading to
Equal Error Rates (EER) ranging from 4% to 9% depending on the modality
combination in a 3-second interval.
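As a rough illustration of the pipeline the abstract describes (one recurrent embedding network per modality trained with triplet loss, weighted fusion of per-modality comparison scores, and EER as the metric), a minimal Python/PyTorch sketch is given below. It is not the authors' code: the GRU-flavored encoder, embedding size, fusion weights, and score convention (higher means more likely the same user) are all illustrative assumptions.

```python
# Minimal sketch only; not the authors' implementation.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEmbedder(nn.Module):
    """One recurrent encoder per modality: maps a (time, features) sequence
    to an L2-normalized fixed-size embedding."""

    def __init__(self, n_features: int, emb_dim: int = 64):
        super().__init__()
        self.rnn = nn.GRU(n_features, emb_dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features); use the last hidden state as embedding.
        _, h = self.rnn(x)
        return F.normalize(h[-1], dim=1)


def triplet_step(model, optimizer, anchor, positive, negative):
    """One training update with triplet loss: same-user sequences are pulled
    together, different-user sequences pushed apart."""
    loss = nn.TripletMarginLoss(margin=1.0)(
        model(anchor), model(positive), model(negative)
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def fuse_scores(scores: dict, weights: dict) -> np.ndarray:
    """Weighted score-level fusion of per-modality comparison scores."""
    total = sum(weights.values())
    return sum(weights[m] * np.asarray(s) for m, s in scores.items()) / total


def equal_error_rate(genuine: np.ndarray, impostor: np.ndarray) -> float:
    """EER: operating point where false-accept and false-reject rates meet."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = int(np.argmin(np.abs(far - frr)))
    return float((far[i] + frr[i]) / 2)
```

Under these assumptions, a per-modality EER can be computed from each encoder's similarity scores, and the multimodal EER from `fuse_scores` applied to genuine and impostor comparisons.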
Related papers
- Your Identity is Your Behavior -- Continuous User Authentication based
on Machine Learning and Touch Dynamics [0.0]
This research used a dataset of touch dynamics collected from 40 subjects using the LG V30+.
The participants each played four mobile games, including Diep.io, Slither, and Minecraft, for 10 minutes per game.
The results showed that all three machine learning algorithms evaluated were able to effectively classify users based on their individual touch dynamics.
arXiv Detail & Related papers (2023-04-24T13:45:25Z)
- Multi-Channel Time-Series Person and Soft-Biometric Identification [65.83256210066787]
This work investigates person and soft-biometrics identification from recordings of humans performing different activities using deep architectures.
We evaluate the method on four datasets of multi-channel time-series human activity recognition (HAR)
Soft-biometric-based attribute representation shows promising results and emphasizes the necessity of larger datasets.
arXiv Detail & Related papers (2023-04-04T07:24:51Z)
- BehavePassDB: Benchmarking Mobile Behavioral Biometrics [12.691633481373927]
We present a new database, BehavePassDB, structured into separate acquisition sessions and tasks.
We propose and evaluate a system based on a Long Short-Term Memory (LSTM) architecture with triplet loss and modality fusion at score level.
arXiv Detail & Related papers (2022-06-06T11:21:15Z)
- A Wireless-Vision Dataset for Privacy Preserving Human Activity Recognition [53.41825941088989]
A new WiFi-based and video-based neural network (WiNN) is proposed to improve the robustness of activity recognition.
Our results show that the WiVi dataset satisfies the primary demand, and all three branches of the proposed pipeline maintain more than 80% activity recognition accuracy.
arXiv Detail & Related papers (2022-05-24T10:49:11Z)
- UMSNet: An Universal Multi-sensor Network for Human Activity Recognition [10.952666953066542]
This paper proposes a universal multi-sensor network (UMSNet) for human activity recognition.
In particular, we propose a new lightweight sensor residual block (called LSR block), which improves the performance.
Our framework has a clear structure and can be directly applied to various types of multi-modal Time Series Classification tasks.
arXiv Detail & Related papers (2022-05-24T03:29:54Z)
- Hold On and Swipe: A Touch-Movement Based Continuous Authentication Schema based on Machine Learning [0.0]
This study aims to contribute to this line of research by evaluating the performance of a multimodal behavioral-biometric-based user authentication scheme.
It uses a fusion of two popular publicly available datasets: the Hand Movement, Orientation, and Grasp dataset and the BioIdent dataset.
Model performance is evaluated using three common machine learning algorithms, Random Forest, Support Vector Machine, and K-Nearest Neighbor, reaching accuracy rates as high as 82%.
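As a hedged illustration of this kind of evaluation, the sketch below benchmarks Random Forest, SVM, and K-Nearest Neighbor classifiers with cross-validation; the feature matrix and all parameters are synthetic placeholders, not the study's data or settings.

```python
# Illustrative benchmark sketch; data and hyperparameters are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: per-sample behavioral features, y: user labels (synthetic here).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
y = rng.integers(0, 10, size=400)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "kNN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}

for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean accuracy {acc:.3f}")
```

Feature scaling is included for the SVM and kNN pipelines because both are distance- or margin-based and sensitive to feature ranges; the tree ensemble does not need it.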
arXiv Detail & Related papers (2022-01-21T06:51:46Z)
- TapNet: The Design, Training, Implementation, and Applications of a Multi-Task Learning CNN for Off-Screen Mobile Input [75.05709030478073]
We present the design, training, implementation and applications of TapNet, a multi-task network that detects tapping on the smartphone.
TapNet can jointly learn from data across devices and simultaneously recognize multiple tap properties, including tap direction and tap location.
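A minimal multi-task sketch in the spirit of that description is given below; the shared 1-D convolutional trunk, the four-way direction head, and the (x, y) location head are assumptions for illustration, not the TapNet architecture.

```python
# Illustrative multi-task sketch; not the TapNet implementation.
import torch
import torch.nn as nn


class MultiTaskTapModel(nn.Module):
    def __init__(self, n_channels: int = 6, n_directions: int = 4):
        super().__init__()
        # Shared trunk over a short window of motion-sensor channels.
        self.trunk = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        self.direction_head = nn.Linear(64, n_directions)  # tap direction (classes)
        self.location_head = nn.Linear(64, 2)               # tap location (x, y)

    def forward(self, x: torch.Tensor):
        # x: (batch, channels, time)
        z = self.trunk(x)
        return self.direction_head(z), self.location_head(z)


# Joint training sums the task losses, e.g. cross-entropy for direction
# plus mean-squared error for location.
direction_logits, location = MultiTaskTapModel()(torch.randn(8, 6, 100))
```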
arXiv Detail & Related papers (2021-02-18T00:45:41Z)
- Moving Object Classification with a Sub-6 GHz Massive MIMO Array using Real Data [64.48836187884325]
Classification between different activities in an indoor environment using wireless signals is an emerging technology for various applications.
In this paper, we analyze classification of moving objects by employing machine learning on real data from a massive multi-input-multi-output (MIMO) system in an indoor environment.
arXiv Detail & Related papers (2021-02-09T15:48:35Z)
- SensiX: A Platform for Collaborative Machine Learning on the Edge [69.1412199244903]
We present SensiX, a personal edge platform that stays between sensor data and sensing models.
We demonstrate its efficacy in developing motion and audio-based multi-device sensing systems.
Our evaluation shows that SensiX offers a 7-13% increase in overall accuracy and up to 30% increase across different environment dynamics at the expense of 3mW power overhead.
arXiv Detail & Related papers (2020-12-04T23:06:56Z)
- End-to-end User Recognition using Touchscreen Biometrics [11.394909061094463]
The goal was to create an end-to-end system that can transparently identify users using raw data from mobile devices.
In the proposed system, data from the touchscreen goes directly into a deep neural network, which decides on the identity of the user.
arXiv Detail & Related papers (2020-06-09T16:38:09Z)
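As a rough, assumed sketch of such an end-to-end setup (not the cited system), raw touch samples can be fed straight into a small recurrent classifier over user identities:

```python
# Illustrative sketch; not the cited end-to-end system.
import torch
import torch.nn as nn


class TouchIdentifier(nn.Module):
    """Raw touchscreen event sequences (e.g. x, y, pressure, time delta)
    mapped directly to per-user logits, with no hand-crafted features."""

    def __init__(self, n_users: int, n_features: int = 4, hidden: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, n_users)

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        # events: (batch, time, n_features)
        _, (h, _) = self.rnn(events)
        return self.classifier(h[-1])


logits = TouchIdentifier(n_users=50)(torch.randn(8, 120, 4))  # shape (8, 50)
```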