Exploring Transformers for Behavioural Biometrics: A Case Study in Gait
Recognition
- URL: http://arxiv.org/abs/2206.01441v1
- Date: Fri, 3 Jun 2022 08:08:40 GMT
- Title: Exploring Transformers for Behavioural Biometrics: A Case Study in Gait
Recognition
- Authors: Paula Delgado-Santos, Ruben Tolosana, Richard Guest, Farzin Deravi,
Ruben Vera-Rodriguez
- Abstract summary: This article intends to explore and propose novel gait biometric recognition systems based on Transformers.
Several state-of-the-art architectures (Vanilla, Informer, Autoformer, Block-Recurrent Transformer, and THAT) are considered in the experimental framework.
Experiments are carried out using the two popular public databases whuGAIT and OU-ISIR.
- Score: 0.7874708385247353
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Biometrics on mobile devices has attracted a lot of attention in recent years
as it is considered a user-friendly authentication method. This interest has
also been motivated by the success of Deep Learning (DL). Architectures based
on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs)
have been established to be convenient for the task, improving the performance
and robustness in comparison to traditional machine learning techniques.
However, some aspects must still be revisited and improved. To the best of our
knowledge, this is the first article that intends to explore and propose novel
gait biometric recognition systems based on Transformers, which currently
obtain state-of-the-art performance in many applications. Several
state-of-the-art architectures (Vanilla, Informer, Autoformer, Block-Recurrent
Transformer, and THAT) are considered in the experimental framework. In
addition, new configurations of the Transformers are proposed to further
increase the performance. Experiments are carried out using the two popular
public databases whuGAIT and OU-ISIR. The results demonstrate the strong
performance of the proposed Transformer, outperforming state-of-the-art CNN and
RNN architectures.
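The abstract above builds on the Transformer's self-attention mechanism applied to windows of inertial sensor data. The following is a minimal sketch of scaled dot-product self-attention over such a window, assuming illustrative shapes (8 time steps of 6-axis accelerometer/gyroscope features); it is not the paper's exact architecture.

```python
# Sketch of scaled dot-product self-attention over a gait sensor window.
# Shapes, projection matrices, and data are illustrative assumptions.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model) window of sensor features."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                               # context-mixed features

rng = np.random.default_rng(0)
seq_len, d_model = 8, 6   # e.g. 8 time steps of 6-axis accelerometer+gyroscope data
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (8, 6)
```

Each output row is a weighted mixture of all time steps, which is what lets a Transformer capture long-range dependencies in a gait cycle while processing every step in parallel.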
Related papers
- Volume-Preserving Transformers for Learning Time Series Data with Structure [0.0]
We develop a transformer-inspired neural network and use it to learn a dynamical system.
We change the activation function of the attention layer to imbue the transformer with structure-preserving properties.
This is shown to be of great advantage when applying the neural network to learning the trajectory of a rigid body.
arXiv Detail & Related papers (2023-12-18T13:09:55Z)
- A Survey of Techniques for Optimizing Transformer Inference [3.6258657276072253]
Recent years have seen a phenomenal rise in performance and applications of transformer neural networks.
Transformer-based networks such as ChatGPT have become part of everyday life.
Researchers have proposed techniques to optimize transformer inference at all levels of abstraction.
arXiv Detail & Related papers (2023-07-16T08:50:50Z)
- Exploring the Performance and Efficiency of Transformer Models for NLP on Mobile Devices [3.809702129519641]
New deep neural network (DNN) architectures and approaches are emerging every few years, driving the field's advancement.
Transformers are a relatively new model family that has achieved new levels of accuracy across AI tasks, but poses significant computational challenges.
This work takes steps toward bridging this gap by examining the current state of Transformers' on-device execution.
arXiv Detail & Related papers (2023-06-20T10:15:01Z)
- A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks [60.38369406877899]
Transformer is a deep neural network that employs a self-attention mechanism to comprehend the contextual relationships within sequential data.
Transformer models excel in handling long dependencies between input sequence elements and enable parallel processing.
Our survey encompasses the identification of the top five application domains for transformer-based models.
arXiv Detail & Related papers (2023-06-11T23:13:51Z)
- AFR-Net: Attention-Driven Fingerprint Recognition Network [47.87570819350573]
We improve initial studies on the use of vision transformers (ViT) for biometric recognition, including fingerprint recognition.
We propose a realignment strategy using local embeddings extracted from intermediate feature maps within the networks to refine the global embeddings in low certainty situations.
This strategy can be applied as a wrapper to any existing deep learning network (including attention-based, CNN-based, or both) to boost its performance.
arXiv Detail & Related papers (2022-11-25T05:10:39Z)
- Demystify Transformers & Convolutions in Modern Image Deep Networks [82.32018252867277]
This paper aims to identify the real gains of popular convolution and attention operators through a detailed study.
We find that the key difference among these feature transformation modules, such as attention or convolution, lies in their spatial feature aggregation approach.
Our experiments on various tasks and an analysis of inductive bias show a significant performance boost due to advanced network-level and block-level designs.
arXiv Detail & Related papers (2022-11-10T18:59:43Z)
- Mobile Keystroke Biometrics Using Transformers [11.562974686156196]
This paper focuses on improving keystroke biometric systems on the free-text scenario.
Deep learning methods have been proposed in the literature, outperforming traditional machine learning methods.
To the best of our knowledge, this is the first study that proposes keystroke biometric systems based on Transformers.
arXiv Detail & Related papers (2022-07-15T16:50:11Z)
- Vision Transformer with Convolutions Architecture Search [72.70461709267497]
We propose an architecture search method-Vision Transformer with Convolutions Architecture Search (VTCAS)
The high-performance backbone network searched by VTCAS introduces the desirable features of convolutional neural networks into the Transformer architecture.
It enhances the robustness of the neural network for object recognition, especially in the low illumination indoor scene.
arXiv Detail & Related papers (2022-03-20T02:59:51Z)
- Rich CNN-Transformer Feature Aggregation Networks for Super-Resolution [50.10987776141901]
Recent vision transformers along with self-attention have achieved promising results on various computer vision tasks.
We introduce an effective hybrid architecture for super-resolution (SR) tasks, which leverages local features from CNNs and long-range dependencies captured by transformers.
Our proposed method achieves state-of-the-art SR results on numerous benchmark datasets.
arXiv Detail & Related papers (2022-03-15T06:52:25Z)
- Improving Sample Efficiency of Value Based Models Using Attention and Vision Transformers [52.30336730712544]
We introduce a deep reinforcement learning architecture whose purpose is to increase sample efficiency without sacrificing performance.
We propose a visually attentive model that uses transformers to learn a self-attention mechanism on the feature maps of the state representation.
We demonstrate empirically that this architecture improves sample complexity for several Atari environments, while also achieving better performance in some of the games.
arXiv Detail & Related papers (2022-02-01T19:03:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.