Transformers for 1D Signals in Parkinson's Disease Detection from Gait
- URL: http://arxiv.org/abs/2204.00423v1
- Date: Fri, 1 Apr 2022 13:30:52 GMT
- Title: Transformers for 1D Signals in Parkinson's Disease Detection from Gait
- Authors: Duc Minh Dimitri Nguyen, Mehdi Miah, Guillaume-Alexandre Bilodeau,
Wassim Bouachir
- Abstract summary: This paper focuses on the detection of Parkinson's disease based on the analysis of a patient's gait.
We develop a novel method for this problem based on automatic feature extraction via Transformers.
Our model outperforms the current state-of-the-art algorithm with 95.2% accuracy in distinguishing a Parkinsonian patient from a healthy one.
- Score: 11.759564521969379
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on the detection of Parkinson's disease based on the
analysis of a patient's gait. The growing popularity and success of Transformer
networks in natural language processing and image recognition motivated us to
develop a novel method for this problem based on automatic feature
extraction via Transformers. The use of Transformers on 1D signals is not yet
widespread, but we show in this paper that they are effective at extracting
relevant features from 1D signals. Because Transformers require a lot of
memory, we decoupled temporal and spatial information to make the model
smaller. Our architecture uses temporal Transformers, dimension reduction
layers to reduce the dimensionality of the data, a spatial Transformer, two
fully connected layers and an output layer for the final prediction. Our model
outperforms the current
state-of-the-art algorithm with 95.2% accuracy in distinguishing a
Parkinsonian patient from a healthy one on the Physionet dataset. A key
learning from this work is that Transformers allow for greater stability in
results. The source code and pre-trained models are released at
https://github.com/DucMinhDimitriNguyen/Transformers-for-1D-signals-in-Parkinson-s-disease-detection-from-gait.git
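As a concrete illustration of the decoupled design, here is a minimal PyTorch sketch of the pipeline the abstract describes: per-sensor temporal Transformers, a dimension-reduction layer, a spatial Transformer across sensors, and a fully connected head. All sizes (18 VGRF sensors, 100-sample windows, model width 64) are illustrative assumptions, not the authors' published hyperparameters; see the released code above for the real implementation.

```python
import torch
import torch.nn as nn

class GaitTransformer(nn.Module):
    """Temporal Transformers per sensor, dimension reduction, a spatial
    Transformer across sensors, then two fully connected layers and an
    output layer, mirroring the decoupled design described above."""
    def __init__(self, n_sensors=18, d_model=64, d_reduced=16):
        super().__init__()
        self.embed = nn.Linear(1, d_model)           # lift scalar samples
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.reduce = nn.Linear(d_model, d_reduced)  # dimension reduction
        self.spatial = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_reduced, nhead=4, batch_first=True),
            num_layers=1)
        self.head = nn.Sequential(                   # two FC layers + output
            nn.Linear(n_sensors * d_reduced, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 2))                        # PD vs. healthy

    def forward(self, x):                 # x: (batch, n_sensors, seq_len)
        b, s, t = x.shape
        z = self.temporal(self.embed(x.reshape(b * s, t, 1)))
        z = self.reduce(z.mean(dim=1))    # pool over time, shrink channels
        z = self.spatial(z.reshape(b, s, -1))        # attend across sensors
        return self.head(z.flatten(1))

logits = GaitTransformer()(torch.randn(4, 18, 100))  # -> (4, 2)
```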
Related papers
- 1D-Convolutional transformer for Parkinson disease diagnosis from gait [7.213855322671065]
This paper presents an efficient deep neural network model for diagnosing Parkinson's disease from gait.
We introduce a hybrid ConvNet-Transformer architecture to accurately diagnose the disease by detecting the severity stage.
Our experimental results show that our approach is effective for detecting the different stages of Parkinson's disease from gait data.
arXiv Detail & Related papers (2023-11-06T15:17:17Z)
- HCT: Hybrid ConvNet-Transformer for Parkinson's disease detection and severity prediction from gait [7.213855322671065]
We propose a novel deep learning method to detect and stage Parkinson's disease (PD) from gait data.
Our hybrid architecture exploits the strengths of both Convolutional Neural Networks (ConvNets) and Transformers to accurately detect PD and determine the severity stage.
Our method achieves superior performance when compared to other state-of-the-art methods, with a PD detection accuracy of 97% and a severity staging accuracy of 87%.
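A rough sketch of what such a hybrid could look like: 1D convolutions extract local gait features, a Transformer encoder models long-range structure, and two heads emit detection and severity predictions. Channel counts, layer sizes, and the number of severity stages are assumptions for illustration, not the HCT paper's actual configuration.

```python
import torch
import torch.nn as nn

class HybridConvNetTransformer(nn.Module):
    """1D convolutions for local gait features; a Transformer encoder for
    long-range dependencies; separate detection and staging heads."""
    def __init__(self, n_channels=18, d_model=64, n_stages=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, stride=2, padding=2),
            nn.ReLU())
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
            num_layers=2)
        self.detect = nn.Linear(d_model, 2)        # PD vs. healthy
        self.stage = nn.Linear(d_model, n_stages)  # assumed severity stages

    def forward(self, x):                  # x: (batch, channels, time)
        z = self.conv(x).transpose(1, 2)   # -> (batch, time', d_model)
        z = self.encoder(z).mean(dim=1)    # average-pool over time
        return self.detect(z), self.stage(z)

det_logits, stage_logits = HybridConvNetTransformer()(torch.randn(2, 18, 200))
```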
arXiv Detail & Related papers (2023-10-26T00:43:15Z)
- The Lazy Neuron Phenomenon: On Emergence of Activation Sparsity in Transformers [59.87030906486969]
This paper studies the curious phenomenon for machine learning models with Transformer architectures that their activation maps are sparse.
We show that sparsity is a prevalent phenomenon that occurs for both natural language processing and vision tasks.
We discuss how sparsity immediately implies a way to significantly reduce the FLOP count and improve efficiency for Transformers.
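The phenomenon is easy to probe: a forward hook on the first feed-forward projection of a Transformer layer can count how many entries the ReLU that follows it zeroes out. A minimal PyTorch sketch, using an untrained layer and random tokens:

```python
import torch
import torch.nn as nn

# Probe activation sparsity in a Transformer feed-forward block: hook the
# first FFN projection (linear1) and count how many entries the default
# ReLU activation that follows it would zero out.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4,
                                   dim_feedforward=256, batch_first=True)
stats = {}
layer.linear1.register_forward_hook(
    lambda mod, inp, out: stats.update(
        zero_frac=(torch.relu(out) == 0).float().mean().item()))

layer(torch.randn(8, 32, 64))  # a batch of 8 sequences of 32 random tokens
print(f"zeroed FFN activations: {stats['zero_frac']:.1%}")
```

On an untrained layer this reads near chance, roughly 50%; the paper's observation is that trained Transformers become far sparser than that.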
arXiv Detail & Related papers (2022-10-12T15:25:19Z)
- Pix4Point: Image Pretrained Standard Transformers for 3D Point Cloud Understanding [62.502694656615496]
We present Progressive Point Patch Embedding and a new point cloud Transformer model, PViT.
PViT shares the backbone of a standard Transformer but is shown to be less data-hungry, enabling it to achieve performance comparable to the state-of-the-art.
We formulate a simple yet effective pipeline dubbed "Pix4Point" that allows harnessing Transformers pretrained in the image domain to enhance downstream point cloud understanding.
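A minimal sketch of the general recipe, under the assumption that a PointNet-style shared MLP with max-pooling embeds fixed-size point groups into tokens for a standard Transformer encoder. In Pix4Point the encoder's weights would come from image pretraining; the naive chunk grouping here merely stands in for the paper's progressive embedding.

```python
import torch
import torch.nn as nn

class PointPatchEmbed(nn.Module):
    """Embed fixed-size groups of points into tokens with a shared MLP and
    max-pooling (PointNet-style) so a standard Transformer can consume them."""
    def __init__(self, group_size=32, d_model=192):
        super().__init__()
        self.group_size = group_size
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, d_model))

    def forward(self, pts):      # pts: (batch, n_points, 3) xyz coordinates
        b, n, _ = pts.shape      # naive chunking stands in for real grouping
        groups = pts.reshape(b, n // self.group_size, self.group_size, 3)
        return self.mlp(groups).max(dim=2).values  # (batch, groups, d_model)

encoder = nn.TransformerEncoder(  # stand-in for an image-pretrained backbone
    nn.TransformerEncoderLayer(192, nhead=3, batch_first=True), num_layers=4)
tokens = encoder(PointPatchEmbed()(torch.randn(2, 1024, 3)))  # (2, 32, 192)
```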
arXiv Detail & Related papers (2022-08-25T17:59:29Z)
- Focused Decoding Enables 3D Anatomical Detection by Transformers [64.36530874341666]
We propose a novel Detection Transformer for 3D anatomical structure detection, dubbed Focused Decoder.
Focused Decoder leverages information from an anatomical region atlas to simultaneously deploy query anchors and restrict the cross-attention's field of view.
We evaluate our proposed approach on two publicly available CT datasets and demonstrate that Focused Decoder not only provides strong detection results, alleviating the need for vast amounts of annotated data, but also exhibits exceptional and highly intuitive explainability via attention weights.
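The masking mechanism can be sketched with a stock attention module: a boolean mask, which in Focused Decoder would be derived from the anatomical atlas, restricts each query anchor to its region of the feature map. The region boundaries below are arbitrary placeholders, not atlas-derived.

```python
import torch
import torch.nn as nn

# Cross-attention with a restricted field of view: a boolean mask (True =
# blocked) keeps each query anchor from attending outside its region.
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)

queries = torch.randn(1, 8, 64)     # 8 query anchors deployed in the volume
features = torch.randn(1, 100, 64)  # flattened feature-map tokens
roi_mask = torch.ones(8, 100, dtype=torch.bool)
roi_mask[:, :25] = False            # placeholder: anchors see tokens 0..24

out, weights = attn(queries, features, features, attn_mask=roi_mask)
print(out.shape, weights.shape)     # (1, 8, 64), (1, 8, 100)
```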
arXiv Detail & Related papers (2022-07-21T22:17:21Z)
- The Fully Convolutional Transformer for Medical Image Segmentation [2.87898780282409]
We propose a novel Transformer model capable of segmenting medical images of varying modalities.
The Fully Convolutional Transformer (FCT) is the first fully convolutional Transformer model in medical imaging literature.
arXiv Detail & Related papers (2022-06-01T15:22:41Z)
- Finetuning Pretrained Transformers into Variational Autoencoders [0.0]
Text variational autoencoders (VAEs) are notorious for posterior collapse.
Transformers have seen limited adoption as components of text VAEs.
We present a simple two-phase training scheme to convert a sequence-to-sequence Transformer into a VAE with just finetuning.
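The core of such a conversion is a bottleneck between the pretrained encoder and decoder: pool encoder states into a Gaussian posterior, sample with the reparameterization trick, and expand the latent back into decoder memory. A minimal sketch follows; the dimensions and mean-pooling are assumptions, and the paper's two-phase training schedule is not shown.

```python
import torch
import torch.nn as nn

class VAEBridge(nn.Module):
    """Bottleneck between a pretrained encoder and decoder: pool encoder
    states into a Gaussian posterior, sample via the reparameterization
    trick, and expand the latent back into decoder memory."""
    def __init__(self, d_model=512, d_latent=32):
        super().__init__()
        self.to_mu = nn.Linear(d_model, d_latent)
        self.to_logvar = nn.Linear(d_model, d_latent)
        self.expand = nn.Linear(d_latent, d_model)

    def forward(self, enc_states):               # (batch, src_len, d_model)
        pooled = enc_states.mean(dim=1)          # mean-pool source states
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparam trick
        kl = 0.5 * (logvar.exp() + mu**2 - 1 - logvar).sum(-1).mean()
        return self.expand(z).unsqueeze(1), kl   # one-token decoder memory

memory, kl = VAEBridge()(torch.randn(4, 16, 512))
```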
arXiv Detail & Related papers (2021-08-05T08:27:26Z)
- Vision Transformer with Progressive Sampling [73.60630716500154]
We propose an iterative and progressive sampling strategy to locate discriminative regions.
When trained from scratch on ImageNet, PS-ViT achieves 3.8% higher top-1 accuracy than the vanilla ViT.
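The sampling loop itself is compact: starting from a regular grid, each iteration gathers features at the current points with grid_sample and predicts offsets that move the points toward informative regions. The sketch below captures only this idea; the step size, iteration count, and offset head are illustrative, not PS-ViT's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Progressive sampling in miniature: start from a regular grid of sampling
# points, gather features there, and let each iteration predict offsets
# that move the points toward more informative regions.
feat = torch.randn(1, 64, 28, 28)   # feature map from a convolutional stem
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 7),
                        torch.linspace(-1, 1, 7), indexing="ij")
grid = torch.stack((xs, ys), dim=-1).reshape(1, 49, 1, 2)  # coords in [-1, 1]
offset_head = nn.Linear(64, 2)      # predicts (dx, dy) per sampling point

for _ in range(4):                  # four progressive sampling iterations
    tokens = F.grid_sample(feat, grid, align_corners=True)  # (1, 64, 49, 1)
    tokens = tokens.squeeze(-1).transpose(1, 2)             # (1, 49, 64)
    grid = grid + 0.1 * torch.tanh(offset_head(tokens)).unsqueeze(2)
# `tokens` would now be fed to a standard Transformer encoder.
```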
arXiv Detail & Related papers (2021-08-03T18:04:31Z)
- Transformer-Based Deep Image Matching for Generalizable Person Re-identification [114.56752624945142]
We investigate the possibility of applying Transformers for image matching and metric learning given pairs of images.
We find that the Vision Transformer (ViT) and the vanilla Transformer with decoders are not adequate for image matching due to their lack of image-to-image attention.
We propose a new simplified decoder, which drops the full attention implementation with the softmax weighting, keeping only the query-key similarity.
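One plausible reading of that simplification, sketched below: compute scaled query-key dot products and use them directly to weight the values, with no softmax normalization. This is an assumption-laden toy, not the paper's exact decoder.

```python
import torch

def simplified_attention(q, k, v):
    """Match queries to keys by scaled dot-product similarity and use the
    raw similarities, not a softmax distribution, to weight the values."""
    sim = (q @ k.transpose(-2, -1)) * q.shape[-1] ** -0.5
    return sim @ v

q, k, v = (torch.randn(2, 8, 64) for _ in range(3))
print(simplified_attention(q, k, v).shape)  # torch.Size([2, 8, 64])
```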
arXiv Detail & Related papers (2021-05-30T05:38:33Z)
- Spatiotemporal Transformer for Video-based Person Re-identification [102.58619642363958]
We show that, despite the strong learning ability, the vanilla Transformer suffers from an increased risk of over-fitting.
We propose a novel pipeline where the model is pre-trained on a set of synthesized video data and then transferred to the downstream domains.
The derived algorithm achieves significant accuracy gain on three popular video-based person re-identification benchmarks.
arXiv Detail & Related papers (2021-03-30T16:19:27Z)