HCT: Hybrid Convnet-Transformer for Parkinson's disease detection and severity prediction from gait
- URL: http://arxiv.org/abs/2310.17078v1
- Date: Thu, 26 Oct 2023 00:43:15 GMT
- Title: HCT: Hybrid Convnet-Transformer for Parkinson's disease detection and severity prediction from gait
- Authors: Safwen Naimi, Wassim Bouachir, Guillaume-Alexandre Bilodeau
- Abstract summary: We propose a novel deep learning method to detect and stage Parkinson's disease (PD) from gait data.
Our hybrid architecture exploits the strengths of both Convolutional Neural Networks (ConvNets) and Transformers to accurately detect PD and determine the severity stage.
Our method achieves superior performance when compared to other state-of-the-art methods, with a PD detection accuracy of 97% and a severity staging accuracy of 87%.
- Score: 7.213855322671065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel deep learning method based on a new Hybrid
ConvNet-Transformer architecture to detect and stage Parkinson's disease (PD)
from gait data. We adopt a two-step approach by dividing the problem into two
sub-problems. Our Hybrid ConvNet-Transformer model first distinguishes healthy
versus parkinsonian patients. If the patient is parkinsonian, a multi-class
Hybrid ConvNet-Transformer model determines the Hoehn and Yahr (H&Y) score to
assess the PD severity stage. Our hybrid architecture exploits the strengths of
both Convolutional Neural Networks (ConvNets) and Transformers to accurately
detect PD and determine the severity stage. In particular, we take advantage of
ConvNets to capture local patterns and correlations in the data, while we
exploit Transformers for handling long-term dependencies in the input signal.
We show that our hybrid method achieves superior performance when compared to
other state-of-the-art methods, with a PD detection accuracy of 97% and a
severity staging accuracy of 87%. Our source code is available at:
https://github.com/SafwenNaimi
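The abstract describes the architecture only at a high level. Below is a minimal sketch of what a hybrid ConvNet-Transformer for 1D gait signals, used in the two-step fashion described above, might look like. It is illustrative only: the layer sizes, the 18 input sensor channels, the sequence length, the class counts, and the label ordering are assumptions, not the authors' released implementation.

import torch
import torch.nn as nn


class HybridConvTransformer(nn.Module):
    def __init__(self, in_channels=18, num_classes=2, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        # ConvNet front end: captures local patterns and correlations in the signal.
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, d_model, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Transformer encoder: models long-term dependencies across the conv tokens.
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                    # x: (batch, channels, time)
        feats = self.conv(x)                 # (batch, d_model, time // 2)
        feats = feats.permute(0, 2, 1)       # treat each time step as a token
        feats = self.encoder(feats)
        return self.head(feats.mean(dim=1))  # global average pooling + linear classifier


# Two-step inference, mirroring the decomposition into two sub-problems:
# a binary detector (healthy vs parkinsonian), then a multi-class H&Y stager.
detector = HybridConvTransformer(num_classes=2)
stager = HybridConvTransformer(num_classes=3)   # number of H&Y stages is an assumption

x = torch.randn(1, 18, 1024)                    # one dummy gait recording
if detector(x).argmax(dim=1).item() == 1:       # class 1 = parkinsonian (assumed label order)
    severity_stage = stager(x).argmax(dim=1).item()

The convolutional stack plays the role of a local feature extractor, while the Transformer encoder attends across the whole sequence of conv tokens, mirroring the division of labour between ConvNets and Transformers described in the abstract.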
Related papers
- Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers [14.756988176469365]
An effective approach to reduce computational requirements and increase efficiency is to prune unnecessary components of Deep Neural Networks.
Previous work has shown that attribution methods from the field of eXplainable AI serve as effective means to extract and prune the least relevant network components in a few-shot fashion.
arXiv Detail & Related papers (2024-08-22T17:35:18Z) - Prototype Learning Guided Hybrid Network for Breast Tumor Segmentation in DCE-MRI [58.809276442508256]
We propose a hybrid network via the combination of a convolutional neural network (CNN) and transformer layers.
The experimental results on private and public DCE-MRI datasets demonstrate that the proposed hybrid network achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-08-11T15:46:00Z) - 1D-Convolutional transformer for Parkinson disease diagnosis from gait [7.213855322671065]
This paper presents an efficient deep neural network model for diagnosing Parkinson's disease from gait.
We introduce a hybrid ConvNet-Transformer architecture to accurately diagnose the disease by detecting the severity stage.
Our experimental results show that our approach is effective for detecting the different stages of Parkinson's disease from gait data.
arXiv Detail & Related papers (2023-11-06T15:17:17Z) - Optimizing Vision Transformers for Medical Image Segmentation and
Few-Shot Domain Adaptation [11.690799827071606]
We propose Convolutional Swin-Unet (CS-Unet) transformer blocks and optimise their settings with relation to patch embedding, projection, the feed-forward network, up sampling and skip connections.
CS-Unet can be trained from scratch and inherits the superiority of convolutions in each feature processing phase.
Experiments show that CS-Unet without pre-training surpasses other state-of-the-art counterparts by large margins on two medical CT and MRI datasets with fewer parameters.
arXiv Detail & Related papers (2022-10-14T19:18:52Z) - Characterization of anomalous diffusion through convolutional
transformers [0.8984888893275713]
We propose a new transformer based neural network architecture for the characterization of anomalous diffusion.
Our new architecture, the Convolutional Transformer (ConvTransformer), uses a bi-layered convolutional neural network to extract features from our diffusive trajectories.
We show that the ConvTransformer is able to outperform the previous state of the art at determining the underlying diffusive regime in short trajectories.
arXiv Detail & Related papers (2022-10-10T18:53:13Z) - Transformers for 1D Signals in Parkinson's Disease Detection from Gait [11.759564521969379]
This paper focuses on the detection of Parkinson's disease based on the analysis of a patient's gait.
We develop a novel method for this problem based on automatic feature extraction via Transformers.
Our model outperforms the current state-of-the-art algorithm with 95.2% accuracy in distinguishing a Parkinsonian patient from a healthy one.
arXiv Detail & Related papers (2022-04-01T13:30:52Z) - nnFormer: Interleaved Transformer for Volumetric Segmentation [50.10441845967601]
We introduce nnFormer, a powerful segmentation model with an interleaved architecture based on empirical combination of self-attention and convolution.
nnFormer achieves tremendous improvements over previous transformer-based methods on two commonly used datasets Synapse and ACDC.
arXiv Detail & Related papers (2021-09-07T17:08:24Z) - Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation [63.46694853953092]
Swin-Unet is an Unet-like pure Transformer for medical image segmentation.
The tokenized image patches are fed into the Transformer-based U-shaped Encoder-Decoder architecture.
arXiv Detail & Related papers (2021-05-12T09:30:26Z) - End-to-End Trainable Multi-Instance Pose Estimation with Transformers [68.93512627479197]
We propose a new end-to-end trainable approach for multi-instance pose estimation by combining a convolutional neural network with a transformer.
Inspired by recent work on end-to-end trainable object detection with transformers, we use a transformer encoder-decoder architecture together with a bipartite matching scheme to directly regress the pose of all individuals in a given image.
Our model, called POse Estimation Transformer (POET), is trained using a novel set-based global loss that consists of a keypoint loss, a keypoint visibility loss, a center loss and a class loss.
arXiv Detail & Related papers (2021-03-22T18:19:22Z) - Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction
without Convolutions [103.03973037619532]
This work investigates a simple backbone network useful for many dense prediction tasks without convolutions.
Unlike the recently-proposed Transformer model (e.g., ViT) that is specially designed for image classification, we propose the Pyramid Vision Transformer (PVT).
PVT can be trained on dense partitions of the image to achieve the high output resolution that is important for dense prediction.
arXiv Detail & Related papers (2021-02-24T08:33:55Z) - Conformer: Convolution-augmented Transformer for Speech Recognition [60.119604551507805]
Recently, Transformer and Convolutional Neural Network (CNN) based models have shown promising results in Automatic Speech Recognition (ASR).
We propose the convolution-augmented transformer for speech recognition, named Conformer.
On the widely used LibriSpeech benchmark, our model achieves a WER of 2.1%/4.3% without using a language model and 1.9%/3.9% with an external language model on test/test-other.
arXiv Detail & Related papers (2020-05-16T20:56:25Z)