Transformer Encoder with Multiscale Deep Learning for Pain
Classification Using Physiological Signals
- URL: http://arxiv.org/abs/2303.06845v1
- Date: Mon, 13 Mar 2023 04:21:33 GMT
- Title: Transformer Encoder with Multiscale Deep Learning for Pain
Classification Using Physiological Signals
- Authors: Zhenyuan Lu, Burcu Ozek, Sagar Kamarthi
- Abstract summary: Pain is a subjective sensation-driven experience.
Traditional techniques for measuring pain intensity are susceptible to bias and unreliable in some instances.
We develop PainAttnNet, a novel transformer-encoder deep-learning framework for classifying pain intensities with physiological signals as input.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pain is a serious worldwide health problem that affects a vast proportion of
the population. For efficient pain management and treatment, accurate
classification and evaluation of pain severity are necessary. However, this can
be challenging as pain is a subjective sensation-driven experience. Traditional
techniques for measuring pain intensity, e.g. self-report scales, are
susceptible to bias and unreliable in some instances. Consequently, there is a
need for more objective and automatic pain intensity assessment strategies. In
this research, we develop PainAttnNet (PAN), a novel transformer-encoder
deep-learning framework for classifying pain intensities with physiological
signals as input. The proposed approach comprises three feature
extraction architectures: multiscale convolutional networks (MSCN), a
squeeze-and-excitation residual network (SEResNet), and a transformer encoder
block. On the basis of pain stimuli, MSCN extracts short- and long-window
information as well as sequential features. SEResNet highlights relevant
extracted features by mapping the interdependencies among features. The third
architecture employs a transformer encoder consisting of three temporal
convolutional networks (TCN) with three multi-head attention (MHA) layers to
extract temporal dependencies from the features. Using the publicly available
BioVid pain dataset, we evaluate the proposed PainAttnNet model and demonstrate
that it outperforms state-of-the-art models. These results confirm
that our approach can be utilized for automated classification of pain
intensity using physiological signals to improve pain management and treatment.
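The three components named in the abstract can be illustrated with a loose NumPy sketch: a moving-average "convolution" at a short and a long window (standing in for MSCN), sigmoid channel gating (the squeeze-and-excitation idea behind SEResNet), and single-head scaled dot-product self-attention over time steps (the core of an MHA layer). This is not the authors' implementation; every function name, window size, and signal length below is an invented toy stand-in.

```python
import numpy as np

def multiscale_conv(x, short_k=3, long_k=15):
    # MSCN stand-in: smooth the signal at two window sizes to
    # capture short- and long-window information as two channels
    short = np.convolve(x, np.ones(short_k) / short_k, mode="same")
    long = np.convolve(x, np.ones(long_k) / long_k, mode="same")
    return np.stack([short, long])               # (C=2, T)

def se_reweight(feats):
    # squeeze-and-excitation stand-in: squeeze each channel to a
    # scalar (global average), then gate channels with a sigmoid
    squeeze = feats.mean(axis=1)                 # (C,)
    gate = 1.0 / (1.0 + np.exp(-squeeze))        # (C,)
    return feats * gate[:, None]                 # channel-wise reweighting

def self_attention(feats):
    # minimal single-head scaled dot-product attention over time,
    # the building block of a multi-head attention (MHA) layer
    x = feats.T                                  # (T, C)
    scores = x @ x.T / np.sqrt(x.shape[1])       # (T, T) similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return (weights @ x).T                       # (C, T)

rng = np.random.default_rng(0)
signal = rng.standard_normal(128)                # stand-in physiological window
out = self_attention(se_reweight(multiscale_conv(signal)))
print(out.shape)  # → (2, 128)
```

In the real model each stage is a learned network (convolutions with trained kernels, an excitation MLP, multi-head projections); the sketch only shows how the three stages compose, with the feature tensor keeping its (channels, time) shape throughout.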
Related papers
- Faces of Experimental Pain: Transferability of Deep Learned Heat Pain Features to Electrical Pain [7.205834345343974]
In this study, we investigate whether deep learned feature representation for one type of experimentally induced pain can be transferred to another.
The challenge dataset contains data collected from 65 participants undergoing varying intensities of electrical pain.
In our proposed approach, we leverage an existing heat pain convolutional neural network (CNN) - trained on BioVid dataset - as a feature extractor.
arXiv Detail & Related papers (2024-06-17T17:51:54Z) - Enhancing Weakly Supervised 3D Medical Image Segmentation through
Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z) - Leveraging Frequency Domain Learning in 3D Vessel Segmentation [50.54833091336862]
In this study, we leverage Fourier domain learning as a substitute for multi-scale convolutional kernels in 3D hierarchical segmentation models.
We show that our novel network achieves remarkable dice performance (84.37% on ASACA500 and 80.32% on ImageCAS) in tubular vessel segmentation tasks.
arXiv Detail & Related papers (2024-01-11T19:07:58Z) - Pain Analysis using Adaptive Hierarchical Spatiotemporal Dynamic Imaging [16.146223377936035]
We introduce the Adaptive temporal Dynamic Image (AHDI) technique.
AHDI encodes temporal changes in facial videos into a single RGB image, permitting the application of simpler 2D models for video representation.
Within this framework, we employ a residual network to derive generalized facial representations.
These representations are optimized for two tasks: estimating pain intensity and differentiating between genuine and simulated pain expressions.
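Collapsing a video into one RGB image is the idea behind "dynamic images"; one standard recipe is approximate rank pooling, which takes a weighted sum of frames with weights that grow linearly over time so later motion dominates. The sketch below shows that generic recipe, not the AHDI method itself, whose adaptive weighting is not specified in this summary.

```python
import numpy as np

def approximate_rank_pooling(frames):
    # frames: (T, H, W, 3) clip; returns one image summarizing its motion
    T = frames.shape[0]
    t = np.arange(1, T + 1)
    alpha = 2 * t - T - 1                        # linear rank-pooling weights
    img = np.tensordot(alpha, frames.astype(float), axes=(0, 0))  # (H, W, 3)
    # rescale the weighted sum into a displayable 0-255 image
    img = (img - img.min()) / (np.ptp(img) + 1e-8) * 255.0
    return img.astype(np.uint8)

clip = np.random.default_rng(1).integers(0, 256, size=(16, 32, 32, 3), dtype=np.uint8)
dyn = approximate_rank_pooling(clip)
print(dyn.shape, dyn.dtype)  # → (32, 32, 3) uint8
```

The resulting single image can then be fed to any off-the-shelf 2D backbone (e.g. the residual network mentioned above) instead of a heavier video model.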
arXiv Detail & Related papers (2023-12-12T01:23:05Z) - AttResDU-Net: Medical Image Segmentation Using Attention-based Residual
Double U-Net [0.0]
This paper proposes an attention-based residual Double U-Net architecture (AttResDU-Net) that improves on the existing medical image segmentation networks.
We conducted experiments on three datasets: CVC Clinic-DB, ISIC 2018, and the 2018 Data Science Bowl datasets and achieved Dice Coefficient scores of 94.35%, 91.68%, and 92.45% respectively.
arXiv Detail & Related papers (2023-06-25T14:28:08Z) - Pain level and pain-related behaviour classification using GRU-based
sparsely-connected RNNs [61.080598804629375]
People with chronic pain unconsciously adapt specific body movements to protect themselves from injury or additional pain.
Because there is no dedicated benchmark database to analyse this correlation, we considered one of the specific circumstances that potentially influence a person's biometrics during daily activities.
We propose an ensemble of sparsely-connected recurrent neural networks (s-RNNs) with gated recurrent units (GRUs) that incorporates multiple autoencoders.
We conducted several experiments which indicate that the proposed method outperforms the state-of-the-art approaches in classifying both pain level and pain-related behaviour.
arXiv Detail & Related papers (2022-12-20T12:56:28Z) - Mental arithmetic task classification with convolutional neural network
based on spectral-temporal features from EEG [0.47248250311484113]
Deep neural networks (DNN) show significant advantages in computer vision applications.
We present a shallow neural network built mainly from two convolutional layers, with relatively few parameters, that learns spectral-temporal features from EEG quickly.
Experimental results showed that the shallow CNN model outperformed all the other models and achieved the highest classification accuracy of 90.68%.
arXiv Detail & Related papers (2022-09-26T02:15:22Z) - CNN-based fully automatic wrist cartilage volume quantification in MR
Image [55.41644538483948]
The U-net convolutional neural network with additional attention layers provides the best wrist cartilage segmentation performance.
The error of cartilage volume measurement should be assessed independently using a non-MRI method.
arXiv Detail & Related papers (2022-06-22T14:19:06Z) - Non-contact Pain Recognition from Video Sequences with Remote
Physiological Measurements Prediction [53.03469655641418]
We present a novel multi-task learning framework which encodes both appearance changes and physiological cues in a non-contact manner for pain recognition.
We establish the state-of-the-art performance of non-contact pain recognition on publicly available pain databases.
arXiv Detail & Related papers (2021-05-18T20:47:45Z) - DFENet: A Novel Dimension Fusion Edge Guided Network for Brain MRI
Segmentation [0.0]
We propose a novel Dimension Fusion Edge-guided network (DFENet) that can meet both of these requirements by fusing the features of 2D and 3D CNNs.
The proposed model is robust, accurate, superior to the existing methods, and can be relied upon for biomedical applications.
arXiv Detail & Related papers (2021-05-17T15:43:59Z) - Weakly-supervised Learning For Catheter Segmentation in 3D Frustum
Ultrasound [74.22397862400177]
We propose a novel Frustum ultrasound based catheter segmentation method.
The proposed method achieved the state-of-the-art performance with an efficiency of 0.25 second per volume.
arXiv Detail & Related papers (2020-10-19T13:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.