Multi-Representation Diagrams for Pain Recognition: Integrating Various Electrodermal Activity Signals into a Single Image
- URL: http://arxiv.org/abs/2507.21881v4
- Date: Thu, 07 Aug 2025 16:23:02 GMT
- Title: Multi-Representation Diagrams for Pain Recognition: Integrating Various Electrodermal Activity Signals into a Single Image
- Authors: Stefanos Gkikas, Ioannis Kyprakis, Manolis Tsiknakis
- Abstract summary: This study has been submitted to the Second Multimodal Sensing Grand Challenge for Next-Gen Pain Assessment (AI4PAIN).
- Score: 0.8602553195689511
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pain is a multifaceted phenomenon that affects a substantial portion of the population. Reliable and consistent evaluation benefits those experiencing pain and underpins the development of effective and advanced management strategies. Automatic pain-assessment systems deliver continuous monitoring, inform clinical decision-making, and aim to reduce distress while preventing functional decline. By incorporating physiological signals, these systems provide objective, accurate insights into an individual's condition. This study has been submitted to the Second Multimodal Sensing Grand Challenge for Next-Gen Pain Assessment (AI4PAIN). The proposed method introduces a pipeline that leverages electrodermal activity signals as input modality. Multiple representations of the signal are created and visualized as waveforms, and they are jointly visualized within a single multi-representation diagram. Extensive experiments incorporating various processing and filtering techniques, along with multiple representation combinations, demonstrate the effectiveness of the proposed approach. It consistently yields comparable, and in several cases superior, results to traditional fusion methods, establishing it as a robust alternative for integrating different signal representations or modalities.
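As a concrete illustration of the pipeline's core idea, the sketch below decomposes an EDA signal into the standard tonic and phasic components with Butterworth filters and overlays all representations in one saved image. The sampling rate, filter cutoffs, and choice of representations are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: render several EDA representations as waveforms in one image.
# FS, cutoffs, and the representation set are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import butter, sosfiltfilt

FS = 4.0  # assumed EDA sampling rate (Hz)

def tonic(eda):
    """Slow-varying skin conductance level: low-pass below 0.05 Hz."""
    sos = butter(2, 0.05, btype="low", fs=FS, output="sos")
    return sosfiltfilt(sos, eda)

def phasic(eda):
    """Fast skin conductance responses: high-pass above 0.05 Hz."""
    sos = butter(2, 0.05, btype="high", fs=FS, output="sos")
    return sosfiltfilt(sos, eda)

def multi_representation_diagram(eda, path="diagram.png"):
    """Overlay raw, tonic, and phasic waveforms in a single image."""
    reps = {"raw": eda, "tonic": tonic(eda), "phasic": phasic(eda)}
    fig, ax = plt.subplots(figsize=(4, 4))
    t = np.arange(len(eda)) / FS
    for name, rep in reps.items():
        # Min-max normalize so all curves share one vertical scale.
        rep = (rep - rep.min()) / (np.ptp(rep) + 1e-8)
        ax.plot(t, rep, linewidth=1, label=name)
    ax.axis("off")  # the rendered image itself is the model input
    fig.savefig(path, dpi=150, bbox_inches="tight")
    plt.close(fig)

# Example with 60 s of synthetic EDA-like data
eda = 5.0 + 0.01 * np.random.randn(int(60 * FS)).cumsum()
multi_representation_diagram(eda)
```

The paper evaluates many more representations and filtering variants; this sketch shows only the single-image integration step.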
Related papers
- Efficient Pain Recognition via Respiration Signals: A Single Cross-Attention Transformer Multi-Window Fusion Pipeline [0.8602553195689511]
This study has been submitted to the Second Multimodal Sensing Grand Challenge for Next-Gen Pain Assessment (AI4PAIN).
arXiv Detail & Related papers (2025-07-29T14:58:29Z)
- PhysioWave: A Multi-Scale Wavelet-Transformer for Physiological Signal Representation [18.978031999678507]
A novel wavelet-based approach for physiological signal analysis is presented, aiming to capture multi-scale time-frequency features in various physiological signals.
Two large-scale pretrained models specific to EMG and ECG are introduced for the first time, achieving superior performance and setting new baselines in downstream tasks.
A unified multi-modal framework is constructed by integrating a pretrained EEG model, where each modality is guided through its dedicated branch and fused via learnable weighted fusion (a hedged sketch follows this entry).
arXiv Detail & Related papers (2025-06-12T05:11:41Z)
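The learnable weighted fusion mentioned in the PhysioWave summary can be realized as a softmax over per-branch weights. A minimal PyTorch sketch, with module and dimension names assumed for illustration (not PhysioWave's actual API):

```python
# Sketch of learnable weighted fusion over per-modality embeddings.
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    def __init__(self, num_modalities: int):
        super().__init__()
        # One learnable scalar per modality branch (EMG, ECG, EEG, ...).
        self.logits = nn.Parameter(torch.zeros(num_modalities))

    def forward(self, embeddings: list[torch.Tensor]) -> torch.Tensor:
        # embeddings: list of (batch, dim) tensors, one per branch.
        w = torch.softmax(self.logits, dim=0)           # weights sum to 1
        stacked = torch.stack(embeddings, dim=0)        # (M, batch, dim)
        return (w[:, None, None] * stacked).sum(dim=0)  # (batch, dim)

fuse = WeightedFusion(num_modalities=3)
emg, ecg, eeg = (torch.randn(8, 256) for _ in range(3))
fused = fuse([emg, ecg, eeg])  # (8, 256)
```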
- Active inference and deep generative modeling for cognitive ultrasound [20.383444113659476]
We show that US imaging systems can be recast as information-seeking agents that engage in reciprocal interactions with their anatomical environment.
Such agents autonomously adapt their transmit-receive sequences to fully personalize imaging and actively maximize information gain in-situ.
We then equip systems with a mechanism to actively reduce uncertainty and maximize diagnostic value across a sequence of experiments (a toy sketch of the information-gain objective follows this entry).
arXiv Detail & Related papers (2024-10-17T08:09:14Z)
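The information-gain objective described in the cognitive-ultrasound summary can be made concrete as the expected reduction in posterior entropy from one more measurement. A toy discrete sketch; the state space, likelihoods, and actions are illustrative stand-ins for the paper's deep generative model:

```python
# Toy expected-information-gain (EIG) computation for picking the
# next imaging action over a discrete hypothesis space.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def expected_information_gain(prior, likelihood):
    """EIG(a) = H(prior) - E_y[ H(posterior | y) ] for one action a.

    prior:      (S,) belief over hypothetical anatomical states
    likelihood: (Y, S) with likelihood[y, s] = p(y | s) under action a
    """
    p_y = likelihood @ prior              # marginal p(y)
    posterior = likelihood * prior        # unnormalized p(s | y), row per y
    posterior /= posterior.sum(axis=1, keepdims=True)
    expected_post_entropy = sum(
        p_y[y] * entropy(posterior[y]) for y in range(len(p_y)))
    return entropy(prior) - expected_post_entropy

prior = np.array([0.5, 0.3, 0.2])
# Action A separates state 0 from the rest; action B is uninformative.
act_a = np.array([[0.9, 0.1, 0.1], [0.1, 0.9, 0.9]])
act_b = np.array([[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]])
# The agent picks the action with the larger EIG.
print(expected_information_gain(prior, act_a))  # > 0
print(expected_information_gain(prior, act_b))  # == 0
```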
- Multi-task Neural Networks for Pain Intensity Estimation using Electrocardiogram and Demographic Factors [0.8602553195689511]
We analyze electrocardiography signals, revealing variations in pain perception among different demographic groups.
We introduce a novel multi-task neural network for automatic pain estimation that utilizes each individual's age and gender information (a hedged sketch follows this entry).
arXiv Detail & Related papers (2024-07-28T11:57:50Z)
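One plausible realization of the multi-task design above treats pain intensity as the main task and the demographic attributes as auxiliary outputs over a shared ECG encoder. The architecture below is an assumed sketch, not the authors' exact model:

```python
# Sketch of a multi-task pain-estimation network with auxiliary
# age and gender heads; layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskPainNet(nn.Module):
    def __init__(self, in_ch=1, hidden=64, pain_classes=3):
        super().__init__()
        # Shared 1-D CNN encoder over the ECG waveform.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_ch, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.pain_head = nn.Linear(hidden, pain_classes)
        self.age_head = nn.Linear(hidden, 1)     # auxiliary regression
        self.gender_head = nn.Linear(hidden, 2)  # auxiliary classification

    def forward(self, ecg):
        z = self.encoder(ecg)  # (batch, hidden)
        return self.pain_head(z), self.age_head(z), self.gender_head(z)

net = MultiTaskPainNet()
pain, age, gender = net(torch.randn(4, 1, 1000))
# Training would weight the three objectives, e.g.
# loss = ce(pain, y) + 0.1 * mse(age, a) + 0.1 * ce(gender, g)
```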
- Joint Multimodal Transformer for Emotion Recognition in the Wild [49.735299182004404]
Multimodal emotion recognition (MMER) systems typically outperform unimodal systems.
This paper proposes an MMER method that relies on a joint multimodal transformer (JMT) for fusion with key-based cross-attention (a generic sketch follows this entry).
arXiv Detail & Related papers (2024-03-15T17:23:38Z)
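Cross-attention fusion of the kind the JMT summary describes lets each modality query the other: queries come from one stream, keys and values from the opposite stream. A generic PyTorch sketch (names and dimensions assumed, not the authors' code):

```python
# Sketch of bidirectional cross-attention fusion between two streams.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.a2b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, a, b):
        # Each stream attends over the other modality's tokens.
        a_attends_b, _ = self.a2b(query=a, key=b, value=b)
        b_attends_a, _ = self.b2a(query=b, key=a, value=a)
        return torch.cat([a_attends_b, b_attends_a], dim=-1)

fusion = CrossAttentionFusion()
audio, video = torch.randn(2, 50, 128), torch.randn(2, 50, 128)
fused = fusion(audio, video)  # (2, 50, 256)
```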
- Real-Time Model-Based Quantitative Ultrasound and Radar [65.268245109828]
We propose a neural network based on the physical model of wave propagation, which defines the relationship between the received signals and physical properties.
Our network can reconstruct multiple physical properties in less than one second for complex and realistic scenarios.
arXiv Detail & Related papers (2024-02-16T09:09:16Z)
- Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction (a hedged sketch follows this entry).
The experimental evaluations have been conducted using the PAD-UFES20 dataset, applying various deep-learning architectures.
arXiv Detail & Related papers (2024-02-16T05:16:20Z)
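The auxiliary super-resolution task from the skin-lesion summary can share the image encoder with the classifier, which also consumes the clinical and demographic metadata. A sketch with assumed layer sizes, not the paper's model:

```python
# Sketch: lesion classifier with an auxiliary super-resolution head.
import torch
import torch.nn as nn

class LesionNet(nn.Module):
    def __init__(self, num_classes=6, meta_dim=10):
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for a CNN encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Image features are concatenated with clinical/demographic metadata.
        self.classifier = nn.Linear(64 + meta_dim, num_classes)
        # Auxiliary head: reconstruct a higher-resolution image.
        self.sr_head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, img, meta):
        feat = self.backbone(img)  # (B, 64, H/4, W/4)
        logits = self.classifier(
            torch.cat([self.pool(feat).flatten(1), meta], dim=1))
        sr = self.sr_head(feat)    # (B, 3, H, W)
        return logits, sr

net = LesionNet()
logits, sr = net(torch.randn(2, 3, 64, 64), torch.randn(2, 10))
```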
- Benchmarking Joint Face Spoofing and Forgery Detection with Visual and Physiological Cues [81.15465149555864]
We establish the first joint face spoofing and forgery detection benchmark using both visual appearance and physiological rPPG cues.
To enhance rPPG periodicity discrimination, we design a two-branch physiological network using both the facial spatio-temporal rPPG signal map and its continuous wavelet transformed counterpart as inputs (a toy pairing of a signal with its wavelet map follows this entry).
arXiv Detail & Related papers (2022-08-10T15:41:48Z)
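The two-branch input pairing above (a raw temporal signal plus its continuous wavelet transform) can be reproduced with PyWavelets; the scales, wavelet, and frame rate below are illustrative assumptions:

```python
# Sketch: pair a 1-D physiological signal with its continuous
# wavelet transform, as in the two-branch design above.
import numpy as np
import pywt

fs = 30.0                            # assumed video frame rate (Hz)
t = np.arange(0, 10, 1 / fs)
rppg = np.sin(2 * np.pi * 1.2 * t)   # toy 72-bpm pulse signal

scales = np.arange(1, 64)
coefs, freqs = pywt.cwt(rppg, scales, "morl", sampling_period=1 / fs)
# Branch 1 input: the raw temporal signal, shape (300,)
# Branch 2 input: the time-frequency map, shape (63, 300)
print(rppg.shape, coefs.shape)
```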
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and use the enhanced results to address the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Non-contact Pain Recognition from Video Sequences with Remote Physiological Measurements Prediction [53.03469655641418]
We present a novel multi-task learning framework which encodes both appearance changes and physiological cues in a non-contact manner for pain recognition.
We establish the state-of-the-art performance of non-contact pain recognition on publicly available pain databases.
arXiv Detail & Related papers (2021-05-18T20:47:45Z)
- Multimodal Gait Recognition for Neurodegenerative Diseases [38.06704951209703]
We propose a novel hybrid model to learn the gait differences among three neurodegenerative diseases.
A new correlative memory neural network architecture is designed for extracting temporal features.
Compared with several state-of-the-art techniques, our proposed framework shows more accurate classification results.
arXiv Detail & Related papers (2021-01-07T10:17:11Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset (a generic gated-fusion sketch follows this entry).
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
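Gated fusion of the kind named in the brain-tumor segmentation title is commonly implemented as a sigmoid gate that reweights each modality's features before summing. A generic sketch, not the paper's exact block:

```python
# Sketch of gated fusion over per-modality feature vectors.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # Gate conditioned on the feature itself: g(f) in (0, 1)^dim.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, feats):
        # feats: list of (batch, dim) modality features.
        gated = [self.gate(f) * f for f in feats]
        return torch.stack(gated, dim=0).sum(dim=0)

fuse = GatedFusion()
t1, t2, flair = (torch.randn(2, 64) for _ in range(3))
fused = fuse([t1, t2, flair])  # (2, 64)
```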
This list is automatically generated from the titles and abstracts of the papers on this site.