Multimodal signal fusion for stress detection using deep neural networks: a novel approach for converting 1D signals to unified 2D images
- URL: http://arxiv.org/abs/2509.13636v1
- Date: Wed, 17 Sep 2025 02:18:51 GMT
- Title: Multimodal signal fusion for stress detection using deep neural networks: a novel approach for converting 1D signals to unified 2D images
- Authors: Yasin Hasanpoor, Bahram Tarvirdizadeh, Khalil Alipour, Mohammad Ghamari,
- Abstract summary: This study introduces a novel method that transforms multimodal physiological signals, photoplethysmography (PPG), galvanic skin response (GSR), and acceleration (ACC), into 2D image matrices. Unlike traditional approaches that process these signals separately or rely on fixed encodings, our technique fuses them into structured image representations. This image-based transformation not only improves interpretability but also serves as a robust form of data augmentation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This study introduces a novel method that transforms multimodal physiological signals, photoplethysmography (PPG), galvanic skin response (GSR), and acceleration (ACC), into 2D image matrices to enhance stress detection using convolutional neural networks (CNNs). Unlike traditional approaches that process these signals separately or rely on fixed encodings, our technique fuses them into structured image representations that enable CNNs to capture temporal and cross-signal dependencies more effectively. This image-based transformation not only improves interpretability but also serves as a robust form of data augmentation. To further enhance generalization and model robustness, we systematically reorganize the fused signals into multiple formats, combining them in a multi-stage training pipeline. This approach significantly boosts classification performance. While demonstrated here in the context of stress detection, the proposed method is broadly applicable to any domain involving multimodal physiological signals, paving the way for more accurate, personalized, and real-time health monitoring through wearable technologies.
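The abstract describes fusing normalized 1D signal windows into a single 2D matrix that a CNN can then treat as an image. The paper does not specify the exact layout, so the sketch below is a minimal, hypothetical illustration of the general idea: each modality is min-max normalized, segmented into fixed-length windows, and the windows are stacked row-wise so columns carry temporal structure and rows carry cross-signal structure.

```python
import numpy as np

def signals_to_image(ppg, gsr, acc, window=256):
    """Fuse three 1D physiological signals into one 2D image matrix.

    Illustrative sketch only: the row-stacking layout and min-max
    normalization are assumptions, not the paper's exact encoding.
    """
    def normalize(x):
        x = np.asarray(x, dtype=float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)

    rows = []
    for sig in (ppg, gsr, acc):
        sig = normalize(sig)
        # Trim to a whole number of windows, then one row per window.
        sig = sig[: (len(sig) // window) * window]
        rows.append(sig.reshape(-1, window))
    # Stack all modalities' windows into a single (rows x window) image.
    return np.vstack(rows)

# Toy usage: 3 modalities of 512 samples each -> a 6 x 256 "image".
img = signals_to_image(np.random.rand(512), np.random.rand(512), np.random.rand(512))
```

A matrix like this can be fed to a standard 2D CNN, letting convolutional filters span adjacent windows of different modalities at once.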
Related papers
- Toward Relative Positional Encoding in Spiking Transformers [52.62008099390541]
Spiking neural networks (SNNs) are bio-inspired networks that mimic how neurons in the brain communicate through discrete spikes. We introduce several strategies to approximate relative positional encoding (RPE) in spiking Transformers.
arXiv Detail & Related papers (2025-01-28T06:42:37Z) - Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z) - MindFormer: Semantic Alignment of Multi-Subject fMRI for Brain Decoding [50.55024115943266]
We introduce a novel semantic alignment method of multi-subject fMRI signals using so-called MindFormer.
This model is specifically designed to generate fMRI-conditioned feature vectors that can be used for conditioning a Stable Diffusion model for fMRI-to-image generation or a large language model (LLM) for fMRI-to-text generation.
Our experimental results demonstrate that MindFormer generates semantically consistent images and text across different subjects.
arXiv Detail & Related papers (2024-05-28T00:36:25Z) - Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - Implicit Neural Networks with Fourier-Feature Inputs for Free-breathing
Cardiac MRI Reconstruction [21.261567937245808]
We propose a reconstruction approach based on representing the beating heart with an implicit neural network and fitting the network so that the representation of the heart is consistent with the measurements.
Our method achieves reconstruction quality on par with or slightly better than state-of-the-art untrained convolutional neural networks and superior image quality.
arXiv Detail & Related papers (2023-05-11T14:14:30Z) - fRegGAN with K-space Loss Regularization for Medical Image Translation [42.253647362909476]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic images.
GANs tend to suffer from a frequency bias towards low frequencies, which can lead to the removal of important structures in the generated images.
We propose a novel frequency-aware image-to-image translation framework based on the supervised RegGAN approach, which we call fRegGAN.
arXiv Detail & Related papers (2023-03-28T12:49:10Z) - Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs)
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z) - Segmentation-guided Domain Adaptation and Data Harmonization of
Multi-device Retinal Optical Coherence Tomography using Cycle-Consistent
Generative Adversarial Networks [2.968191199408213]
This paper proposes a segmentation-guided domain-adaptation method to adapt images from multiple devices into a single image domain.
It avoids the time cost of manually labelling each upcoming new dataset and of re-training the existing network.
arXiv Detail & Related papers (2022-08-31T05:06:00Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - Multimodal-Boost: Multimodal Medical Image Super-Resolution using
Multi-Attention Network with Wavelet Transform [5.416279158834623]
Loss of image resolution degrades the overall performance of medical image diagnosis.
Deep learning-based single-image super-resolution (SISR) algorithms have revolutionized the overall diagnosis framework.
This work proposes generative adversarial network (GAN) with deep multi-attention modules to learn high-frequency information from low-frequency data.
arXiv Detail & Related papers (2021-10-22T10:13:46Z) - An Adaptive Sampling and Edge Detection Approach for Encoding Static
Images for Spiking Neural Networks [0.2519906683279152]
Spiking neural networks (SNNs) are considered to be the third generation of artificial neural networks.
We propose a method for encoding static images into temporal spike trains using edge detection and an adaptive signal sampling method.
arXiv Detail & Related papers (2021-10-19T19:31:52Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.