Machine-learning for photoplethysmography analysis: Benchmarking feature, image, and signal-based approaches
- URL: http://arxiv.org/abs/2502.19949v1
- Date: Thu, 27 Feb 2025 10:17:16 GMT
- Title: Machine-learning for photoplethysmography analysis: Benchmarking feature, image, and signal-based approaches
- Authors: Mohammad Moulaeifard, Loic Coquelin, Mantas Rinkevičius, Andrius Sološenko, Oskar Pfeffer, Ciaran Bench, Nando Hegemann, Sara Vardanega, Manasi Nandi, Jordi Alastruey, Christian Heiss, Vaidotas Marozas, Andrew Thompson, Philip J. Aston, Peter H. Charlton, Nils Strodthoff
- Abstract summary: Photoplethysmography is a widely used non-invasive physiological sensing technique, suitable for various clinical applications. Such clinical applications are increasingly supported by machine learning methods, raising the question of the most appropriate input representation and model choice. We address this gap in the research landscape with a comprehensive benchmarking study covering three kinds of input representations: interpretable features, image representations, and raw waveforms.
- Score: 1.1011387049911827
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photoplethysmography (PPG) is a widely used non-invasive physiological sensing technique, suitable for various clinical applications. Such clinical applications are increasingly supported by machine learning methods, raising the question of the most appropriate input representation and model choice. Comprehensive comparisons, in particular across different input representations, are scarce. We address this gap in the research landscape with a comprehensive benchmarking study covering three kinds of input representations (interpretable features, image representations, and raw waveforms) across prototypical regression and classification use cases: blood pressure and atrial fibrillation prediction. In both cases, the best results are achieved by deep neural networks operating on raw time series as input representations. Within this model class, the best results are achieved by modern convolutional neural networks (CNNs), but depending on the task setup, shallow CNNs are often also very competitive. We envision that these results will help researchers guide their choice of input representation and model for machine learning tasks on PPG data, even beyond the use cases presented in this work.
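For orientation, the sketch below illustrates what a raw-waveform model of the kind favoured in the abstract can look like: a shallow 1D CNN that maps a raw PPG segment to a binary atrial-fibrillation prediction. The segment length, sampling rate, and layer sizes are illustrative assumptions, not the configurations benchmarked in the paper.

```python
# Illustrative sketch only: a shallow 1D CNN for binary AF prediction from a
# raw PPG segment. Layer sizes and the 10 s @ 125 Hz segment length are
# assumptions, not the paper's benchmarked architecture.
import torch
import torch.nn as nn

class ShallowPPGCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # global average pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (batch, 1, n_samples)
        z = self.features(x).squeeze(-1)      # (batch, 32)
        return self.classifier(z)             # class logits

model = ShallowPPGCNN()
ppg = torch.randn(8, 1, 1250)                 # 8 segments of 10 s at 125 Hz
logits = model(ppg)                           # shape: (8, 2)
```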
Related papers
- Evaluating Pre-trained Convolutional Neural Networks and Foundation Models as Feature Extractors for Content-based Medical Image Retrieval [0.37478492878307323]
Content-based medical image retrieval (CBMIR) depends on image features, which can be extracted automatically or semi-automatically.
In this study, we used several feature extractors taken from well-known pre-trained convolutional neural networks (CNNs) and pre-trained foundation models.
Our results show that, overall, for the 2D datasets, foundation models deliver superior performance by a large margin compared to CNNs.
Our findings confirm that while using larger image sizes (especially for 2D datasets) yields slightly better performance, competitive CBMIR performance can still be achieved even with smaller image sizes.
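As a concrete illustration of this recipe, below is a minimal sketch of content-based retrieval with a frozen pre-trained CNN as the feature extractor; the choice of ResNet-18 from torchvision and of cosine similarity as the ranking metric are assumptions made for the example, not the extractors evaluated in the study.

```python
# Illustrative sketch: content-based retrieval with a pre-trained CNN as a
# frozen feature extractor. ResNet-18 and cosine similarity are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classifier, keep 512-d features
backbone.eval()

@torch.no_grad()
def embed(images):                  # images: (N, 3, 224, 224), normalized
    return F.normalize(backbone(images), dim=1)

# Rank database images by cosine similarity to a query image.
database = embed(torch.randn(100, 3, 224, 224))   # placeholder image batch
query = embed(torch.randn(1, 3, 224, 224))
ranking = (database @ query.T).squeeze(1).argsort(descending=True)
```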
arXiv Detail & Related papers (2024-09-14T13:07:30Z)
- Benchmarking Embedding Aggregation Methods in Computational Pathology: A Clinical Data Perspective [32.93871326428446]
Recent advances in artificial intelligence (AI) are revolutionizing medical imaging and computational pathology. A constant challenge in the analysis of digital Whole Slide Images (WSIs) is the problem of aggregating tens of thousands of tile-level image embeddings to a slide-level representation. This study conducts a benchmarking analysis of ten slide-level aggregation techniques across nine clinically relevant tasks.
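For context, the simplest aggregation baseline in this family is plain mean pooling of tile embeddings into a single slide-level vector, sketched below; the embedding dimension and tile count are assumptions, and the benchmarked techniques include considerably more elaborate, learned aggregators.

```python
# Illustrative sketch: mean-pooling aggregation of tile-level embeddings into
# one slide-level representation. Dimensions are assumptions.
import numpy as np

def aggregate_mean(tile_embeddings: np.ndarray) -> np.ndarray:
    """tile_embeddings: (n_tiles, dim) -> slide embedding of shape (dim,)."""
    return tile_embeddings.mean(axis=0)

tiles = np.random.randn(20_000, 768)     # e.g. tens of thousands of tiles
slide_vector = aggregate_mean(tiles)     # (768,)
```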
arXiv Detail & Related papers (2024-07-10T17:00:57Z)
- Machine learning based biomedical image processing for echocardiographic images [0.0]
The proposed method uses the K-Nearest Neighbor (KNN) algorithm for segmentation of medical images.
The trained neural network has been tested successfully on a group of echocardiographic images.
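A minimal sketch of pixel-wise KNN segmentation in this spirit is shown below, using scikit-learn; the use of raw intensity as the only feature, the neighbour count, and the toy labels are assumptions rather than the method's actual setup.

```python
# Illustrative sketch: pixel-wise segmentation with a K-Nearest Neighbor
# classifier. Raw intensity as the sole feature and k=5 are assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
train_pixels = rng.random((5000, 1))                     # intensity features
train_labels = (train_pixels[:, 0] > 0.5).astype(int)    # toy ground truth

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(train_pixels, train_labels)

image = rng.random((128, 128))                           # new image frame
mask = knn.predict(image.reshape(-1, 1)).reshape(image.shape)
```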
arXiv Detail & Related papers (2023-03-16T06:23:43Z)
- Advancing 3D finger knuckle recognition via deep feature learning [51.871256510747465]
Contactless 3D finger knuckle patterns have emerged as an effective biometric identifier due to their discriminativeness, visibility from a distance, and convenience.
Recent research has developed a deep feature collaboration network which simultaneously incorporates intermediate features from deep neural networks with multiple scales.
This paper advances this approach by investigating the possibility of learning a discriminative feature vector with the least possible dimension for representing 3D finger knuckle images.
arXiv Detail & Related papers (2023-01-07T20:55:16Z)
- Self-Supervised Endoscopic Image Key-Points Matching [1.3764085113103222]
This paper proposes a novel self-supervised approach for endoscopic image matching based on deep learning techniques.
Our method outperformed standard hand-crafted local feature descriptors in terms of precision and recall.
arXiv Detail & Related papers (2022-08-24T10:47:21Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
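As an illustration of the general idea, the sketch below computes a single-level 2D discrete wavelet transform of a radiograph with PyWavelets; the Haar wavelet and the decision to keep all four sub-bands are assumptions, not the paper's specific encoding scheme.

```python
# Illustrative sketch: a single-level 2D DWT of a chest radiograph using
# PyWavelets. The 'haar' wavelet and keeping all four sub-bands are assumptions.
import numpy as np
import pywt

image = np.random.rand(256, 256)                 # placeholder radiograph
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')      # approximation + detail bands
encoded = np.stack([cA, cH, cV, cD])             # (4, 128, 128) representation
```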
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Medulloblastoma Tumor Classification using Deep Transfer Learning with Multi-Scale EfficientNets [63.62764375279861]
We propose an end-to-end medulloblastoma (MB) tumor classification approach and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements.
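A minimal sketch of this transfer-learning recipe follows, using torchvision's pre-trained EfficientNet-B0 with a replaced classification head; the variant, input resolution, and number of classes are assumptions for illustration.

```python
# Illustrative sketch: transfer learning with a pre-trained EfficientNet.
# EfficientNet-B0, 224x224 inputs, and 2 output classes are assumptions.
import torch
from torchvision import models

model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
in_features = model.classifier[1].in_features
model.classifier[1] = torch.nn.Linear(in_features, 2)  # new task-specific head

images = torch.randn(4, 3, 224, 224)   # placeholder image crops
logits = model(images)                 # fine-tune with a standard CE loss
```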
arXiv Detail & Related papers (2021-09-10T13:07:11Z)
- Self-Supervised Representation Learning using Visual Field Expansion on Digital Pathology [7.568373895297608]
A key challenge in the analysis of such images is their size, which can run into the gigapixels.
We propose a novel generative framework that can learn powerful representations for such tiles by learning to plausibly expand their visual field.
Our model learns to generate different tissue types with fine details, while simultaneously learning powerful representations that can be used for different clinical endpoints.
arXiv Detail & Related papers (2021-09-07T19:20:01Z)
- Colorectal Polyp Classification from White-light Colonoscopy Images via Domain Alignment [57.419727894848485]
A computer-aided diagnosis system is required to assist accurate diagnosis from colonoscopy images.
Most previous studies attempt to develop models for polyp differentiation using Narrow-Band Imaging (NBI) or other enhanced images.
We propose a novel framework based on a teacher-student architecture for accurate colorectal polyp classification.
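For orientation, teacher-student training is commonly implemented as a knowledge-distillation loss; a generic sketch is given below. The temperature, loss weighting, and toy logits are assumptions, and this is not the paper's specific domain-alignment design.

```python
# Illustrative sketch: a generic knowledge-distillation loss for a
# teacher-student setup. Temperature T=2.0 and alpha=0.5 are assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 2.0, alpha: float = 0.5):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)   # soft-target term
    hard = F.cross_entropy(student_logits, labels)      # ground-truth term
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(torch.randn(8, 2), torch.randn(8, 2),
                         torch.randint(0, 2, (8,)))
```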
arXiv Detail & Related papers (2021-08-05T09:31:46Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.