Manikin-Recorded Cardiopulmonary Sounds Dataset Using Digital Stethoscope
- URL: http://arxiv.org/abs/2410.03280v1
- Date: Fri, 4 Oct 2024 09:53:16 GMT
- Title: Manikin-Recorded Cardiopulmonary Sounds Dataset Using Digital Stethoscope
- Authors: Yasaman Torabi, Shahram Shirani, James P. Reilly
- Abstract summary: Heart and lung sounds are crucial for healthcare monitoring.
Recent improvements in stethoscope technology have made it possible to capture patient sounds with enhanced precision.
To our knowledge, this is the first dataset to offer both separate and mixed cardiorespiratory sounds.
- Score: 3.956979400783713
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Heart and lung sounds are crucial for healthcare monitoring. Recent improvements in stethoscope technology have made it possible to capture patient sounds with enhanced precision. In this dataset, we used a digital stethoscope to capture both heart and lung sounds, including individual and mixed recordings. To our knowledge, this is the first dataset to offer both separate and mixed cardiorespiratory sounds. The recordings were collected from a clinical manikin, a patient simulator designed to replicate human physiological conditions, generating clean heart and lung sounds at different body locations. This dataset includes both normal sounds and various abnormalities (i.e., murmur, atrial fibrillation, tachycardia, atrioventricular block, third and fourth heart sound, wheezing, crackles, rhonchi, pleural rub, and gurgling sounds). The dataset includes audio recordings of chest examinations performed at different anatomical locations, as determined by specialist nurses. Each recording has been enhanced using frequency filters to highlight specific sound types. This dataset is useful for applications in artificial intelligence, such as automated cardiopulmonary disease detection, sound classification, unsupervised separation techniques, and deep learning algorithms related to audio signal processing.
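The frequency filtering mentioned in the abstract is typically a band-pass stage matched to each sound type. Below is a minimal sketch using SciPy, assuming commonly cited band limits (roughly 20-200 Hz for heart sounds, 100-2000 Hz for lung sounds) and an example sample rate; the paper's actual filter settings are not specified here.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x: np.ndarray, fs: float, lo: float, hi: float, order: int = 4) -> np.ndarray:
    """Zero-phase Butterworth band-pass filter."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 8000.0                                # assumed sample rate (Hz)
mixed = np.random.randn(10 * int(fs))      # placeholder for a recorded signal
heart_band = bandpass(mixed, fs, 20.0, 200.0)    # emphasizes S1/S2, murmurs
lung_band = bandpass(mixed, fs, 100.0, 2000.0)   # emphasizes wheezes, crackles
```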
Related papers
- Exploring Finetuned Audio-LLM on Heart Murmur Features [13.529024158003233]
Large language models (LLMs) for audio have excelled in recognizing and analyzing human speech, music, and environmental sounds.
In this study, we focus on diagnosing cardiovascular diseases using phonocardiograms, i.e., heart sounds.
arXiv Detail & Related papers (2025-01-23T17:57:18Z)
- Ultrasound Lung Aeration Map via Physics-Aware Neural Operators [78.6077820217471]
Lung ultrasound is a growing modality in clinics for diagnosing acute and chronic lung diseases.
Interpretation is complicated by complex reverberations from the pleural interface, caused by the inability of ultrasound to penetrate air.
We propose LUNA, an AI model that directly reconstructs lung aeration maps from RF data.
arXiv Detail & Related papers (2025-01-02T09:24:34Z)
- BUET Multi-disease Heart Sound Dataset: A Comprehensive Auscultation Dataset for Developing Computer-Aided Diagnostic Systems [1.7448183054840163]
The BUET Multi-disease Heart Sound dataset is a comprehensive and meticulously curated collection of heart sound recordings.
The dataset represents a broad spectrum of valvular heart diseases, with a focus on diagnostically challenging cases.
Its innovative multi-label annotation system captures a diverse range of diseases and unique disease states (a possible multi-hot encoding is sketched below).
arXiv Detail & Related papers (2024-09-01T13:55:04Z)
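A multi-label annotation system like the one described above is often materialized as a multi-hot vector per recording, so a single recording can carry several co-occurring diagnoses. A minimal sketch with a hypothetical label vocabulary (the dataset's actual labels are not reproduced here):

```python
# Hypothetical valvular-disease vocabulary; the real dataset defines its own.
LABELS = ["normal", "aortic_stenosis", "mitral_regurgitation", "mitral_stenosis"]

def to_multi_hot(diagnoses: list[str]) -> list[int]:
    """Encode a set of co-occurring diagnoses as a multi-hot vector."""
    return [1 if label in diagnoses else 0 for label in LABELS]

# One recording may carry several simultaneous findings.
print(to_multi_hot(["aortic_stenosis", "mitral_regurgitation"]))  # [0, 1, 1, 0]
```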
- Cardiac Copilot: Automatic Probe Guidance for Echocardiography with World Model [66.35766658717205]
There is a severe shortage of experienced cardiac sonographers due to the heart's complex structure and the significant operational challenges involved.
We present a Cardiac Copilot system capable of providing real-time probe movement guidance.
The core innovation lies in proposing a data-driven world model, named Cardiac Dreamer, for representing cardiac spatial structures.
We train our model on real-world ultrasound data and corresponding probe motion from 110 routine clinical scans, comprising 151K sample pairs collected by three certified sonographers.
arXiv Detail & Related papers (2024-06-19T02:42:29Z)
- Heart Sound Segmentation Using Deep Learning Techniques [0.0]
This paper presents a novel approach for heart sound segmentation and classification into S1 (LUB) and S2 (DUB) sounds.
We employ FFT-based filtering, dynamic programming for event detection, and a Siamese network for robust classification (a simplified front-end is sketched below).
Our method demonstrates superior performance on the PASCAL heart sound dataset compared to existing approaches.
arXiv Detail & Related papers (2024-06-09T05:30:05Z)
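A simplified front-end for the pipeline described above: band-pass filtering, an energy envelope, and peak picking to locate candidate S1/S2 events. The dynamic-programming and Siamese-network stages are omitted, and the band limits, window, and thresholds are assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

def heart_sound_events(pcg: np.ndarray, fs: float) -> np.ndarray:
    """Return sample indices of candidate S1/S2 events in a PCG signal."""
    # Band-pass to the range where S1/S2 energy concentrates (assumed 25-150 Hz).
    sos = butter(4, [25.0, 150.0], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, pcg)
    # Smoothed energy envelope (assumed 50 ms moving-average window).
    win = int(0.05 * fs)
    envelope = np.convolve(filtered**2, np.ones(win) / win, mode="same")
    # Peaks at least 200 ms apart, with an assumed relative height threshold.
    peaks, _ = find_peaks(envelope, distance=int(0.2 * fs),
                          height=0.2 * envelope.max())
    return peaks
```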
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- The CirCor DigiScope Dataset: From Murmur Detection to Murmur Classification [5.879085008496386]
A total of 5282 recordings have been collected from the four main auscultation locations of 1568 patients.
For the first time, each cardiac murmur has been manually annotated by an expert annotator according to its timing, shape, pitch, grading, and quality (an illustrative record layout follows below).
arXiv Detail & Related papers (2021-08-02T12:30:40Z)
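The annotation attributes listed above map naturally onto a structured record per murmur. The sketch below is illustrative only; field names and value sets are assumptions, not the dataset's actual encoding.

```python
from dataclasses import dataclass

@dataclass
class MurmurAnnotation:
    """Illustrative per-murmur record; field values are hypothetical examples."""
    location: str   # auscultation site, e.g. "aortic valve"
    timing: str     # e.g. "holosystolic", "early-diastolic"
    shape: str      # e.g. "crescendo-decrescendo", "plateau"
    pitch: str      # e.g. "low", "medium", "high"
    grading: str    # e.g. "II/VI"
    quality: str    # e.g. "harsh", "blowing"

example = MurmurAnnotation("aortic valve", "holosystolic",
                           "crescendo-decrescendo", "high", "II/VI", "harsh")
```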
- Improving Medical Image Classification with Label Noise Using Dual-uncertainty Estimation [72.0276067144762]
We discuss and define the two common types of label noise in medical images.
We propose an uncertainty estimation-based framework to handle these two types of label noise in medical image classification (a generic uncertainty sketch follows below).
arXiv Detail & Related papers (2021-02-28T14:56:45Z)
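As one concrete way to obtain a per-sample uncertainty signal for noisy-label handling, the sketch below uses Monte Carlo dropout with predictive entropy; this is a generic stand-in, not the paper's dual-uncertainty method.

```python
import torch

def mc_dropout_uncertainty(model: torch.nn.Module, x: torch.Tensor,
                           passes: int = 20) -> torch.Tensor:
    """Predictive entropy from Monte Carlo dropout (a generic stand-in,
    not the paper's dual-uncertainty estimators)."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1)
                             for _ in range(passes)]).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy  # high entropy -> candidate noisy-label sample
```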
- Noise-Resilient Automatic Interpretation of Holter ECG Recordings [67.59562181136491]
We present a three-stage process for analysing Holter recordings that is robust to noisy signals.
The first stage is a segmentation neural network (NN) with an encoder-decoder architecture that detects the positions of heartbeats.
The second stage is a classification NN that labels each heartbeat as wide or narrow.
The third stage is gradient-boosted decision trees (GBDT) on top of the NN features, incorporating patient-wise features (the three stages are sketched below).
arXiv Detail & Related papers (2020-11-17T16:15:49Z)
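The three-stage structure above composes naturally as a pipeline: detect beats, classify each beat, then aggregate patient-wise features for the GBDT. A structural sketch with placeholder components (the real stages are neural networks and a trained GBDT; everything here is an assumption):

```python
import numpy as np

def detect_beats(ecg: np.ndarray) -> np.ndarray:
    """Stage 1 placeholder: a segmentation NN would return beat positions."""
    return np.arange(0, len(ecg), 250)  # dummy fixed-interval beats

def classify_beat(ecg: np.ndarray, pos: int) -> str:
    """Stage 2 placeholder: a classification NN would label each beat."""
    return "narrow"  # dummy label: "wide" or "narrow"

def patient_features(labels: list[str]) -> np.ndarray:
    """Stage 3 input: patient-wise aggregates fed to the GBDT."""
    wide_frac = labels.count("wide") / max(len(labels), 1)
    return np.array([wide_frac, len(labels)])

ecg = np.zeros(10_000)                       # placeholder recording
beats = detect_beats(ecg)
labels = [classify_beat(ecg, p) for p in beats]
features = patient_features(labels)          # would feed the GBDT stage
```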
- Neural collaborative filtering for unsupervised mitral valve segmentation in echocardiography [60.08918310097638]
We propose an automated and unsupervised method for mitral valve segmentation based on a low-dimensional embedding of echocardiography videos.
The method is evaluated in a collection of echocardiography videos of patients with a variety of mitral valve diseases and on an independent test cohort.
It outperforms state-of-the-art unsupervised and supervised methods on low-quality videos or in the case of sparse annotations.
arXiv Detail & Related papers (2020-08-13T12:53:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.