Transfer Learning Based Diagnosis and Analysis of Lung Sound Aberrations
- URL: http://arxiv.org/abs/2303.08362v1
- Date: Wed, 15 Mar 2023 04:46:57 GMT
- Title: Transfer Learning Based Diagnosis and Analysis of Lung Sound Aberrations
- Authors: Hafsa Gulzar, Jiyun Li, Arslan Manzoor, Sadaf Rehmat, Usman Amjad and
Hadiqa Jalil Khan
- Abstract summary: This work attempts to develop a non-invasive technique for identifying respiratory sounds acquired by a stethoscope and voice recording software.
A visual representation of each audio sample is constructed, allowing features to be identified for classification with methods similar to those used for image analysis.
Experiments on the Respiratory Sound Database achieved state-of-the-art results: 95% accuracy, 88% precision, 86% recall, and an 81% F1 score.
- Score: 0.35232085374661276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of computer systems that can collect and analyze
enormous volumes of data, the medical profession is establishing several
non-invasive tools. This work attempts to develop a non-invasive technique for
identifying respiratory sounds acquired by a stethoscope and voice recording
software via machine learning techniques. This study presents a trained and
validated CNN-based approach for categorizing respiratory sounds. A visual
representation of each audio sample is constructed, allowing features to be
identified for classification with methods similar to those used for image
analysis. Features are extracted as Mel Frequency Cepstral Coefficients
(MFCCs), classified with VGG16 via transfer learning, and evaluated with
5-fold cross-validation. Employing various data-splitting techniques, the
model obtained state-of-the-art results on the Respiratory Sound Database,
including 95% accuracy, 88% precision, 86% recall, and an 81% F1 score. The
ICBHI dataset is used to train and test the model.
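As a rough illustration of the pipeline described in the abstract, the sketch below converts each recording into an MFCC-based "image", reuses a frozen VGG16 backbone for feature extraction, and evaluates a small classification head with 5-fold cross-validation. File handling, input dimensions, and the number of classes are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of the described pipeline: MFCC "images" fed to a
# pretrained VGG16 backbone, evaluated with 5-fold cross-validation.
# Paths, input size, and class count are illustrative assumptions.
import numpy as np
import librosa
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

N_CLASSES = 4                # assumed number of respiratory sound classes
INPUT_SHAPE = (40, 174, 3)   # 40 MFCCs x padded time frames, replicated to 3 channels

def audio_to_mfcc_image(path, sr=22050, n_mfcc=40, max_frames=174):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Pad or truncate the time axis so every sample has the same shape.
    mfcc = librosa.util.fix_length(mfcc, size=max_frames, axis=1)
    return np.repeat(mfcc[..., np.newaxis], 3, axis=-1)  # fake RGB for VGG16

def build_model():
    base = VGG16(weights="imagenet", include_top=False, input_shape=INPUT_SHAPE)
    base.trainable = False  # transfer learning: freeze convolutional features
    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(N_CLASSES, activation="softmax"),
    ])

def cross_validate(X, y, folds=5):
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=42)
    scores = []
    for train_idx, test_idx in skf.split(X, y):
        model = build_model()
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(X[train_idx], y[train_idx], epochs=10, batch_size=32, verbose=0)
        _, acc = model.evaluate(X[test_idx], y[test_idx], verbose=0)
        scores.append(acc)
    return np.mean(scores)
```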
Related papers
- Stethoscope-guided Supervised Contrastive Learning for Cross-domain
Adaptation on Respiratory Sound Classification [1.690115983364313]
We introduce cross-domain adaptation techniques, which transfer the knowledge from a source domain to a distinct target domain.
In particular, by considering different stethoscope types as individual domains, we propose a novel stethoscope-guided supervised contrastive learning approach.
The experimental results on the ICBHI dataset demonstrate that the proposed methods are effective in reducing the domain dependency and achieving the ICBHI Score of 61.71%, which is a significant improvement of 2.16% over the baseline.
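As a minimal sketch of the supervised contrastive objective referenced here, the loss below treats a label such as the stethoscope type (domain) as the supervision signal, so that embeddings recorded with the same device are pulled together; this is a generic SupCon-style loss, not the paper's exact formulation.

```python
# Simplified supervised contrastive (SupCon-style) loss; "labels" may be
# stethoscope/domain identifiers rather than disease classes.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    z = F.normalize(embeddings, dim=1)               # (N, D) unit vectors
    sim = z @ z.T / temperature                      # pairwise similarities
    n = z.size(0)
    mask_self = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask_self, float("-inf"))  # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    positives = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask_self
    # Average log-probability over the positive pairs of each anchor.
    pos_counts = positives.sum(dim=1).clamp(min=1)
    loss = -(log_prob * positives).sum(dim=1) / pos_counts
    return loss.mean()
```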
arXiv Detail & Related papers (2023-12-15T08:34:31Z)
Respiratory Disease Classification and Biometric Analysis Using Biosignals from Digital Stethoscopes [3.2458203725405976]
This work presents a novel approach leveraging digital stethoscope technology for automatic respiratory disease classification and biometric analysis.
By leveraging one of the largest publicly available medical databases of respiratory sounds, we train machine learning models to classify various respiratory health conditions.
Our approach achieves high accuracy in both binary classification (89% balanced accuracy for healthy vs. diseased) and multi-class classification (72% balanced accuracy for specific diseases like pneumonia and COPD).
arXiv Detail & Related papers (2023-09-12T23:54:00Z)
COVID-19 Detection System: A Comparative Analysis of System Performance Based on Acoustic Features of Cough Audio Signals [0.6963971634605796]
This research aims to explore various acoustic features that enhance the performance of machine learning (ML) models in detecting COVID-19 from cough signals.
It investigates the efficacy of three feature extraction techniques, including Mel Frequency Cepstral Coefficients (MFCC), Chroma, and Spectral Contrast features, when applied to two machine learning algorithms, Support Vector Machine (SVM) and Multilayer Perceptron (MLP).
The proposed system provides a practical solution and demonstrates state-of-the-art classification performance, with an AUC of 0.843 on the COUGHVID dataset and 0.953 on the Virufy dataset.
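A compact sketch of the feature/classifier comparison described above, using librosa to compute MFCC, Chroma, and Spectral Contrast summaries and scikit-learn for the SVM and MLP; the hyperparameters and per-recording mean pooling are illustrative assumptions.

```python
# Sketch of the compared feature sets and classifiers: MFCC, Chroma and
# Spectral Contrast features summarized per recording, fed to SVM / MLP.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def cough_features(path, sr=22050):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
    # Summarize each time-varying feature by its mean over frames.
    return np.concatenate([f.mean(axis=1) for f in (mfcc, chroma, contrast)])

def compare_models(X, y):
    for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                      ("MLP", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))]:
        scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
        print(f"{name}: mean AUC = {scores.mean():.3f}")
```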
arXiv Detail & Related papers (2023-09-08T08:33:24Z)
Exploring traditional machine learning for identification of pathological auscultations [0.39577682622066246]
Digital 6-channel auscultations of 45 patients were used in various machine learning scenarios.
The aim was to distinguish between normal and anomalous pulmonary sounds.
Supervised models showed a consistent advantage over unsupervised ones.
arXiv Detail & Related papers (2022-09-01T18:03:21Z)
Deep Feature Learning for Medical Acoustics [78.56998585396421]
The purpose of this paper is to compare different learnables in medical acoustics tasks.
A framework has been implemented to classify human respiratory sounds and heartbeats in two categories, i.e. healthy or affected by pathologies.
arXiv Detail & Related papers (2022-08-05T10:39:37Z)
Robust Medical Image Classification from Noisy Labeled Data with Global and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ the self-ensemble model with a noisy label filter to efficiently select the clean and noisy samples.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
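The clean-versus-noisy sample selection can be pictured with a common small-loss heuristic: samples the (self-ensembled) model already fits well are treated as likely clean. This is only a generic sketch of such a filter, with an assumed quantile threshold, not the paper's exact mechanism.

```python
# Generic sketch of a noisy-label filter: per-sample losses from an
# (ensembled) model split the batch into likely-clean and likely-noisy
# subsets; the threshold strategy is an assumption for illustration.
import numpy as np

def split_clean_noisy(per_sample_losses, clean_fraction=0.7):
    """Treat the lowest-loss fraction of samples as clean."""
    losses = np.asarray(per_sample_losses)
    cutoff = np.quantile(losses, clean_fraction)
    clean_idx = np.where(losses <= cutoff)[0]
    noisy_idx = np.where(losses > cutoff)[0]
    return clean_idx, noisy_idx
```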
arXiv Detail & Related papers (2022-05-10T07:50:08Z)
Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
Efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
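For intuition, a 2-D Discrete Wavelet Transform splits a radiograph into an approximation sub-band and horizontal/vertical/diagonal detail sub-bands, which is the kind of high-frequency information the method aims to preserve. The sketch below uses PyWavelets with an assumed 'haar' wavelet and a synthetic image in place of a real radiograph.

```python
# Illustrative 2-D DWT of an image: the sub-bands separate low-frequency
# structure from high-frequency detail. Wavelet choice is an assumption.
import numpy as np
import pywt

def dwt_subbands(image):
    """Return approximation and detail sub-bands of a 2-D image."""
    cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
    return {"approx": cA, "horizontal": cH, "vertical": cV, "diagonal": cD}

subbands = dwt_subbands(np.random.rand(256, 256))
print({k: v.shape for k, v in subbands.items()})  # each sub-band is 128x128
```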
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
Responding to Challenge Call of Machine Learning Model Development in Diagnosing Respiratory Disease Sounds [0.0]
A machine learning model was developed for automatically detecting respiratory system sounds such as sneezing and coughing in disease diagnosis.
Three different classification techniques were considered to perform successful respiratory sound classification in the dataset containing more than 3800 different sounds.
In an attempt to distinguish coughing and sneezing sounds from other sounds, an SVM with an RBF kernel achieved 83% success.
arXiv Detail & Related papers (2021-11-29T07:18:36Z)
Detecting COVID-19 from Breathing and Coughing Sounds using Deep Neural Networks [68.8204255655161]
We adapt an ensemble of Convolutional Neural Networks to classify whether a speaker is infected with COVID-19.
Ultimately, it achieves an Unweighted Average Recall (UAR) of 74.9%, or an Area Under ROC Curve (AUC) of 80.7% by ensembling neural networks.
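For reference, the reported UAR metric is simply recall averaged over classes with equal weight, which corresponds to macro-averaged recall in scikit-learn; a minimal sketch:

```python
# Unweighted Average Recall (UAR) == macro-averaged recall over classes.
from sklearn.metrics import recall_score

def unweighted_average_recall(y_true, y_pred):
    return recall_score(y_true, y_pred, average="macro")
```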
arXiv Detail & Related papers (2020-12-29T01:14:17Z)
Chest x-ray automated triage: a semiologic approach designed for clinical implementation, exploiting different types of labels through a combination of four Deep Learning architectures [83.48996461770017]
This work presents a Deep Learning method based on the late fusion of different convolutional architectures.
We built four training datasets combining images from public chest x-ray datasets and our institutional archive.
We trained four different Deep Learning architectures and combined their outputs with a late fusion strategy, obtaining a unified tool.
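Late fusion in this sense can be sketched as combining the class probabilities produced independently by each architecture; a simple (optionally weighted) average is assumed below, and the paper's exact fusion rule may differ.

```python
# Minimal late-fusion sketch: each architecture predicts class
# probabilities independently and the outputs are combined afterwards.
import numpy as np

def late_fusion(model_probs, weights=None):
    """model_probs: list of (n_samples, n_classes) probability arrays."""
    stacked = np.stack(model_probs)                 # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.ones(len(model_probs)) / len(model_probs)
    fused = np.tensordot(weights, stacked, axes=1)  # weighted average over models
    return fused.argmax(axis=1)
```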
arXiv Detail & Related papers (2020-12-23T14:38:35Z)
Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
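A generic self-training loop of the kind referenced here: a model fit on the small labeled set pseudo-labels its most confident unlabeled samples, which are then added to the training set. The classifier, confidence threshold, and number of rounds below are illustrative assumptions, not the paper's recipe.

```python
# Generic pseudo-labeling / self-training sketch with a confidence cutoff.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=3):
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        probs = clf.predict_proba(X_unlab)
        confident = probs.max(axis=1) >= threshold
        if not confident.any():
            break
        # Add confidently pseudo-labeled samples to the training set.
        X = np.vstack([X, X_unlab[confident]])
        y = np.concatenate([y, clf.classes_[probs[confident].argmax(axis=1)]])
        X_unlab = X_unlab[~confident]
    return clf
```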
arXiv Detail & Related papers (2020-05-03T02:36:00Z)