Detecting abnormal heart sound using mobile phones and on-device IConNet
- URL: http://arxiv.org/abs/2412.03267v1
- Date: Wed, 04 Dec 2024 12:18:21 GMT
- Title: Detecting abnormal heart sound using mobile phones and on-device IConNet
- Authors: Linh Vu, Thu Tran
- Abstract summary: We present a user-friendly solution for abnormal heart sound detection, utilizing mobile phones and a lightweight neural network optimized for on-device inference. IConNet, an Interpretable Convolutional Neural Network, harnesses insights from audio signal processing, enhancing efficiency and providing transparency in neural pattern extraction from raw waveform signals.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Given the global prevalence of cardiovascular diseases, there is a pressing need for easily accessible early screening methods. Typically, this requires medical practitioners to investigate heart auscultations for irregular sounds, followed by echocardiography and electrocardiography tests. To democratize early diagnosis, we present a user-friendly solution for abnormal heart sound detection, utilizing mobile phones and a lightweight neural network optimized for on-device inference. Unlike previous approaches reliant on specialized stethoscopes, our method directly analyzes audio recordings, facilitated by a novel architecture known as IConNet. IConNet, an Interpretable Convolutional Neural Network, harnesses insights from audio signal processing, enhancing efficiency and providing transparency in neural pattern extraction from raw waveform signals. This is a significant step towards trustworthy AI in healthcare, aiding in remote health monitoring efforts.
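The paper does not include code here, but a minimal sketch of the general idea, a lightweight 1-D convolutional classifier operating directly on raw phone-recorded waveforms, might look as follows. The class name, layer sizes, sampling rate, and front-end design are illustrative assumptions, not the authors' IConNet implementation.

```python
# Hypothetical sketch of a lightweight raw-waveform heart sound classifier.
# Layer shapes, kernel sizes, and the front-end are assumptions, not IConNet itself.
import torch
import torch.nn as nn

class RawWaveformHeartSoundNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # Front-end: wide 1-D convolutions act as a learnable filterbank on the raw signal.
        self.frontend = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=101, stride=4, padding=50),
            nn.BatchNorm1d(16),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # Small body to keep the model suitable for on-device inference.
        self.body = nn.Sequential(
            nn.Conv1d(16, 32, kernel_size=9, padding=4),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, 1, samples), e.g. a few seconds of phone-microphone audio
        x = self.frontend(waveform)
        x = self.body(x).squeeze(-1)
        return self.classifier(x)

if __name__ == "__main__":
    model = RawWaveformHeartSoundNet()
    dummy = torch.randn(2, 1, 4000 * 5)      # two 5-second clips at an assumed 4 kHz
    print(model(dummy).shape)                 # torch.Size([2, 2]): normal vs. abnormal logits
```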
Related papers
- AI-Enhanced Stethoscope in Remote Diagnostics for Cardiopulmonary Diseases [0.0]
Our study introduces an innovative yet efficient model that integrates AI to diagnose lung and heart conditions concurrently from auscultation sounds. Unlike already high-priced digital stethoscopes, our proposed model is designed specifically for deployment on low-cost embedded devices. The model incorporates MFCC feature extraction and engineering techniques to ensure the signal is well analyzed for accurate diagnostics.
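As a rough illustration of the MFCC front end mentioned above, the sketch below extracts and normalises MFCCs from an auscultation recording with librosa. The sample rate, frame settings, and number of coefficients are assumptions, not the paper's configuration.

```python
# Illustrative MFCC feature extraction for auscultation audio using librosa.
import librosa
import numpy as np

def extract_mfcc(path: str, sr: int = 4000, n_mfcc: int = 13) -> np.ndarray:
    """Load a heart/lung sound recording and return a (n_mfcc, frames) MFCC matrix."""
    audio, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc,
                                n_fft=512, hop_length=128)
    # Per-coefficient normalisation keeps features comparable across recordings.
    return (mfcc - mfcc.mean(axis=1, keepdims=True)) / (mfcc.std(axis=1, keepdims=True) + 1e-8)
```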
arXiv Detail & Related papers (2025-05-18T12:59:15Z) - EchoWorld: Learning Motion-Aware World Models for Echocardiography Probe Guidance [79.66329903007869]
We present EchoWorld, a motion-aware world modeling framework for probe guidance.
It encodes anatomical knowledge and motion-induced visual dynamics.
It is trained on more than one million ultrasound images from over 200 routine scans.
arXiv Detail & Related papers (2025-04-17T16:19:05Z) - Sequence-aware Pre-training for Echocardiography Probe Movement Guidance [71.79421124144145]
We introduce a novel probe movement guidance algorithm with the potential to guide robotic systems or novices in adjusting probe pose for high-quality standard-plane image acquisition. Our approach learns personalized three-dimensional cardiac structural features by predicting the masked-out image features and probe movement actions in a scanning sequence.
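A toy sketch of the masked-prediction idea described above is given below; the transformer backbone, feature and action dimensions, masking, and loss are assumptions, not the paper's implementation.

```python
# Toy sketch of sequence-aware masked pre-training: a transformer predicts masked-out
# frame features and probe movement actions from the rest of the scanning sequence.
import torch
import torch.nn as nn

class MaskedScanPretrainer(nn.Module):
    def __init__(self, feat_dim=256, action_dim=6, d_model=256, n_layers=4):
        super().__init__()
        self.feat_in = nn.Linear(feat_dim, d_model)
        self.act_in = nn.Linear(action_dim, d_model)
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.feat_head = nn.Linear(d_model, feat_dim)    # reconstruct masked image features
        self.act_head = nn.Linear(d_model, action_dim)   # predict masked probe movements

    def forward(self, feats, actions, mask):
        # feats: (B, T, feat_dim); actions: (B, T, action_dim); mask: (B, T) bool, True = hidden
        tokens = self.feat_in(feats) + self.act_in(actions)
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(tokens), tokens)
        h = self.encoder(tokens)
        return self.feat_head(h), self.act_head(h)

def pretrain_loss(model, feats, actions, mask):
    pred_f, pred_a = model(feats, actions, mask)
    m = mask.unsqueeze(-1).float()
    # Reconstruction is only scored on the masked steps.
    return (((pred_f - feats) ** 2) * m).mean() + (((pred_a - actions) ** 2) * m).mean()
```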
arXiv Detail & Related papers (2024-08-27T12:55:54Z) - ECG Arrhythmia Detection Using Disease-specific Attention-based Deep Learning Model [0.0]
We propose a disease-specific attention-based deep learning model (DANet) for arrhythmia detection from short ECG recordings.
The novel idea is to introduce a soft-coding or hard-coding waveform-enhancement module into existing deep neural networks.
For the soft-coding DANet, we also develop a learning framework combining self-supervised pre-training with two-stage supervised training.
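The following is a hypothetical illustration of what a "soft-coding" waveform-enhancement block could look like when prepended to an existing backbone; the layer choices are assumptions, not the DANet design.

```python
# Hypothetical soft-coding waveform enhancement: a small network produces a per-sample
# attention mask in (0, 1) that re-weights the raw ECG before any existing backbone.
import torch
import torch.nn as nn

class SoftWaveformEnhancer(nn.Module):
    def __init__(self, channels: int = 1):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Conv1d(channels, 8, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.Conv1d(8, channels, kernel_size=15, padding=7),
            nn.Sigmoid(),   # soft emphasis of diagnostically relevant waveform regions
        )

    def forward(self, ecg: torch.Tensor) -> torch.Tensor:
        # ecg: (batch, channels, samples); output keeps the same shape, so the module
        # can be inserted in front of an existing arrhythmia detection network.
        return ecg * self.attn(ecg)
```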
arXiv Detail & Related papers (2024-07-25T13:27:10Z) - Deciphering Heartbeat Signatures: A Vision Transformer Approach to Explainable Atrial Fibrillation Detection from ECG Signals [4.056982620027252]
We develop a vision transformer approach to identify atrial fibrillation based on single-lead ECG data.
A residual network (ResNet) approach is also developed for comparison with the vision transformer approach.
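A minimal sketch of a vision-transformer-style classifier over 1-D ECG patches follows; patch length, model width, and the omission of positional embeddings are simplifications, and the paper's actual input representation may differ.

```python
# Sketch: cut a single-lead ECG into fixed-length 1-D patches, embed them linearly,
# and classify from a [CLS] token after a transformer encoder.
import torch
import torch.nn as nn

class ECGPatchTransformer(nn.Module):
    def __init__(self, patch_len=50, d_model=128, n_layers=4, n_classes=2):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, ecg: torch.Tensor) -> torch.Tensor:
        # ecg: (batch, samples); samples must be a multiple of patch_len.
        # Positional embeddings are omitted here for brevity.
        b, n = ecg.shape
        patches = ecg.view(b, n // self.patch_len, self.patch_len)
        tokens = torch.cat([self.cls.expand(b, -1, -1), self.embed(patches)], dim=1)
        h = self.encoder(tokens)
        return self.head(h[:, 0])   # classify from the [CLS] token
```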
arXiv Detail & Related papers (2024-02-12T11:04:08Z) - Show from Tell: Audio-Visual Modelling in Clinical Settings [58.88175583465277]
We consider audio-visual modelling in a clinical setting, providing a solution to learn medical representations without human expert annotation.
A simple yet effective multi-modal self-supervised learning framework is proposed for this purpose.
The proposed approach is able to localise anatomical regions of interest during ultrasound imaging, with only speech audio as a reference.
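One common way to learn such audio-visual correspondence without annotations is a contrastive objective over paired audio and image embeddings; the InfoNCE-style loss below is a generic sketch and not necessarily the framework used in the paper.

```python
# Generic contrastive audio-visual alignment loss: embeddings from the same clip are
# pulled together, mismatched pairs pushed apart (symmetric InfoNCE).
import torch
import torch.nn.functional as F

def audio_visual_contrastive_loss(audio_emb: torch.Tensor,
                                  image_emb: torch.Tensor,
                                  temperature: float = 0.07) -> torch.Tensor:
    # audio_emb, image_emb: (batch, dim); row i of each comes from the same moment
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(image_emb, dim=-1)
    logits = a @ v.t() / temperature                    # (batch, batch) similarities
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```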
arXiv Detail & Related papers (2023-10-25T08:55:48Z) - HEAR4Health: A blueprint for making computer audition a staple of modern healthcare [89.8799665638295]
Recent years have seen a rapid increase in digital medicine research in an attempt to transform traditional healthcare systems.
Computer audition can be seen to be lagging behind, at least in terms of commercial interest.
We categorise the advances needed in four key pillars: Hear, corresponding to the cornerstone technologies needed to analyse auditory signals in real-life conditions; Earlier, for the advances needed in computational and data efficiency; Attentively, for accounting for individual differences and handling the longitudinal nature of medical data; and Responsibly, for the trustworthy and ethical deployment of such systems.
arXiv Detail & Related papers (2023-01-25T09:25:08Z) - Heart Abnormality Detection from Heart Sound Signals using MFCC Feature and Dual Stream Attention Based Network [0.0]
We propose a novel deep learning-based dual-stream network with an attention mechanism that uses both the raw heart sound signal and MFCC features to detect abnormalities in a patient's heart condition.
The model is trained on the largest publicly available dataset of PCG signals and achieves an accuracy of 87.11, sensitivity of 82.41, specificity of 91.8, and a MACC of 87.12.
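A rough sketch of such a two-branch model, one branch over the raw PCG waveform and one over its MFCCs with a learned attention weighting at fusion, is shown below; branch sizes and the fusion scheme are assumptions, not the paper's exact architecture.

```python
# Illustrative dual-stream PCG model: raw-waveform branch + MFCC branch, attention-fused.
import torch
import torch.nn as nn

class DualStreamPCGNet(nn.Module):
    def __init__(self, n_mfcc=13, n_classes=2):
        super().__init__()
        self.raw_branch = nn.Sequential(
            nn.Conv1d(1, 16, 51, stride=4, padding=25), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.mfcc_branch = nn.Sequential(
            nn.Conv1d(n_mfcc, 16, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.attn = nn.Sequential(nn.Linear(32, 2), nn.Softmax(dim=-1))
        self.head = nn.Linear(16, n_classes)

    def forward(self, wave, mfcc):
        # wave: (B, 1, samples); mfcc: (B, n_mfcc, frames)
        r, m = self.raw_branch(wave), self.mfcc_branch(mfcc)   # each (B, 16)
        w = self.attn(torch.cat([r, m], dim=-1))               # (B, 2) stream weights
        fused = w[:, :1] * r + w[:, 1:] * m                    # attention-weighted fusion
        return self.head(fused)
```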
arXiv Detail & Related papers (2022-11-17T18:20:46Z) - Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large numbers of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
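As an illustration of the general technique, the sketch below computes a single-level 2-D DWT of a radiograph with PyWavelets and keeps the high-frequency detail sub-bands alongside the approximation; the wavelet choice and how the sub-bands feed a classifier are assumptions.

```python
# 2-D discrete wavelet transform of a grayscale radiograph, keeping the detail
# sub-bands that plain downsampling tends to discard.
import numpy as np
import pywt

def dwt_features(image: np.ndarray, wavelet: str = "haar"):
    """image: 2-D grayscale array. Returns the approximation and stacked detail sub-bands."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    # cA: low-frequency approximation; cH/cV/cD: horizontal/vertical/diagonal detail
    return cA, np.stack([cH, cV, cD], axis=0)

if __name__ == "__main__":
    dummy = np.random.rand(512, 512).astype(np.float32)
    approx, details = dwt_features(dummy)
    print(approx.shape, details.shape)   # (256, 256) (3, 256, 256)
```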
arXiv Detail & Related papers (2022-05-08T15:29:54Z) - ECG-Based Heart Arrhythmia Diagnosis Through Attentional Convolutional Neural Networks [9.410102957429705]
We propose Attention-Based Convolutional Neural Networks (ABCNN) to work on the raw ECG signals and automatically extract the informative dependencies for accurate arrhythmia detection.
Our main task is to distinguish arrhythmias from normal heartbeats and, at the same time, accurately recognize heart diseases across five arrhythmia types.
The experimental results show that the proposed ABCNN outperforms the widely used baselines.
arXiv Detail & Related papers (2021-08-18T14:55:46Z) - Project Achoo: A Practical Model and Application for COVID-19 Detection from Recordings of Breath, Voice, and Cough [55.45063681652457]
We propose a machine learning method to quickly triage COVID-19 using recordings made on consumer devices.
The approach combines signal processing methods with fine-tuned deep learning networks and provides methods for signal denoising, cough detection and classification.
We have also developed and deployed a mobile application that uses a symptom checker together with voice, breath, and cough signals to detect COVID-19 infection.
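A hedged sketch of how a symptom-checker score and per-signal audio probabilities might be fused into one triage decision is shown below; the weights and threshold are illustrative placeholders, not the deployed application's values.

```python
# Illustrative late fusion of a symptom-checker score with audio-model probabilities
# (cough / breath / voice) into a single triage decision.
from dataclasses import dataclass

@dataclass
class TriageInputs:
    symptom_score: float   # 0..1 from a questionnaire-based checker
    cough_prob: float      # 0..1 from the cough classifier
    breath_prob: float     # 0..1 from the breathing classifier
    voice_prob: float      # 0..1 from the voice classifier

def triage(inputs: TriageInputs, threshold: float = 0.5) -> bool:
    audio_prob = (inputs.cough_prob + inputs.breath_prob + inputs.voice_prob) / 3.0
    combined = 0.4 * inputs.symptom_score + 0.6 * audio_prob   # made-up weights
    return combined >= threshold   # True -> recommend testing / follow-up
```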
arXiv Detail & Related papers (2021-07-12T08:07:56Z) - Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our code will be made open-source so that future work can compare against it.
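The decision rule implied by this observation can be sketched as follows, treating the ASV scorer and vocoder as black-box callables; the function names and the threshold are illustrative placeholders.

```python
# Sketch of the detection rule: re-synthesize the utterance with a neural vocoder and
# compare ASV scores before and after; a large score shift flags an adversarial sample.
from typing import Callable
import numpy as np

def is_adversarial(utterance: np.ndarray,
                   enrolled_speaker: np.ndarray,
                   asv_score: Callable[[np.ndarray, np.ndarray], float],
                   vocoder: Callable[[np.ndarray], np.ndarray],
                   threshold: float = 0.2) -> bool:
    original_score = asv_score(enrolled_speaker, utterance)
    resynth_score = asv_score(enrolled_speaker, vocoder(utterance))
    # Genuine audio keeps a similar score after re-synthesis; adversarial perturbations
    # tend not to survive it, so the score shifts noticeably.
    return abs(original_score - resynth_score) > threshold
```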
arXiv Detail & Related papers (2021-07-01T08:58:16Z) - Heart Sound Classification Considering Additive Noise and Convolutional Distortion [2.63046959939306]
Automatic analysis of heart sounds for abnormality detection is faced with the challenges of additive noise and sensor-dependent degradation.
This paper aims to develop methods to address the cardiac abnormality detection problem when both types of distortions are present in the cardiac auscultation sound.
The proposed method paves the way towards developing computer-aided cardiac auscultation systems in noisy environments using low-cost stethoscopes.
arXiv Detail & Related papers (2021-06-03T14:09:04Z) - Noise-Resilient Automatic Interpretation of Holter ECG Recordings [67.59562181136491]
We present a three-stage process for analysing Holter recordings with robustness to noisy signals.
The first stage is a segmentation neural network (NN) with an encoder-decoder architecture that detects the positions of heartbeats.
The second stage is a classification NN that classifies heartbeats as wide or narrow.
The third stage is a gradient-boosted decision tree (GBDT) model on top of the NN features that incorporates patient-wise features.
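A sketch of the third stage only, gradient-boosted trees over NN features concatenated with patient-wise features, is given below, using scikit-learn as a stand-in; the feature layout, synthetic data, and hyper-parameters are assumptions.

```python
# GBDT stage over neural-network features plus patient-wise features (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
nn_features = rng.normal(size=(200, 32))       # per-recording features from stages 1-2
patient_features = rng.normal(size=(200, 4))   # e.g. age, sex, heart-rate statistics
X = np.hstack([nn_features, patient_features])
y = rng.integers(0, 2, size=200)               # rhythm label per recording

gbdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
gbdt.fit(X, y)
print(gbdt.predict_proba(X[:3]))
```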
arXiv Detail & Related papers (2020-11-17T16:15:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.