Reproducible Machine Learning-based Voice Pathology Detection: Introducing the Pitch Difference Feature
- URL: http://arxiv.org/abs/2410.10537v2
- Date: Mon, 03 Feb 2025 12:57:09 GMT
- Title: Reproducible Machine Learning-based Voice Pathology Detection: Introducing the Pitch Difference Feature
- Authors: Jan Vrba, Jakub Steinbach, Tomáš Jirsa, Laura Verde, Roberta De Fazio, Yuwen Zeng, Kei Ichiji, Lukáš Hájek, Zuzana Sedláková, Zuzana Urbániová, Martin Chovanec, Jan Mareš, Noriyasu Homma,
- Abstract summary: This study introduces a novel methodology for voice pathology detection using the publicly available Saarbrücken Voice Database (SVD).
We evaluate six machine learning (ML) classifiers and apply K-Means SMOTE to address class imbalance.
Our approach achieves outstanding performance, reaching 85.61%, 84.69% and 85.22% unweighted average recall (UAR) for females, males and combined results, respectively.
- Score: 1.7779568951268254
- License:
- Abstract: This study introduces a novel methodology for voice pathology detection using the publicly available Saarbrücken Voice Database (SVD) and a robust feature set combining commonly used acoustic handcrafted features with two novel ones: pitch difference (relative variation in fundamental frequency) and a NaN feature (failed fundamental frequency estimation). We evaluate six machine learning (ML) classifiers - support vector machine, k-nearest neighbors, naive Bayes, decision tree, random forest, and AdaBoost - using grid search over feasible hyperparameters of the selected classifiers and 20480 different feature subsets. The top 1000 classifier-feature subset combinations for each classifier type are validated with repeated stratified cross-validation. To address class imbalance, we apply K-Means SMOTE to augment the training data. Our approach achieves outstanding performance, reaching 85.61%, 84.69% and 85.22% unweighted average recall (UAR) for females, males and combined results, respectively. We intentionally omit accuracy as it is a highly biased metric for imbalanced data. This advancement demonstrates significant potential for clinical deployment of ML methods, offering a valuable supportive tool for an objective examination of voice pathologies. To enable easier use of our methodology and to support our claims, we provide a publicly available GitHub repository with DOI 10.5281/zenodo.13771573. Finally, we provide a REFORMS checklist to enhance readability, reproducibility and justification of our approach.
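The two novel features can be sketched roughly as follows. This is a minimal illustration, assuming librosa's pYIN estimator for the fundamental frequency; the exact feature definitions live in the paper's repository, and the normalisation used for the pitch difference below is an assumption, not the authors' formula.

```python
# Minimal sketch of the two novel features (illustrative assumptions, not the
# paper's exact definitions).
import numpy as np
import librosa

def pitch_features(wav_path: str, fmin: float = 65.0, fmax: float = 600.0):
    """Return (pitch_difference, nan_feature) for one recording."""
    y, sr = librosa.load(wav_path, sr=None, mono=True)

    # pYIN f0 track; unvoiced or failed frames come back as NaN.
    f0, _, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)

    # NaN feature: share of frames where f0 estimation failed.
    nan_feature = float(np.mean(np.isnan(f0)))

    voiced = f0[~np.isnan(f0)]
    if voiced.size < 2:
        return np.nan, nan_feature

    # Pitch difference: relative variation of f0, here taken as the mean
    # absolute frame-to-frame change normalised by the mean f0 (assumption).
    pitch_difference = float(np.mean(np.abs(np.diff(voiced))) / np.mean(voiced))
    return pitch_difference, nan_feature
```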
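Likewise, a minimal sketch of the evaluation setup described above (K-Means SMOTE applied only inside the training folds, repeated stratified cross-validation, UAR as the metric) could look like the following; the classifier, its hyperparameters, and the fold counts are placeholders rather than the configuration selected by the paper's grid search.

```python
# Sketch of the evaluation loop: oversampling only on training folds, UAR
# (unweighted average recall) as the score. X and y are placeholders for the
# extracted feature matrix and pathology labels.
import numpy as np
from imblearn.over_sampling import KMeansSMOTE
from imblearn.pipeline import Pipeline
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def evaluate_uar(X: np.ndarray, y: np.ndarray, seed: int = 42) -> float:
    """Repeated stratified CV with in-fold oversampling; returns mean UAR."""
    pipeline = Pipeline([
        ("scaler", StandardScaler()),
        ("smote", KMeansSMOTE(random_state=seed)),  # oversample minority class
        ("clf", SVC(kernel="rbf", C=1.0)),          # one of the six classifiers
    ])
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=seed)
    # UAR equals macro-averaged recall, i.e. sklearn's balanced accuracy.
    scores = cross_val_score(pipeline, X, y, cv=cv, scoring="balanced_accuracy")
    return float(scores.mean())
```

Placing the oversampler inside the pipeline ensures synthetic samples are generated only from the training portion of each fold, so the validation folds stay untouched.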
Related papers
- AFEN: Respiratory Disease Classification using Ensemble Learning [2.524195881002773]
We present AFEN (Audio Feature Learning), a model that leverages Convolutional Neural Networks (CNN) and XGBoost.
We use a meticulously selected mix of audio features which provide the salient attributes of the data and allow for accurate classification.
We empirically verify that AFEN sets a new state-of-the-art using Precision and Recall as metrics, while decreasing training time by 60%.
arXiv Detail & Related papers (2024-05-08T23:50:54Z) - Speaker-Independent Dysarthria Severity Classification using Self-Supervised Transformers and Multi-Task Learning [2.7706924578324665]
This study presents a transformer-based framework for automatically assessing dysarthria severity from raw speech data.
We develop a framework, called Speaker-Agnostic Latent Regularisation (SALR), incorporating a multi-task learning objective and contrastive learning for speaker-independent multi-class dysarthria severity classification.
Our model demonstrated superior performance over traditional machine learning approaches, with an accuracy of 70.48% and an F1 score of 59.23%.
arXiv Detail & Related papers (2024-02-29T18:30:52Z) - Detecting Speech Abnormalities with a Perceiver-based Sequence Classifier that Leverages a Universal Speech Model [4.503292461488901]
We propose a Perceiver-based sequence classifier to detect abnormalities in speech reflective of several neurological disorders.
We combine this classifier with a Universal Speech Model (USM) that is trained (unsupervised) on 12 million hours of diverse audio recordings.
Our model outperforms standard transformer (80.9%) and perceiver (81.8%) models and achieves an average accuracy of 83.1%.
arXiv Detail & Related papers (2023-10-16T21:07:12Z) - Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining.
Our method significantly outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z) - Deep Feature Learning for Medical Acoustics [78.56998585396421]
The purpose of this paper is to compare different learnables in medical acoustics tasks.
A framework has been implemented to classify human respiratory sounds and heartbeats in two categories, i.e. healthy or affected by pathologies.
arXiv Detail & Related papers (2022-08-05T10:39:37Z) - Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures with a limited number of samples that generalizes well to a wider audience.
We rely on a set of more elementary methods, such as applying random bounds to a signal, and aim to show the power these methods can carry in an online setting.
arXiv Detail & Related papers (2022-06-29T23:22:18Z) - Robust Meta-learning with Sampling Noise and Label Noise via Eigen-Reptile [78.1212767880785]
The meta-learner is prone to overfitting since there are only a few available samples.
When handling the data with noisy labels, the meta-learner could be extremely sensitive to label noise.
We present Eigen-Reptile (ER), which updates the meta-parameters with the main direction of historical task-specific parameters.
arXiv Detail & Related papers (2022-06-04T08:48:02Z) - Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z) - Visualizing Classifier Adjacency Relations: A Case Study in Speaker Verification and Voice Anti-Spoofing [72.4445825335561]
We propose a simple method to derive 2D representation from detection scores produced by an arbitrary set of binary classifiers.
Based upon rank correlations, our method facilitates a visual comparison of classifiers with arbitrary scores.
While the approach is fully versatile and can be applied to any detection task, we demonstrate the method using scores produced by automatic speaker verification and voice anti-spoofing systems.
arXiv Detail & Related papers (2021-06-11T13:03:33Z) - Machine Learning Based on Natural Language Processing to Detect Cardiac Failure in Clinical Narratives [0.2936007114555107]
The purpose of the study is to develop a machine learning algorithm that automatically detects whether a patient has cardiac failure or a healthy condition.
A word representation learning technique was employed by using bag-of-words (BoW), term frequency-inverse document frequency (TF-IDF), and neural word embeddings (word2vec).
The proposed framework yielded an overall classification performance with accuracy, precision, recall, and F1 score of 84%, 82%, 85%, and 83%, respectively.
arXiv Detail & Related papers (2021-04-08T17:28:43Z) - Data augmentation using generative networks to identify dementia [20.137419355252362]
We show that generative models can be used as an effective approach for data augmentation.
In this paper, we investigate the application of a similar approach to different types of speech and audio-based features extracted from our automatic dementia detection system.
arXiv Detail & Related papers (2020-04-13T15:05:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.