A Gait Triaging Toolkit for Overlapping Acoustic Events in Indoor
Environments
- URL: http://arxiv.org/abs/2211.05944v1
- Date: Fri, 11 Nov 2022 01:33:14 GMT
- Title: A Gait Triaging Toolkit for Overlapping Acoustic Events in Indoor
Environments
- Authors: Kelvin Summoogum, Debayan Das, Parvati Jayakumar
- Abstract summary: We propose a novel machine learning based filter which can triage gait audio samples suitable for training machine learning models for gait detection.
To demonstrate the effectiveness of the filter, we train and evaluate a deep learning model on gait datasets collected from older adults with and without applying the filter.
The proposed filter will help automate the task of manual annotation of gait samples for training acoustic based gait detection models for older adults in indoor environments.
- Score: 0.1933681537640272
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gait has been used in clinical and healthcare applications to assess the
physical and cognitive health of older adults. Acoustic based gait detection is
a promising approach to collect gait data of older adults passively and
non-intrusively. However, there has been limited work in developing acoustic
based gait detectors that can operate in noisy polyphonic acoustic scenes of
homes and care homes. We attribute this to the lack of good quality gait
datasets from the real world to train a gait detector on. In this paper, we put
forward a novel machine learning based filter which can triage gait audio
samples suitable for training machine learning models for gait detection. The
filter achieves this by eliminating noisy samples at an F1 score of 0.85 and
prioritising gait samples with distinct spectral features and minimal noise. To
demonstrate the effectiveness of the filter, we train and evaluate a deep
learning model on gait datasets collected from older adults with and without
applying the filter. The model registers an increase of 25 points in its F1
score on unseen real-world gait data when trained with the filtered gait
samples. The proposed filter will help automate the task of manual annotation
of gait samples for training acoustic based gait detection models for older
adults in indoor environments.
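
The abstract does not give implementation details for the triaging filter, so the sketch below only illustrates one plausible way to score gait clips by spectral distinctness and noise level before detector training. The feature set (spectral flatness, RMS energy, MFCC statistics), the random-forest classifier, and the keep-threshold are assumptions made for illustration, not the authors' method.

```python
"""Hypothetical sketch of a gait-audio triaging filter (not the authors' code)."""
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split


def clip_features(path: str, sr: int = 16000) -> np.ndarray:
    """Summarise one audio clip with simple spectral and energy statistics."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    flatness = librosa.feature.spectral_flatness(y=y)   # noise-like spectra -> ~1
    rms = librosa.feature.rms(y=y)                       # energy envelope
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # coarse spectral shape
    return np.concatenate([
        [flatness.mean(), flatness.std()],
        [rms.mean(), rms.std()],
        mfcc.mean(axis=1), mfcc.std(axis=1),
    ])


def train_triage_filter(paths, labels):
    """labels: 1 = usable gait clip, 0 = noisy/unsuitable (hand-labelled seed set)."""
    X = np.stack([clip_features(p) for p in paths])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out F1:", f1_score(y_te, clf.predict(X_te)))
    return clf


def triage(clf, candidate_paths, keep_threshold=0.5):
    """Keep clips the filter scores as clean enough for training."""
    X = np.stack([clip_features(p) for p in candidate_paths])
    scores = clf.predict_proba(X)[:, 1]
    return [p for p, s in zip(candidate_paths, scores) if s >= keep_threshold]
```

In this sketch, a small hand-labelled seed set of usable versus noisy clips trains the filter once; the trained filter then triages the remaining unlabelled recordings, replacing part of the manual annotation effort the abstract describes.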
Related papers
- Towards Open Respiratory Acoustic Foundation Models: Pretraining and Benchmarking [27.708473070563013]
Respiratory audio has predictive power for a wide range of healthcare applications, yet is currently under-explored.
We introduce OPERA, an OPEn Respiratory Acoustic foundation model pretraining and benchmarking system.
arXiv Detail & Related papers (2024-06-23T16:04:26Z)
- Real Acoustic Fields: An Audio-Visual Room Acoustics Dataset and Benchmark [65.79402756995084]
Real Acoustic Fields (RAF) is a new dataset that captures real acoustic room data from multiple modalities.
RAF is the first dataset to provide densely captured room acoustic data.
arXiv Detail & Related papers (2024-03-27T17:59:56Z)
- Combating Label Noise With A General Surrogate Model For Sample Selection [84.61367781175984]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
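
The entry above only states the idea of using CLIP as a surrogate model to filter noisy samples; the minimal sketch below illustrates that general idea with the Hugging Face `transformers` CLIP API. The checkpoint name, prompt template, and agreement threshold are assumptions for illustration, not the paper's settings or selection rule.

```python
"""Hypothetical sketch of CLIP-as-surrogate label-noise filtering (not the paper's code)."""
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()


@torch.no_grad()
def keep_mask(image_paths, assigned_labels, class_names, threshold=0.3):
    """Return True for samples where CLIP agrees with the (possibly noisy) label."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    prompts = [f"a photo of a {c}" for c in class_names]
    inputs = processor(text=prompts, images=images, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)  # (num_images, num_classes)
    return [probs[i, class_names.index(label)].item() >= threshold
            for i, label in enumerate(assigned_labels)]
```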
- NASTAR: Noise Adaptive Speech Enhancement with Target-Conditional Resampling [34.565077865854484]
We propose noise adaptive speech enhancement with target-conditional resampling (NASTAR).
NASTAR uses a feedback mechanism to simulate adaptive training data via a noise extractor and a retrieval model.
Experimental results show that NASTAR can effectively use one noisy speech sample to adapt an SE model to a target condition.
arXiv Detail & Related papers (2022-06-18T00:15:48Z)
- Cough Detection Using Selected Informative Features from Audio Signals [24.829135966052142]
The models are trained on a dataset combining the ESC-50 dataset with self-recorded cough recordings.
The best cough detection model achieves an accuracy of 94.9%, a recall of 97.1%, a precision of 93.1%, and an F1-score of 0.95.
arXiv Detail & Related papers (2021-08-07T23:05:18Z)
- Project Achoo: A Practical Model and Application for COVID-19 Detection from Recordings of Breath, Voice, and Cough [55.45063681652457]
We propose a machine learning method to quickly triage COVID-19 using recordings made on consumer devices.
The approach combines signal processing methods with fine-tuned deep learning networks and provides methods for signal denoising, cough detection and classification.
We have also developed and deployed a mobile application that uses a symptom checker together with voice, breath, and cough signals to detect COVID-19 infection.
arXiv Detail & Related papers (2021-07-12T08:07:56Z)
- Exploring Self-Supervised Representation Ensembles for COVID-19 Cough Classification [5.469841541565308]
We propose a novel self-supervised learning enabled framework for COVID-19 cough classification.
A contrastive pre-training phase is introduced to train a Transformer-based feature encoder with unlabelled data.
We show that the proposed contrastive pre-training, the random masking mechanism, and the ensemble architecture contribute to improving cough classification performance.
arXiv Detail & Related papers (2021-05-17T01:27:20Z)
- Discriminative Singular Spectrum Classifier with Applications on Bioacoustic Signal Recognition [67.4171845020675]
We present a bioacoustic signal classifier equipped with a discriminative mechanism to extract useful features for analysis and classification efficiently.
Unlike current bioacoustic recognition methods, which are task-oriented, the proposed model relies on transforming the input signals into vector subspaces.
The validity of the proposed method is verified using three challenging bioacoustic datasets containing anuran, bee, and mosquito species.
arXiv Detail & Related papers (2021-03-18T11:01:21Z)
- Virufy: A Multi-Branch Deep Learning Network for Automated Detection of COVID-19 [1.9899603776429056]
Researchers have successfully presented models for detecting COVID-19 infection status using audio samples recorded in clinical settings.
We propose a multi-branch deep learning network that is trained and tested on crowdsourced data where most of the data has not been manually processed and cleaned.
arXiv Detail & Related papers (2021-03-02T15:31:09Z)
- Detecting COVID-19 from Breathing and Coughing Sounds using Deep Neural Networks [68.8204255655161]
We adapt an ensemble of Convolutional Neural Networks to classify whether a speaker is infected with COVID-19 or not.
Ultimately, it achieves an Unweighted Average Recall (UAR) of 74.9%, or an Area Under the ROC Curve (AUC) of 80.7%, by ensembling neural networks.
arXiv Detail & Related papers (2020-12-29T01:14:17Z)
- Unsupervised Domain Adaptation for Acoustic Scene Classification Using Band-Wise Statistics Matching [69.24460241328521]
Machine learning algorithms can be negatively affected by mismatches between training (source) and test (target) data distributions.
We propose an unsupervised domain adaptation method that consists of aligning the first- and second-order sample statistics of each frequency band of target-domain acoustic scenes to the ones of the source-domain training dataset.
We show that the proposed method outperforms the state-of-the-art unsupervised methods found in the literature in terms of both source- and target-domain classification accuracy.
arXiv Detail & Related papers (2020-04-30T23:56:05Z)
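
For the band-wise statistics matching described in the last entry, the minimal NumPy sketch below illustrates the idea of aligning the first- and second-order statistics of each frequency band of target-domain spectrograms to those of the source-domain training data. The spectrogram shape, the standardise-then-rescale normalisation, and the example values are assumptions; the paper's exact procedure may differ.

```python
"""Hypothetical sketch of band-wise first/second-order statistics matching.

Spectrograms are assumed to be shaped (num_clips, num_bands, num_frames).
"""
import numpy as np


def match_band_statistics(target_specs: np.ndarray,
                          source_specs: np.ndarray,
                          eps: float = 1e-8) -> np.ndarray:
    """Align each frequency band of the target data to the source band statistics."""
    # Per-band mean/std over all clips and frames: shape (num_bands,)
    src_mean = source_specs.mean(axis=(0, 2))
    src_std = source_specs.std(axis=(0, 2))
    tgt_mean = target_specs.mean(axis=(0, 2))
    tgt_std = target_specs.std(axis=(0, 2))

    # Standardise target bands, then rescale to the source-domain statistics.
    z = (target_specs - tgt_mean[None, :, None]) / (tgt_std[None, :, None] + eps)
    return z * src_std[None, :, None] + src_mean[None, :, None]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = rng.normal(0.0, 1.0, size=(32, 40, 100))  # e.g. 40 mel bands
    target = rng.normal(3.0, 2.5, size=(16, 40, 100))  # shifted recording device/domain
    adapted = match_band_statistics(target, source)
    print(adapted.mean(axis=(0, 2))[:3], adapted.std(axis=(0, 2))[:3])
```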
This list is automatically generated from the titles and abstracts of the papers on this site.