IMU Based Deep Stride Length Estimation With Self-Supervised Learning
- URL: http://arxiv.org/abs/2205.02977v1
- Date: Fri, 6 May 2022 01:48:39 GMT
- Title: IMU Based Deep Stride Length Estimation With Self-Supervised Learning
- Authors: Jien-De Sui and Tian-Sheuan Chang
- Abstract summary: This paper proposes a single convolutional neural network (CNN) model that predicts the stride length of running and walking and classifies the stride type (running or walking) per stride.
The proposed model achieves a 4.78% average percent error on running and walking stride length regression and 99.83% accuracy on running and walking classification, improving on the previous approach.
- Score: 0.1246030133914898
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Stride length estimation using inertial measurement unit (IMU) sensors has
recently gained popularity, as stride length is a representative gait parameter for
health care and sports training. Traditional estimation methods require explicit
calibration and design assumptions, while current deep learning methods suffer from
the scarcity of labeled data. To address both problems, this paper proposes a single
convolutional neural network (CNN) model that predicts the stride length of running
and walking and classifies the stride type (running or walking) per stride. The model
is first trained on a pretext task with self-supervised learning over a large
unlabeled dataset to learn features, and then on the downstream stride length
estimation and classification tasks with supervised learning on a small labeled
dataset. The proposed model achieves an average percent error of 4.78% on running
and walking stride length regression and 99.83% accuracy on running and walking
classification, compared to a 7.44% stride length error for the previous approach.
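The two-stage recipe described in the abstract maps onto a short PyTorch sketch. Everything concrete below is an assumption for illustration: the window length, the toy 1D-CNN encoder, and in particular the reconstruction pretext task, which stands in for whatever pretext objective the paper actually uses.

```python
import torch
import torch.nn as nn

class IMUEncoder(nn.Module):
    """Toy 1D CNN over windows of 6-axis IMU data (3-axis accel + 3-axis gyro)."""
    def __init__(self, in_ch=6, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):  # x: (batch, 6, T)
        return self.net(x)

T = 200                                  # samples per stride window (assumed)
encoder = IMUEncoder()

# Stage 1: self-supervised pretext task on the large unlabeled set.
# Here: reconstruct the raw window from the learned feature (an assumption;
# the paper's actual pretext task may differ).
decoder = nn.Linear(64, 6 * T)
pre_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
unlabeled = torch.randn(32, 6, T)        # placeholder for real IMU windows
recon = decoder(encoder(unlabeled)).view(-1, 6, T)
pre_loss = nn.functional.mse_loss(recon, unlabeled)
pre_opt.zero_grad(); pre_loss.backward(); pre_opt.step()

# Stage 2: supervised fine-tuning on the small labeled set, with two heads
# sharing the pretrained encoder.
reg_head = nn.Linear(64, 1)              # stride length regression
cls_head = nn.Linear(64, 2)              # 0 = walking, 1 = running
ft_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(reg_head.parameters()) + list(cls_head.parameters()),
    lr=1e-4)
x, length, kind = torch.randn(16, 6, T), torch.rand(16, 1) * 2, torch.randint(0, 2, (16,))
feat = encoder(x)
loss = (nn.functional.mse_loss(reg_head(feat), length)
        + nn.functional.cross_entropy(cls_head(feat), kind))
ft_opt.zero_grad(); loss.backward(); ft_opt.step()
```

The point of the structure is that the encoder weights learned in stage 1 are reused in stage 2, so the small labeled set only has to fit the two lightweight heads plus a gentle encoder fine-tune.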
Related papers
- Few-Shot Load Forecasting Under Data Scarcity in Smart Grids: A Meta-Learning Approach [0.18641315013048293]
This paper proposes adapting an established model-agnostic meta-learning algorithm for short-term load forecasting.
The proposed method can rapidly adapt and generalize to unseen load time series of arbitrary length (a minimal sketch follows this entry).
The proposed model is evaluated using a dataset of historical load consumption data from real-world consumers.
arXiv Detail & Related papers (2024-06-09T18:59:08Z) - A Lightweight Measure of Classification Difficulty from Application Dataset Characteristics [4.220363193932374]
- A Lightweight Measure of Classification Difficulty from Application Dataset Characteristics [4.220363193932374]
We propose an efficient cosine similarity-based classification difficulty measure S.
It is calculated from the number of classes and intra- and inter-class similarity metrics of the dataset.
We show how a practitioner can use this measure to select an efficient model 6 to 29x faster than by repeated training and testing (sketched below).
arXiv Detail & Related papers (2024-04-09T03:27:09Z) - KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training [2.8804804517897935]
- KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training [2.8804804517897935]
We propose a method for hiding the least-important samples during the training of deep neural networks.
We adaptively find samples to exclude in a given epoch based on their contribution to the overall learning process.
Our method can reduce total training time by up to 22% while impacting accuracy by only 0.4% compared to the baseline (a simplified sketch follows this entry).
arXiv Detail & Related papers (2023-10-16T06:19:29Z) - ASPEST: Bridging the Gap Between Active Learning and Selective
Prediction [56.001808843574395]
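The core move, sketched below under simplifying assumptions, is to rank samples by how little they currently contribute (here: lowest recent loss) and drop that fraction for the epoch; the actual method also tracks prediction history and adapts the hidden fraction, which this sketch omits.

```python
import torch

def visible_indices(per_sample_loss, hide_fraction=0.2):
    """Keep the hardest samples this epoch; hide the easiest hide_fraction.
    per_sample_loss: (N,) tensor of the latest loss for each training sample."""
    n_hide = int(len(per_sample_loss) * hide_fraction)
    order = torch.argsort(per_sample_loss)     # ascending: easiest first
    return order[n_hide:]                      # indices to actually train on

# Per epoch: refresh tracked per-sample losses, then build the epoch's
# DataLoader from only the visible subset.
losses = torch.rand(10_000)                    # placeholder for tracked losses
subset = visible_indices(losses, hide_fraction=0.2)
print(f"training on {len(subset)} of {len(losses)} samples this epoch")
```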
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain (a schematic sketch follows this entry).
arXiv Detail & Related papers (2023-04-07T23:51:07Z) - Efficient human-in-loop deep learning model training with iterative
refinement and statistical result validation [0.0]
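A schematic of the abstain-and-query skeleton behind active selective prediction, using a plain softmax confidence; ASPEST itself builds on checkpoint ensembles and self-training, so treat the threshold and acquisition rule below as illustrative assumptions.

```python
import torch

def predict_or_query(logits, accept_thresh=0.8, budget=10):
    """Predict when confident, abstain otherwise, and send the least
    confident target samples to the labeling oracle."""
    probs = torch.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    accept = conf >= accept_thresh            # where we commit to a prediction
    to_label = torch.argsort(conf)[:budget]   # most informative -> oracle
    return pred, accept, to_label

logits = torch.randn(100, 5)                  # model outputs on the target domain
pred, accept, to_label = predict_or_query(logits)
print(f"predicted on {accept.sum().item()}/100, querying {len(to_label)} labels")
```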
- Efficient human-in-loop deep learning model training with iterative refinement and statistical result validation [0.0]
We demonstrate a method for creating segmentations, a necessary part of data cleaning for ultrasound imaging machine learning pipelines.
We propose a four-step method to leverage automatically generated training data and fast human visual checks to improve model accuracy while keeping the time/effort and cost low.
The method is demonstrated on a cardiac ultrasound segmentation task, removing background data including static PHI (protected health information).
arXiv Detail & Related papers (2023-04-03T13:56:01Z) - A Meta-Learning Approach to Predicting Performance and Data Requirements [163.4412093478316]
We propose an approach to estimate the number of samples required for a model to reach a target performance.
We find that the power law, the de facto principle for estimating model performance, leads to a large error when using a small dataset.
We introduce a novel piecewise power law (PPL) that handles the two data regimes differently (a toy fit is sketched below).
arXiv Detail & Related papers (2023-03-02T21:48:22Z) - Imputation-Free Learning from Incomplete Observations [73.15386629370111]
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to perform inference directly from inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods (a toy sketch of the imputation-free setup follows this entry).
arXiv Detail & Related papers (2021-07-05T12:44:39Z) - Fast Uncertainty Quantification for Deep Object Pose Estimation [91.09217713805337]
- Fast Uncertainty Quantification for Deep Object Pose Estimation [91.09217713805337]
Deep learning-based object pose estimators are often unreliable and overconfident.
In this work, we propose a simple, efficient, and plug-and-play UQ method for 6-DoF object pose estimation (a minimal disagreement-based sketch follows this entry).
arXiv Detail & Related papers (2020-11-16T06:51:55Z) - Uncertainty-aware Self-training for Text Classification with Few Labels [54.13279574908808]
- Uncertainty-aware Self-training for Text Classification with Few Labels [54.13279574908808]
We study self-training as one of the earliest semi-supervised learning approaches to reduce the annotation bottleneck.
We propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network.
We show that our methods, leveraging only 20-30 labeled samples per class per task for training and validation, perform within 3% of fully supervised pre-trained language models (an MC-dropout sketch follows this entry).
arXiv Detail & Related papers (2020-06-27T08:13:58Z) - Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries (a simplified sketch follows this entry).
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z) - Deep Learning and Statistical Models for Time-Critical Pedestrian
Behaviour Prediction [5.593571255686115]
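A simplified version of confidence-weighted transductive prototype refinement: here the per-query weights come from a fixed softmax over distances, whereas the paper meta-learns the confidence function that produces them.

```python
import torch

def refine_prototypes(protos, queries, temp=1.0):
    """Blend each class prototype with a confidence-weighted mean of the
    unlabeled query embeddings. protos: (C, D); queries: (Q, D)."""
    dists = torch.cdist(queries, protos)               # (Q, C)
    w = torch.softmax(-dists / temp, dim=1)            # per-query class weights
    weighted_sum = w.T @ queries                       # (C, D)
    counts = w.sum(dim=0)[:, None]                     # (C, 1) soft counts
    return (protos + weighted_sum) / (1.0 + counts)    # support proto keeps weight 1

protos = torch.randn(5, 16)      # one prototype per class from the support set
queries = torch.randn(30, 16)    # unlabeled query embeddings
print(refine_prototypes(protos, queries).shape)        # torch.Size([5, 16])
```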
- Deep Learning and Statistical Models for Time-Critical Pedestrian Behaviour Prediction [5.593571255686115]
We show that, though the neural network model achieves an accuracy of 80%, it requires long sequences (100 samples or more) to do so.
The SLDS has a lower accuracy of 74%, but achieves this result with short sequences (10 samples).
The results provide key intuition about the suitability of each model for time-critical problems.
arXiv Detail & Related papers (2020-02-26T00:05:19Z)