Wearable-based Classification of Running Styles with Deep Learning
- URL: http://arxiv.org/abs/2109.00594v1
- Date: Wed, 1 Sep 2021 19:55:06 GMT
- Title: Wearable-based Classification of Running Styles with Deep Learning
- Authors: Setareh Rahimi Taghanaki, Michael Rainbow, Ali Etemad
- Abstract summary: We develop a system capable of classifying running styles using wearables.
Five wearable devices are used to record accelerometer data from different parts of the lower body.
We show that the proposed model is capable of automatically classifying different running styles in a subject-dependent manner.
- Score: 8.422257363944295
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Automatic classification of running styles can enable runners to obtain
feedback with the aim of optimizing performance in terms of minimizing energy
expenditure, fatigue, and risk of injury. To develop a system capable of
classifying running styles using wearables, we collect a dataset from 10
healthy runners performing 8 different pre-defined running styles. Five
wearable devices are used to record accelerometer data from different parts of
the lower body, namely left and right foot, left and right medial tibia, and
lower back. Using the collected dataset, we develop a deep learning solution
which consists of a Convolutional Neural Network and Long Short-Term Memory
network to first automatically extract effective features, followed by learning
temporal relationships. Score-level fusion is used to aggregate the
classification results from the different sensors. Experiments show that the
proposed model is capable of automatically classifying different running styles
in a subject-dependent manner, outperforming several classical machine learning
methods (following manual feature extraction) and a convolutional neural
network baseline. Moreover, our study finds that subject-independent
classification of running styles is considerably more challenging than a
subject-dependent scheme, indicating a high level of personalization in such
running styles. Finally, we demonstrate that by fine-tuning the model with as
few as 5% subject-specific samples, a considerable performance boost is obtained.
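The score-level fusion step described in the abstract can be sketched as averaging per-class probability scores across the five sensors before taking the argmax. The sensor count matches the paper, but the class subset, probability values, and function names below are illustrative assumptions, not taken from the paper's implementation.

```python
# Score-level fusion sketch: average per-class probability scores
# from the five accelerometer placements, then pick the argmax class.

def fuse_scores(per_sensor_scores):
    """Average class-probability vectors from each sensor; return (label, fused)."""
    n_sensors = len(per_sensor_scores)
    n_classes = len(per_sensor_scores[0])
    fused = [
        sum(scores[c] for scores in per_sensor_scores) / n_sensors
        for c in range(n_classes)
    ]
    return fused.index(max(fused)), fused

# Hypothetical softmax outputs for 3 of the 8 running styles, one vector
# per sensor (left/right foot, left/right medial tibia, lower back).
scores = [
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
    [0.4, 0.4, 0.2],
    [0.6, 0.3, 0.1],
]
label, fused = fuse_scores(scores)
```

Averaging probabilities (rather than fusing raw features or hard votes) lets a confident sensor outvote an ambiguous one while keeping the fused vector a valid distribution.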
Related papers
- A Two-Phase Recall-and-Select Framework for Fast Model Selection [13.385915962994806]
We propose a two-phase (coarse-recall and fine-selection) model selection framework.
It aims to enhance the efficiency of selecting a robust model by leveraging the models' training performances on benchmark datasets.
It has been demonstrated that the proposed methodology facilitates the selection of a high-performing model about 3x faster than conventional baseline methods.
arXiv Detail & Related papers (2024-03-28T14:44:44Z)
- A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning [131.2910403490434]
Data scientists typically collect as many features as possible into their datasets, and even engineer new features from existing ones.
Existing benchmarks for tabular feature selection consider classical downstream models, toy synthetic datasets, or do not evaluate feature selectors on the basis of downstream performance.
We construct a challenging feature selection benchmark evaluated on downstream neural networks including transformers.
We also propose an input-gradient-based analogue of Lasso for neural networks that outperforms classical feature selection methods on challenging problems.
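The input-gradient idea above can be illustrated with a simplified variant: rank features by the mean absolute gradient of the model output with respect to each input. The paper's method additionally imposes a Lasso-style L1 penalty on such gradients during training; the toy model, sample values, and helper names below are purely illustrative assumptions.

```python
# Simplified sketch of input-gradient feature importance (a reduced
# form of the input-gradient Lasso idea: here we only rank features
# by gradient magnitude, without the L1 training penalty).

def numerical_input_gradient(f, x, eps=1e-6):
    """Central-difference gradient of scalar function f at point x."""
    grads = []
    for i in range(len(x)):
        xp = list(x)
        xm = list(x)
        xp[i] += eps
        xm[i] -= eps
        grads.append((f(xp) - f(xm)) / (2 * eps))
    return grads

# Toy model depending strongly on feature 0, weakly on feature 2,
# and not at all on feature 1 (all coefficients illustrative).
def model(x):
    return 5.0 * x[0] + 0.1 * x[2]

samples = [[0.5, 2.0, -1.0], [1.5, 0.3, 0.7]]
importance = [0.0, 0.0, 0.0]
for x in samples:
    g = numerical_input_gradient(model, x)
    for i, gi in enumerate(g):
        importance[i] += abs(gi) / len(samples)

# Features ordered from most to least important.
ranked = sorted(range(3), key=lambda i: -importance[i])
```

In practice the gradients would come from autodiff rather than finite differences; the ranking logic is the same.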
arXiv Detail & Related papers (2023-11-10T05:26:10Z)
- Continual Learning in Open-vocabulary Classification with Complementary Memory Systems [19.337633598158778]
We introduce a method for flexible and efficient continual learning in open-vocabulary image classification.
We combine predictions from a CLIP zero-shot model and the exemplar-based model, using the zero-shot estimated probability that a sample's class is within the exemplar classes.
We also propose a "tree probe" method, an adaptation of lazy learning principles, which enables fast learning from new examples with accuracy competitive with batch-trained linear models.
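The combination rule in this summary can be sketched as a simple probability mixture: weight the exemplar-based prediction by the zero-shot estimated probability that the sample's class is among the exemplar classes. The function name and the probability values below are illustrative assumptions, not from the paper's code.

```python
# Sketch of combining CLIP zero-shot and exemplar-based predictions:
# p_in_exemplar is the (zero-shot estimated) probability that the
# sample's class is within the exemplar classes.

def combine(zero_shot_probs, exemplar_probs, p_in_exemplar):
    """Mix two class-probability vectors with weight p_in_exemplar."""
    return [
        p_in_exemplar * e + (1.0 - p_in_exemplar) * z
        for z, e in zip(zero_shot_probs, exemplar_probs)
    ]

# Illustrative two-class example: the two models disagree, and the
# mixture falls between them according to p_in_exemplar.
mixed = combine([0.2, 0.8], [0.9, 0.1], p_in_exemplar=0.5)
```

When p_in_exemplar is near 1 the exemplar model dominates (a class it has seen examples of); near 0, the zero-shot model does.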
arXiv Detail & Related papers (2023-07-04T01:47:34Z)
- Revisiting Classifier: Transferring Vision-Language Models for Video Recognition [102.93524173258487]
Transferring knowledge from task-agnostic pre-trained deep models for downstream tasks is an important topic in computer vision research.
In this study, we focus on transferring knowledge for video classification tasks.
We utilize the well-pretrained language model to generate good semantic targets for efficient transfer learning.
arXiv Detail & Related papers (2022-07-04T10:00:47Z)
- Feature matching as improved transfer learning technique for wearable EEG [9.508350808051908]
We propose a new transfer learning strategy as an alternative to the commonly used finetuning approach.
This method consists of training a model with larger amounts of data from the source modality and few paired samples of source and target modality.
We compare feature matching to finetuning for three different target domains, with two different neural network architectures, and with varying amounts of training data.
arXiv Detail & Related papers (2021-12-29T12:07:42Z)
- The Devil Is in the Details: An Efficient Convolutional Neural Network for Transport Mode Detection [3.008051369744002]
Transport mode detection is a classification problem aiming to design an algorithm that can infer the transport mode of a user given multimodal signals.
We show that a small, optimized model can perform as well as a current deep model.
arXiv Detail & Related papers (2021-09-16T08:05:47Z)
- Partner-Assisted Learning for Few-Shot Image Classification [54.66864961784989]
Few-shot Learning has been studied to mimic human visual capabilities and learn effective models without the need of exhaustive human annotation.
In this paper, we focus on the design of training strategy to obtain an elemental representation such that the prototype of each novel class can be estimated from a few labeled samples.
We propose a two-stage training scheme, which first trains a partner encoder to model pair-wise similarities and extract features serving as soft-anchors, and then trains a main encoder by aligning its outputs with soft-anchors while attempting to maximize classification performance.
arXiv Detail & Related papers (2021-09-15T22:46:19Z)
- STAR: Sparse Transformer-based Action Recognition [61.490243467748314]
This work proposes a novel skeleton-based human action recognition model with sparse attention on the spatial dimension and segmented linear attention on the temporal dimension of data.
Experiments show that our model achieves comparable performance while using far fewer trainable parameters, with high speed in training and inference.
arXiv Detail & Related papers (2021-07-15T02:53:11Z)
- Semi-Supervised Few-Shot Classification with Deep Invertible Hybrid Models [4.189643331553922]
We propose a deep invertible hybrid model which integrates discriminative and generative learning at a latent space level for semi-supervised few-shot classification.
Our main originality lies in our integration of these components at a latent space level, which is effective in preventing overfitting.
arXiv Detail & Related papers (2021-05-22T05:55:16Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
- A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.