GCCRR: A Short Sequence Gait Cycle Segmentation Method Based on Ear-Worn IMU
- URL: http://arxiv.org/abs/2409.00983v2
- Date: Mon, 30 Sep 2024 10:17:51 GMT
- Title: GCCRR: A Short Sequence Gait Cycle Segmentation Method Based on Ear-Worn IMU
- Authors: Zhenye Xu, Yao Guo
- Abstract summary: This paper addresses the critical task of gait cycle segmentation using short sequences from ear-worn IMUs.
We introduce the Gait Characteristic Curve Regression and Restoration (GCCRR) method, a novel two-stage approach for fine-grained gait phase segmentation.
Our method employs Bi-LSTM-based deep learning algorithms for regression to ensure reliable segmentation for short gait sequences.
- Score: 3.2428847882267404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the critical task of gait cycle segmentation using short sequences from ear-worn IMUs, a practical and non-invasive approach for home-based monitoring and rehabilitation of patients with impaired motor function. While previous studies have focused on IMUs positioned on the lower limbs, ear-worn IMUs offer a unique advantage in capturing gait dynamics with minimal intrusion. To address the challenges of gait cycle segmentation using short sequences, we introduce the Gait Characteristic Curve Regression and Restoration (GCCRR) method, a novel two-stage approach designed for fine-grained gait phase segmentation. The first stage transforms the segmentation task into a regression task on the Gait Characteristic Curve (GCC), which is a one-dimensional feature sequence incorporating periodic information. The second stage restores the gait cycle using peak detection techniques. Our method employs Bi-LSTM-based deep learning algorithms for regression to ensure reliable segmentation for short gait sequences. Evaluation on the HamlynGait dataset demonstrates that GCCRR achieves over 80% Accuracy, with a Timestamp Error below one sampling interval. Despite its promising results, the performance lags behind methods using more extensive sensor systems, highlighting the need for larger, more diverse datasets. Future work will focus on data augmentation using motion capture systems and improving algorithmic generalizability.
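As a rough illustration of the second stage, the restoration step can be sketched with off-the-shelf peak detection. This is a generic sketch, not the paper's implementation: the sampling rate, minimum cycle duration, and the synthetic GCC below are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def restore_gait_cycles(gcc, fs=100.0, min_cycle_s=0.6):
    """Recover gait-cycle boundaries from a regressed Gait
    Characteristic Curve (GCC) via peak detection.

    Assumes the GCC is a 1-D periodic curve that peaks once per
    gait cycle; fs and min_cycle_s are hypothetical values used
    only to suppress spurious peaks.
    """
    min_dist = int(min_cycle_s * fs)
    peaks, _ = find_peaks(gcc, distance=min_dist)
    # Consecutive peaks delimit one gait cycle each.
    cycles = list(zip(peaks[:-1], peaks[1:]))
    return peaks, cycles

# Synthetic stand-in for a regressed GCC: a 1 Hz sinusoid
# sampled at 100 Hz for 5 seconds (five strides).
t = np.arange(0, 5, 0.01)
gcc = np.sin(2 * np.pi * 1.0 * t)
peaks, cycles = restore_gait_cycles(gcc, fs=100.0)
```

A real pipeline would feed the Bi-LSTM's per-timestep GCC prediction into this step in place of the synthetic sinusoid.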
Related papers
- Cell as Point: One-Stage Framework for Efficient Cell Tracking [54.19259129722988]
This paper proposes the novel end-to-end CAP framework to achieve efficient and stable cell tracking in one stage.
CAP abandons detection or segmentation stages and simplifies the process by exploiting the correlation among the trajectories of cell points to track cells jointly.
CAP demonstrates strong cell tracking performance while also being 10 to 55 times more efficient than existing methods.
arXiv Detail & Related papers (2024-11-22T10:16:35Z) - Direct Cardiac Segmentation from Undersampled K-space Using Transformers [10.079819435628579]
We introduce a novel approach to deriving segmentations from sparse k-space samples using a transformer (DiSK).
Our model consistently outperforms the baselines in Dice and Hausdorff distances across foreground classes for all presented sampling rates.
arXiv Detail & Related papers (2024-05-31T20:54:12Z) - Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN)
arXiv Detail & Related papers (2023-10-23T01:25:44Z) - Robust Fully-Asynchronous Methods for Distributed Training over General Architecture [11.480605289411807]
Perfect synchronization in distributed machine learning problems is inefficient and even impossible due to the existence of latency, package losses and stragglers.
We propose the Robust Fully-Asynchronous Stochastic Gradient Tracking method (R-FAST), where each device performs local computation and communication at its own pace, without any form of synchronization.
arXiv Detail & Related papers (2023-07-21T14:36:40Z) - Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, i.e., BGS-IMC, which considers continuous random Gaussian noise in the graph Fourier domain.
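The ratings-to-graph-signal idea can be sketched with the combinatorial Laplacian: project a user's rating vector onto the Laplacian eigenbasis of an item-item graph to obtain its graph Fourier coefficients. The tiny graph below is a made-up example, not from the paper.

```python
import numpy as np

def graph_fourier(ratings, adjacency):
    """Treat one user's ratings as a signal on the vertices of an
    item-item graph and return its graph Fourier coefficients,
    i.e. the projection onto the Laplacian eigenbasis."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency          # combinatorial Laplacian
    _, eigvecs = np.linalg.eigh(laplacian)  # orthonormal eigenbasis
    return eigvecs.T @ ratings              # spectral coefficients

# Hypothetical 3-item graph: item 1 is similar to items 0 and 2.
adjacency = np.array([[0.0, 1.0, 0.0],
                      [1.0, 0.0, 1.0],
                      [0.0, 1.0, 0.0]])
ratings = np.array([5.0, 4.0, 1.0])  # one user's ratings as a signal
coeffs = graph_fourier(ratings, adjacency)
```

Because the eigenbasis is orthonormal, the transform preserves the signal's energy (Parseval), which is what makes modelling Gaussian noise directly in the spectral domain convenient.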
arXiv Detail & Related papers (2023-02-08T08:17:43Z) - Longitudinal detection of new MS lesions using Deep Learning [0.0]
We describe a deep-learning-based pipeline addressing the task of detecting and segmenting new MS lesions.
First, we propose to use transfer-learning from a model trained on a segmentation task using single time-points.
Second, we propose a data synthesis strategy to generate realistic longitudinal time-points with new lesions.
arXiv Detail & Related papers (2022-06-16T16:09:04Z) - Randomized Stochastic Gradient Descent Ascent [37.887266927498395]
An increasing number of machine learning problems, such as robust or adversarial variants of existing algorithms, require minimizing a loss function that is itself defined as a maximum, i.e., solving a min-max problem.
We propose RSGDA (Randomized SGDA), a variant of ESGDA with a random loop size and a simpler theoretical analysis.
arXiv Detail & Related papers (2021-11-25T16:44:19Z) - Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms [69.45237691598774]
We study the problem of least squares linear regression where the data-points are dependent and are sampled from a Markov chain.
We establish sharp information-theoretic minimax lower bounds for this problem in terms of the mixing time $\tau_{\mathsf{mix}}$.
We propose an algorithm based on experience replay, a popular reinforcement learning technique, that achieves a significantly better error rate.
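The experience-replay idea can be sketched as follows. This is a generic illustration of replay decorrelating Markovian samples, not the paper's algorithm; the AR(1) stream and all hyperparameters are invented for the demo.

```python
import random
import numpy as np

def sgd_with_replay(stream, dim, buffer_size=500, batch=8,
                    lr=0.01, steps=2000):
    """Least-squares SGD on a Markovian data stream with an
    experience-replay buffer: gradient steps use uniformly drawn
    past samples, which decorrelates consecutive updates."""
    w = np.zeros(dim)
    buf = []
    for _ in range(steps):
        buf.append(next(stream))
        if len(buf) > buffer_size:
            buf.pop(0)
        # Step on a random minibatch from the buffer rather than
        # on the (correlated) freshest sample.
        for x, y in random.sample(buf, min(batch, len(buf))):
            w -= lr * (x @ w - y) * x
    return w

def markov_stream(seed=0):
    """Toy slowly-mixing AR(1) covariate chain with noiseless
    linear targets, w* = [1, -2]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=2)
    w_star = np.array([1.0, -2.0])
    while True:
        x = 0.9 * x + 0.1 * rng.normal(size=2)
        yield x, x @ w_star

w_hat = sgd_with_replay(markov_stream(), dim=2)
```

Sampling uniformly from the buffer makes consecutive gradient steps nearly independent even though the underlying chain mixes slowly.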
arXiv Detail & Related papers (2020-06-16T04:26:50Z) - MLE-guided parameter search for task loss minimization in neural sequence modeling [83.83249536279239]
Neural autoregressive sequence models are used to generate sequences in a variety of natural language processing (NLP) tasks.
We propose maximum likelihood guided parameter search (MGS), which samples from a distribution over update directions that is a mixture of random search around the current parameters and around the maximum likelihood gradient.
Our experiments show that MGS is capable of optimizing sequence-level losses, with substantial reductions in repetition and non-termination in sequence completion, and similar improvements to those of minimum risk training in machine translation.
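A minimal sketch of the sampling idea behind MGS, assuming a simple quadratic stand-in for both the task loss and the MLE objective; the mixture weights, step sizes, and candidate count are illustrative, not the paper's.

```python
import numpy as np

def mgs_step(params, ml_grad, loss_fn, n_candidates=8,
             sigma=0.1, rng=None):
    """One search step in the spirit of MGS: draw candidate updates
    from a mixture of random search around the current parameters
    and random search around the MLE gradient step, then keep the
    candidate with the lowest task loss (never worse than staying)."""
    if rng is None:
        rng = np.random.default_rng()
    best, best_loss = params, loss_fn(params)
    for _ in range(n_candidates):
        noise = sigma * rng.normal(size=params.shape)
        if rng.random() < 0.5:
            cand = params - sigma * ml_grad + noise  # gradient-guided
        else:
            cand = params + noise                    # pure random search
        cand_loss = loss_fn(cand)
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best

# Toy demo: a quadratic stands in for both the task loss and the
# MLE objective, so the "MLE gradient" is simply 2 * (w - 1).
task_loss = lambda w: float(np.sum((w - 1.0) ** 2))
w = np.zeros(4)
rng = np.random.default_rng(0)
for _ in range(300):
    w = mgs_step(w, 2 * (w - 1.0), task_loss, rng=rng)
```

Because the best candidate is only accepted when it improves the task loss, the search is monotone, while the gradient-centred half of the mixture keeps it from degenerating into blind random search.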
arXiv Detail & Related papers (2020-06-04T22:21:22Z) - Longitudinal Deep Kernel Gaussian Process Regression [16.618767289437905]
We introduce Longitudinal deep kernel Gaussian process regression (L-DKGPR).
L-DKGPR automates the discovery of complex multilevel correlation structure from longitudinal data.
We derive an efficient algorithm to train L-DKGPR using latent space inducing points and variational inference.
arXiv Detail & Related papers (2020-05-24T15:10:48Z) - Learning to Optimize Non-Rigid Tracking [54.94145312763044]
We employ learnable optimizations to improve robustness and speed up solver convergence.
First, we upgrade the tracking objective by integrating an alignment data term on deep features which are learned end-to-end through a CNN.
Second, we bridge the gap between the preconditioning technique and learning method by introducing a ConditionNet which is trained to generate a preconditioner.
arXiv Detail & Related papers (2020-03-27T04:40:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.