Knowledge-Based Deep Learning for Time-Efficient Inverse Dynamics
- URL: http://arxiv.org/abs/2412.05403v1
- Date: Fri, 06 Dec 2024 20:12:52 GMT
- Title: Knowledge-Based Deep Learning for Time-Efficient Inverse Dynamics
- Authors: Shuhao Ma, Yu Cao, Ian D. Robertson, Chaoyang Shi, Jindong Liu, Zhi-Qiang Zhang
- Abstract summary: We propose a knowledge-based deep learning framework for time-efficient inverse dynamic analysis.
The BiGRU neural network is selected as the backbone of our model due to its proficient handling of time-series data.
The experimental results have shown that the selected BiGRU architecture outperforms other neural network models when trained using our specifically designed loss function.
- Score: 5.78355428732981
- Abstract: Accurate understanding of muscle activation and muscle forces plays an essential role in neuro-rehabilitation and musculoskeletal disorder treatments. Computational musculoskeletal modeling has been widely used as a powerful non-invasive tool to estimate them through inverse dynamics using static optimization, but the inherent computational complexity results in time-consuming analysis. In this paper, we propose a knowledge-based deep learning framework for time-efficient inverse dynamic analysis, which can predict muscle activation and muscle forces from joint kinematic data directly while not requiring any label information during model training. The Bidirectional Gated Recurrent Unit (BiGRU) neural network is selected as the backbone of our model due to its proficient handling of time-series data. Prior physical knowledge from forward dynamics and pre-selected inverse dynamics based physiological criteria are integrated into the loss function to guide the training of neural networks. Experimental validations on two datasets, including one benchmark upper limb movement dataset and one self-collected lower limb movement dataset from six healthy subjects, are performed. The experimental results have shown that the selected BiGRU architecture outperforms other neural network models when trained using our specifically designed loss function, which illustrates the effectiveness and robustness of the proposed framework.
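To make the training idea concrete, the sketch below gives one plausible PyTorch reading of the abstract: a BiGRU maps joint-kinematics sequences to muscle activations, and a label-free loss combines a forward-dynamics consistency term with a static-optimization style physiological effort term. The layer sizes, the `forward_dynamics` surrogate, the `observed_torque` signal, and the weight `w_physio` are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch, assuming a differentiable forward-dynamics surrogate is available.
import torch
import torch.nn as nn


class BiGRUInverseDynamics(nn.Module):
    """BiGRU backbone mapping joint kinematics to muscle activations in [0, 1]."""

    def __init__(self, n_kinematics: int, n_muscles: int, hidden: int = 64):
        super().__init__()
        self.bigru = nn.GRU(n_kinematics, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_muscles)

    def forward(self, kinematics: torch.Tensor) -> torch.Tensor:
        # kinematics: (batch, time, n_kinematics) -> activations: (batch, time, n_muscles)
        features, _ = self.bigru(kinematics)
        return torch.sigmoid(self.head(features))


def knowledge_based_loss(activations: torch.Tensor,
                         kinematics: torch.Tensor,
                         forward_dynamics,          # assumed differentiable surrogate
                         observed_torque: torch.Tensor,
                         w_physio: float = 1e-2) -> torch.Tensor:
    """Label-free loss: (i) torques reconstructed from predicted activations by the
    forward-dynamics surrogate should match the torques implied by the measured
    kinematics, and (ii) a physiological criterion (sum of squared activations, as
    in static optimization) keeps the solution plausible."""
    predicted_torque = forward_dynamics(activations, kinematics)
    dynamics_term = torch.mean((predicted_torque - observed_torque) ** 2)
    physio_term = torch.mean(activations ** 2)
    return dynamics_term + w_physio * physio_term
```

In the paper's setting, the forward-dynamics term would come from a musculoskeletal model (muscle-tendon and joint geometry), so training needs only kinematic inputs; this is what removes the need for labeled muscle activations or forces.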
Related papers
- Physics-informed Deep Learning for Muscle Force Prediction with Unlabeled sEMG Signals [4.382876444149811]
This paper presents a physics-informed deep learning method to predict muscle forces without any label information during model training.
In addition, the proposed method could also identify personalized muscle-tendon parameters.
The predicted results of muscle forces show comparable or even lower root mean square error (RMSE) and higher coefficient of determination compared with baseline methods.
arXiv Detail & Related papers (2024-12-05T14:47:38Z)
- Neural Dynamics Model of Visual Decision-Making: Learning from Human Experts [28.340344705437758]
We implement a comprehensive visual decision-making model that spans from visual input to behavioral output.
Our model aligns closely with human behavior and reflects neural activities in primates.
A neuroimaging-informed fine-tuning approach was introduced and applied to the model, leading to performance improvements.
arXiv Detail & Related papers (2024-09-04T02:38:52Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE)
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- The bionic neural network for external simulation of human locomotor system [2.6311880922890842]
This paper proposes a physics-informed deep learning method based on musculoskeletal (MSK) modeling to predict joint motion and muscle forces.
The method can effectively identify subject-specific MSK physiological parameters and the trained physics-informed forward-dynamics surrogate yields accurate motion and muscle forces predictions.
arXiv Detail & Related papers (2023-09-11T23:02:56Z)
- A Physics-Informed Low-Shot Learning For sEMG-Based Estimation of Muscle Force and Joint Kinematics [4.878073267556235]
Muscle force and joint kinematics estimation from surface electromyography (sEMG) are essential for real-time biomechanical analysis.
Recent advances in deep neural networks (DNNs) have shown the potential to improve biomechanical analysis in a fully automated and reproducible manner.
This paper presents a novel physics-informed low-shot learning method for sEMG-based estimation of muscle force and joint kinematics.
arXiv Detail & Related papers (2023-07-08T23:01:12Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
- Evaluating deep transfer learning for whole-brain cognitive decoding [11.898286908882561]
Transfer learning (TL) is well-suited to improve the performance of deep learning (DL) models in datasets with small numbers of samples.
Here, we evaluate TL for the application of DL models to the decoding of cognitive states from whole-brain functional Magnetic Resonance Imaging (fMRI) data.
arXiv Detail & Related papers (2021-11-01T15:44:49Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)