Learning a Shared Model for Motorized Prosthetic Joints to Predict
Ankle-Joint Motion
- URL: http://arxiv.org/abs/2111.07419v1
- Date: Sun, 14 Nov 2021 19:02:40 GMT
- Title: Learning a Shared Model for Motorized Prosthetic Joints to Predict
Ankle-Joint Motion
- Authors: Sharmita Dey, Sabri Boughorbel, Arndt F. Schilling
- Abstract summary: We propose a learning-based shared model for predicting ankle-joint motion for different locomotion modes.
We show that the shared model is adequate for predicting the ankle angles and moments for different locomotion modes without explicitly classifying between the modes.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Control strategies for active prostheses or orthoses use sensor inputs to
recognize the user's locomotive intention and generate corresponding control
commands for producing the desired locomotion. In this paper, we propose a
learning-based shared model for predicting ankle-joint motion for different
locomotion modes like level-ground walking, stair ascent, stair descent, slope
ascent, and slope descent without the need to classify between them. Features
extracted from hip and knee joint angular motion are used to continuously
predict the ankle angles and moments using a Feed-Forward Neural Network-based
shared model. We show that the shared model is adequate for predicting the
ankle angles and moments for different locomotion modes without explicitly
classifying between the modes. The proposed strategy shows the potential for
devising a high-level controller for an intelligent prosthetic ankle that can
adapt to different locomotion modes.
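The abstract describes a single feed-forward network that maps features of hip- and knee-joint angular motion to ankle angle and moment across all locomotion modes, with no mode classifier. Below is a minimal PyTorch sketch of such a shared regressor; the layer sizes, feature dimensionality, and training details are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a shared feed-forward regressor (assumed architecture,
# not the authors' exact configuration).
import torch
import torch.nn as nn

class SharedAnkleModel(nn.Module):
    """Maps hip/knee kinematic features to ankle angle and moment.

    One set of weights serves all locomotion modes (level walking,
    stair and slope ascent/descent) without an explicit mode classifier.
    """
    def __init__(self, n_features: int = 8, n_hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, 2),  # outputs: [ankle angle, ankle moment]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Example usage: the features could be hip/knee angles and angular
# velocities over a short window (an assumption for illustration).
model = SharedAnkleModel(n_features=8)
features = torch.randn(32, 8)        # batch of 32 feature vectors
angle_moment = model(features)       # shape: (32, 2)
loss = nn.functional.mse_loss(angle_moment, torch.randn(32, 2))
```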
Related papers
- Continual Imitation Learning for Prosthetic Limbs [0.7922558880545526]
Motorized bionic limbs offer promise, but their utility depends on mimicking the evolving synergy of human movement in various settings.
We present a novel model for bionic prosthesis applications that leverages camera-based motion capture and wearable sensor data.
We propose a model that can multitask, adapt continually, anticipate movements, and refine locomotion.
arXiv Detail & Related papers (2024-05-02T09:22:54Z)
- InterControl: Zero-shot Human Interaction Generation by Controlling Every Joint [67.6297384588837]
We introduce a novel controllable motion generation method, InterControl, to encourage the synthesized motions to maintain the desired distance between joint pairs.
We demonstrate that the desired distances between joint pairs for human interactions can be generated using an off-the-shelf Large Language Model.
arXiv Detail & Related papers (2023-11-27T14:32:33Z)
- STMT: A Spatial-Temporal Mesh Transformer for MoCap-Based Action Recognition [50.064502884594376]
We study the problem of human action recognition using motion capture (MoCap) sequences.
We propose a novel Spatial-Temporal Mesh Transformer (STMT) to directly model the mesh sequences.
The proposed method achieves state-of-the-art performance compared to skeleton-based and point-cloud-based models.
arXiv Detail & Related papers (2023-03-31T16:19:27Z)
- Learning Policies for Continuous Control via Transition Models [2.831332389089239]
In robot control, moving an arm's end-effector to a target position or along a target trajectory requires accurate forward and inverse models.
We show that by learning the transition (forward) model from interaction, we can use it to drive the learning of an amortized policy.
arXiv Detail & Related papers (2022-09-16T16:23:48Z)
- Weakly-supervised Action Transition Learning for Stochastic Human Motion Prediction [81.94175022575966]
We introduce the task of action-driven human motion prediction.
It aims to predict multiple plausible future motions given a sequence of action labels and a short motion history.
arXiv Detail & Related papers (2022-05-31T08:38:07Z)
- VAE-Loco: Versatile Quadruped Locomotion by Learning a Disentangled Gait Representation [78.92147339883137]
We show that learning a latent space capturing the key stance phases that constitute a particular gait is pivotal for increasing controller robustness.
We demonstrate that specific properties of the drive signal map directly to gait parameters such as cadence, footstep height and full stance duration.
The use of a generative model facilitates the detection and mitigation of disturbances to provide a versatile and robust planning framework.
arXiv Detail & Related papers (2022-05-02T19:49:53Z)
- An Adaptable Approach to Learn Realistic Legged Locomotion without Examples [38.81854337592694]
This work proposes a generic approach for ensuring realism in locomotion by guiding the learning process with the spring-loaded inverted pendulum (SLIP) model as a reference (a sketch of this model appears after this list).
We present experimental results showing that even in a model-free setup, the learned policies can generate realistic and energy-efficient locomotion gaits for a bipedal and a quadrupedal robot.
arXiv Detail & Related papers (2021-10-28T10:14:47Z)
- GLiDE: Generalizable Quadrupedal Locomotion in Diverse Environments with a Centroidal Model [18.66472547798549]
We show how model-free reinforcement learning can be effectively used with a centroidal model to generate robust control policies for quadrupedal locomotion.
We show the potential of the method by demonstrating stepping-stone locomotion, two-legged in-place balance, balance beam locomotion, and sim-to-real transfer without further adaptations.
arXiv Detail & Related papers (2021-04-20T05:55:13Z)
- Bidirectional Interaction between Visual and Motor Generative Models using Predictive Coding and Active Inference [68.8204255655161]
We propose a neural architecture comprising a generative model for sensory prediction, and a distinct generative model for motor trajectories.
We highlight how sequences of sensory predictions can act as rails guiding learning, control and online adaptation of motor trajectories.
arXiv Detail & Related papers (2021-04-19T09:41:31Z) - Reinforcement Learning for Robust Parameterized Locomotion Control of
Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics (a sketch of this idea appears after this list).
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
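The entry on realistic legged locomotion uses the spring-loaded inverted pendulum (SLIP) as its reference model. For readers unfamiliar with it, the following is a minimal sketch of the stance-phase SLIP dynamics in polar coordinates; the mass, stiffness, and touchdown state are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of the spring-loaded inverted pendulum (SLIP) model;
# parameter values below are assumptions for illustration only.
import numpy as np

def slip_stance_derivatives(state, m=80.0, k=20000.0, r0=1.0, g=9.81):
    """Stance-phase SLIP dynamics in polar coordinates.

    state = [r, r_dot, theta, theta_dot], with the foot pinned at the
    origin, r the leg length, and theta the leg angle from vertical.
    """
    r, r_dot, theta, theta_dot = state
    # Radial: centrifugal term, gravity component, spring force.
    r_ddot = r * theta_dot**2 - g * np.cos(theta) + (k / m) * (r0 - r)
    # Angular: gravity torque and Coriolis coupling.
    theta_ddot = (g * np.sin(theta) - 2.0 * r_dot * theta_dot) / r
    return np.array([r_dot, r_ddot, theta_dot, theta_ddot])

# Simple fixed-step Euler integration of a short stance phase (illustrative).
state = np.array([1.0, -0.1, 0.2, -1.0])  # example [r, r_dot, theta, theta_dot] at touchdown
dt = 1e-4
for _ in range(2000):
    state = state + dt * slip_stance_derivatives(state)
```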
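The last entry relies on domain randomization over system dynamics. As a rough illustration of that idea (the parameter names and ranges below are assumptions, not values from the paper), each training episode can sample perturbed physics parameters so the policy cannot overfit to a single simulator configuration:

```python
# Illustrative sketch of domain randomization over dynamics parameters.
# The parameter names and ranges are assumptions for illustration only.
import random

def sample_randomized_dynamics():
    """Draw one set of perturbed physics parameters for a training episode."""
    return {
        "link_mass_scale":    random.uniform(0.8, 1.2),  # +/-20% mass error
        "joint_friction":     random.uniform(0.0, 0.1),
        "ground_friction":    random.uniform(0.5, 1.2),
        "motor_torque_scale": random.uniform(0.9, 1.1),
        "sensor_latency_ms":  random.uniform(0.0, 20.0),
    }

# Each episode trains under a different draw, so the policy must learn
# behaviors that remain stable across the whole parameter distribution.
for episode in range(3):
    params = sample_randomized_dynamics()
    print(f"episode {episode}: {params}")
    # env.reset(dynamics=params); run rollout; update policy ...
```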