Predictive Modeling of Periodic Behavior for Human-Robot Symbiotic Walking
- URL: http://arxiv.org/abs/2005.13139v1
- Date: Wed, 27 May 2020 03:30:48 GMT
- Title: Predictive Modeling of Periodic Behavior for Human-Robot Symbiotic Walking
- Authors: Geoffrey Clark, Joseph Campbell, Seyed Mostafa Rezayat Sorkhabadi,
Wenlong Zhang, Heni Ben Amor
- Abstract summary: We extend Interaction Primitives to periodic movement regimes, i.e., walking.
We show that this model is particularly well-suited for learning data-driven, customized models of human walking.
We also demonstrate how the same framework can be used to learn controllers for a robotic prosthesis.
- Score: 13.68799310875662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose in this paper Periodic Interaction Primitives - a probabilistic
framework that can be used to learn compact models of periodic behavior. Our
approach extends existing formulations of Interaction Primitives to periodic
movement regimes, i.e., walking. We show that this model is particularly
well-suited for learning data-driven, customized models of human walking, which
can then be used for generating predictions over future states or for inferring
latent, biomechanical variables. We also demonstrate how the same framework can
be used to learn controllers for a robotic prosthesis using an imitation
learning approach. Results in experiments with human participants indicate that
Periodic Interaction Primitives efficiently generate predictions and ankle
angle control signals for a robotic prosthetic ankle, with MAE of 2.21 degrees
in 0.0008 s per inference. Performance degrades gracefully in the presence of
noise or sensor fall-outs. Compared to alternatives, the algorithm runs
20 times faster and performs 4.5 times more accurately on test subjects.
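To make the idea concrete, here is a minimal sketch of the core of a periodic movement primitive: a gait variable (e.g., ankle angle) is encoded as a weighted sum of periodic basis functions of the gait phase, weights are fit from a demonstrated cycle, and the learned weights are then queried at any future phase to generate predictions. This is an illustrative simplification, not the authors' full probabilistic formulation, which additionally maintains distributions over weights and phase to support conditioning and uncertainty estimates; the function names and the synthetic trajectory are hypothetical.

```python
import numpy as np

def fourier_basis(phase, n_harmonics=4):
    """Periodic basis features for a gait phase in [0, 1)."""
    k = np.arange(1, n_harmonics + 1)
    return np.concatenate(([1.0],
                           np.sin(2 * np.pi * k * phase),
                           np.cos(2 * np.pi * k * phase)))

def fit_weights(phases, values, n_harmonics=4, ridge=1e-6):
    """Ridge-regularized least-squares fit of basis weights to one cycle."""
    Phi = np.stack([fourier_basis(p, n_harmonics) for p in phases])
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ values)

def predict(weights, phase, n_harmonics=4):
    """Predicted value of the gait variable at an arbitrary phase."""
    return fourier_basis(phase, n_harmonics) @ weights

# Fit to a synthetic periodic "ankle angle" trajectory over one gait cycle.
phases = np.linspace(0.0, 1.0, 200, endpoint=False)
angles = 10.0 * np.sin(2 * np.pi * phases) + 3.0 * np.cos(4 * np.pi * phases)
w = fit_weights(phases, angles)

# Query the learned primitive half a cycle ahead of phase 0.25.
pred = predict(w, 0.75)
```

Because the model is indexed by phase rather than time, the same learned weights generate predictions across gait cycles of varying duration, which is what makes the representation suitable for online prosthesis control.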
Related papers
- Spatial-Temporal Graph Diffusion Policy with Kinematic Modeling for Bimanual Robotic Manipulation [88.83749146867665]
Existing approaches learn a policy to predict a distant next-best end-effector pose.
They then compute the corresponding joint rotation angles for motion using inverse kinematics.
We propose Kinematics enhanced Spatial-TemporAl gRaph diffuser.
arXiv Detail & Related papers (2025-03-13T17:48:35Z)
- Reciprocal Learning of Intent Inferral with Augmented Visual Feedback for Stroke [2.303526979876375]
We propose a bidirectional paradigm that facilitates human adaptation to an intent inferral classifier.
We demonstrate this paradigm in the context of controlling a robotic hand orthosis for stroke.
Our experiments with stroke subjects show that reciprocal learning improves performance in a subset of subjects without negatively impacting performance for the others.
arXiv Detail & Related papers (2024-12-10T22:49:36Z)
- Learning Speed-Adaptive Walking Agent Using Imitation Learning with Physics-Informed Simulation [0.0]
We create a skeletal humanoid agent capable of adapting to varying walking speeds while maintaining biomechanically realistic motions.
The framework combines a synthetic data generator, which produces biomechanically plausible gait kinematics from open-source biomechanics data, and a training system that uses adversarial imitation learning to train the agent's walking policy.
arXiv Detail & Related papers (2024-12-05T07:55:58Z)
- Unified Dynamic Scanpath Predictors Outperform Individually Trained Neural Models [18.327960366321655]
We develop a deep learning-based social cue integration model for saliency prediction to predict scanpaths in videos.
We evaluate our approach on gaze recorded over dynamic social scenes under the free-viewing condition.
Results indicate that a single unified model, trained on all the observers' scanpaths, performs on par or better than individually trained models.
arXiv Detail & Related papers (2024-05-05T13:15:11Z)
- Continual Learning from Simulated Interactions via Multitask Prospective Rehearsal for Bionic Limb Behavior Modeling [0.7922558880545526]
We introduce a model for human behavior in the context of bionic prosthesis control.
We propose a multitasking, continually adaptive model that anticipates and refines movements over time.
We validate our model through experiments on real-world human gait datasets, including transtibial amputees.
arXiv Detail & Related papers (2024-05-02T09:22:54Z)
- Neural Interaction Energy for Multi-Agent Trajectory Prediction [55.098754835213995]
We introduce a framework called Multi-Agent Trajectory prediction via neural interaction Energy (MATE).
MATE assesses the interactive motion of agents by employing neural interaction energy.
To bolster temporal stability, we introduce two constraints: inter-agent interaction constraint and intra-agent motion constraint.
arXiv Detail & Related papers (2024-04-25T12:47:47Z)
- A Framework for Realistic Simulation of Daily Human Activity [1.8877825068318652]
This paper presents a framework for simulating daily human activity patterns in home environments at scale.
We introduce a method for specifying day-to-day variation in schedules and present a bidirectional constraint propagation algorithm for generating schedules from templates.
arXiv Detail & Related papers (2023-11-26T19:50:23Z)
- Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF).
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
arXiv Detail & Related papers (2023-06-09T18:40:55Z)
- Recognition and Prediction of Surgical Gestures and Trajectories Using Transformer Models in Robot-Assisted Surgery [10.719885390990433]
Transformer models were first developed for Natural Language Processing (NLP) to model word sequences.
We propose the novel use of a Transformer model for three tasks: gesture recognition, gesture prediction, and trajectory prediction during RAS.
arXiv Detail & Related papers (2022-12-03T20:26:48Z)
- Active Uncertainty Learning for Human-Robot Interaction: An Implicit Dual Control Approach [5.05828899601167]
We present an algorithmic approach to enable uncertainty learning for human-in-the-loop motion planning based on the implicit dual control paradigm.
Our approach relies on a sampling-based approximation of the dynamic programming model predictive control problem.
The resulting policy is shown to preserve the dual control effect for generic human predictive models with both continuous and categorical uncertainty.
arXiv Detail & Related papers (2022-02-15T20:40:06Z)
- Probabilistic Human Motion Prediction via A Bayesian Neural Network [71.16277790708529]
We propose a probabilistic model for human motion prediction in this paper.
Our model can generate several future motions given an observed motion sequence.
We extensively validate our approach on the large-scale benchmark dataset Human3.6M.
arXiv Detail & Related papers (2021-07-14T09:05:33Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Risk-Sensitive Sequential Action Control with Multi-Modal Human Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure.
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
arXiv Detail & Related papers (2020-09-12T02:02:52Z)
- Multimodal Deep Generative Models for Trajectory Prediction: A Conditional Variational Autoencoder Approach [34.70843462687529]
We provide a self-contained tutorial on a conditional variational autoencoder approach to human behavior prediction.
The goals of this tutorial paper are to review and build a taxonomy of state-of-the-art methods in human behavior prediction.
arXiv Detail & Related papers (2020-08-10T03:18:27Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.