Neural Dynamic Movement Primitives -- a survey
- URL: http://arxiv.org/abs/2208.01903v1
- Date: Wed, 3 Aug 2022 08:11:08 GMT
- Title: Neural Dynamic Movement Primitives -- a survey
- Authors: Jože M. Rožanec, Bojan Nemec
- Abstract summary: The ability to provide such motion control is closely related to how such movements are encoded.
Deep learning has strongly influenced the development of novel approaches for Dynamic Movement Primitives.
- Score: 3.644868888022173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the most important challenges in robotics is producing accurate
trajectories and controlling their dynamic parameters so that the robots can
perform different tasks. The ability to provide such motion control is closely
related to how such movements are encoded. Advances in deep learning have
strongly influenced the development of novel approaches to Dynamic Movement
Primitives. In this work, we survey the scientific literature on Neural
Dynamic Movement Primitives, to complement existing surveys on Dynamic Movement
Primitives.
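For reference, the classical discrete DMP that these neural variants build on couples a spring-damper transformation system with a learned forcing term driven by a decaying phase variable. Below is a minimal one-dimensional sketch with standard Ijspeert-style gains; the RBF forcing model and all numbers are illustrative, not taken from any surveyed paper:

```python
import numpy as np

# Minimal 1-D discrete DMP, for illustration only.
# Transformation system: tau*z' = az*(bz*(g - y) - z) + f(x),  tau*y' = z
# Canonical system:      tau*x' = -ax*x   (phase decays from 1 toward 0)
def rollout(w, centers, widths, y0, g, tau=1.0, dt=0.01, az=25.0, bz=6.25, ax=1.0):
    x, y, z = 1.0, y0, 0.0
    traj = []
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)          # RBF activations
        f = x * (g - y0) * (psi @ w) / (psi.sum() + 1e-10)  # forcing term
        z += dt / tau * (az * (bz * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-ax * x)
        traj.append(y)
    return np.array(traj)

# With zero weights the system is a plain point attractor toward g;
# in practice w is fit to a demonstration to shape the transient.
w = np.zeros(10)
centers = np.linspace(1.0, 0.0, 10)  # RBF centers along the phase variable
widths = np.full(10, 50.0)
path = rollout(w, centers, widths, y0=0.0, g=1.0)
```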
Related papers
- Deep Learning for Koopman-based Dynamic Movement Primitives [0.0]
We propose a novel approach that joins the theories of Koopman operators and Dynamic Movement Primitives for Learning from Demonstration.
Our approach projects nonlinear dynamical systems into linear latent spaces such that a solution reproduces the desired complex motion.
Our results are comparable to Extended Dynamic Mode Decomposition on the LASA Handwriting dataset, while training on only a small fraction of the letters.
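The core idea of projecting nonlinear dynamics into a linear latent space can be illustrated with a bare-bones Extended Dynamic Mode Decomposition: lift states through a dictionary of observables and fit a linear one-step operator by least squares. The fixed polynomial dictionary and synthetic data below are stand-ins for the paper's learned encoder:

```python
import numpy as np

def lift(X):
    # Fixed polynomial dictionary as a stand-in for a learned encoder.
    # The first two lifted coordinates are the state itself.
    return np.hstack([X, X**2, X[:, :1] * X[:, 1:2]])

# Snapshot pairs (x_k, x_{k+1}); here from a made-up nonlinear map.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
Y = np.stack([0.9 * X[:, 0], 0.8 * X[:, 1] + 0.1 * X[:, 0] ** 2], axis=1)

Phi_x, Phi_y = lift(X), lift(Y)
# Koopman matrix K: least-squares solution of Phi_x @ K ≈ Phi_y.
K, *_ = np.linalg.lstsq(Phi_x, Phi_y, rcond=None)

# Multi-step prediction is linear in the lifted space; z[:, :2]
# reads the predicted state back from the linear part of the dictionary.
z = lift(X[:1])
for _ in range(5):
    z = z @ K
```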
arXiv Detail & Related papers (2023-12-06T07:33:22Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- DROP: Dynamics Responses from Human Motion Prior and Projective Dynamics [21.00283279991885]
We introduce DROP, a novel framework for modeling Dynamics Responses of humans using generative mOtion prior and Projective dynamics.
We conduct extensive evaluations of our model across different motion tasks and various physical perturbations, demonstrating the scalability and diversity of responses.
arXiv Detail & Related papers (2023-09-24T20:25:59Z)
- Persistent-Transient Duality: A Multi-mechanism Approach for Modeling Human-Object Interaction [58.67761673662716]
Humans are highly adaptable, swiftly switching between different modes to handle different tasks, situations and contexts.
In Human-Object Interaction (HOI) activities, these modes can be attributed to two mechanisms: (1) the large-scale consistent plan for the whole activity and (2) the small-scale child interactive actions that start and end along the timeline.
This work proposes to model two concurrent mechanisms that jointly control human motion.
arXiv Detail & Related papers (2023-07-24T12:21:33Z)
- Deep Probabilistic Movement Primitives with a Bayesian Aggregator [4.796643369294991]
Movement primitives are trainable parametric models that reproduce robotic movements starting from a limited set of demonstrations.
This paper proposes a deep movement primitive architecture that encodes all the operations above and uses a Bayesian context aggregator.
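Bayesian context aggregation, as opposed to simple mean-pooling, fuses per-demonstration latent estimates into a single Gaussian belief via precision weighting. A sketch with made-up encoder outputs (the encoder itself and the latent dimensionality are placeholders, not the paper's architecture):

```python
import numpy as np

def bayesian_aggregate(mu_i, var_i, mu0=0.0, var0=1.0):
    """Fuse per-observation latent estimates (mu_i, var_i) into one Gaussian
    belief over the task latent, weighting each estimate by its precision."""
    prec = 1.0 / var0 + np.sum(1.0 / var_i, axis=0)
    mean = (mu0 / var0 + np.sum(mu_i / var_i, axis=0)) / prec
    return mean, 1.0 / prec

# Three context points, 4-D latent; in practice these come from an encoder net.
rng = np.random.default_rng(1)
mu_i = rng.normal(size=(3, 4))
var_i = rng.uniform(0.1, 1.0, size=(3, 4))
mean, var = bayesian_aggregate(mu_i, var_i)
# Unlike mean-pooling, confident observations dominate the estimate and the
# posterior variance shrinks as more context arrives.
```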
arXiv Detail & Related papers (2023-07-11T09:34:15Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as DanceTrack and SportsMOT.
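In its simplest form, a learnable motion predictor regresses an object's next bounding box from its recent track history; the toy network below is a stand-in for MotionTrack's actual predictor, with invented sizes:

```python
import torch
import torch.nn as nn

HISTORY, BOX_DIM = 8, 4  # last 8 observed boxes as (cx, cy, w, h); toy sizes

# Toy motion predictor: flatten the track history and regress the next box.
predictor = nn.Sequential(
    nn.Flatten(),
    nn.Linear(HISTORY * BOX_DIM, 64), nn.ReLU(),
    nn.Linear(64, BOX_DIM),
)

track_history = torch.randn(1, HISTORY, BOX_DIM)
next_box = predictor(track_history)  # used to associate tracks with detections
```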
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
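Parameterizing motion by the temporal coordinate alone amounts to an implicit neural representation: a network maps continuous time to a pose, so the motion can be queried at any frame rate. A toy sketch (layer sizes and pose layout are invented; the actual method also conditions on the task and objects):

```python
import torch
import torch.nn as nn

# Implicit motion representation: map a continuous time t in [0, 1]
# directly to a pose vector.
pose_dim = 24 * 3  # e.g. 24 joints x 3 rotation params; placeholder layout
motion_field = nn.Sequential(
    nn.Linear(1, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, pose_dim),
)

t = torch.linspace(0, 1, 120).unsqueeze(-1)  # 120 query timestamps
poses = motion_field(t)                      # (120, pose_dim), continuous in t
```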
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
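The style term comes from a discriminator trained to distinguish reference-motion transitions from the policy's, whose output is mapped to a bounded reward in the least-squares GAN form used by AMP. A sketch with placeholder feature sizes and a placeholder discriminator network:

```python
import torch
import torch.nn as nn

def style_reward(disc, s, s_next):
    """AMP-style reward from a discriminator d = D(s, s'), trained toward +1
    on reference-motion transitions and -1 on policy transitions (least-
    squares GAN); the resulting reward is bounded in [0, 1]."""
    d = disc(torch.cat([s, s_next], dim=-1))
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

# Placeholder discriminator over concatenated (s, s') transition features.
disc = nn.Sequential(nn.Linear(2 * 32, 64), nn.ReLU(), nn.Linear(64, 1))
s, s_next = torch.randn(1, 32), torch.randn(1, 32)
r_style = style_reward(disc, s, s_next)
# The full objective blends this with the task reward, e.g.
# r = w_task * r_task + w_style * r_style, with weights as tuning choices.
```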
arXiv Detail & Related papers (2021-04-05T22:43:14Z)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
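Concretely, an NDP embeds a dynamical system inside the policy: the network predicts DMP parameters (forcing weights and a goal) from the observation, and integrating that system produces the action trajectory. A sketch reusing the rollout() helper from the DMP example above, with an invented observation encoder:

```python
import numpy as np
import torch
import torch.nn as nn

n_basis = 10
encoder = nn.Sequential(  # invented sizes; maps observation features to DMP params
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, n_basis + 1),  # n_basis forcing weights + 1 goal
)

obs = torch.randn(64)
params = encoder(obs)
w, g = params[:n_basis].detach().numpy(), params[n_basis].item()

# Integrate the DMP (rollout() from the earlier sketch) to get a smooth
# trajectory; in the paper the integration is differentiable end to end.
centers = np.linspace(1.0, 0.0, n_basis)
widths = np.full(n_basis, 50.0)
trajectory = rollout(w, centers, widths, y0=0.0, g=g)
```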
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
- Action2Motion: Conditioned Generation of 3D Human Motions [28.031644518303075]
We aim to generate plausible human motion sequences in 3D.
Each sampled sequence faithfully resembles natural human body articulation dynamics.
A new 3D human motion dataset, HumanAct12, is also constructed.
arXiv Detail & Related papers (2020-07-30T05:29:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.