Learning Riemannian Manifolds for Geodesic Motion Skills
- URL: http://arxiv.org/abs/2106.04315v1
- Date: Tue, 8 Jun 2021 13:24:54 GMT
- Title: Learning Riemannian Manifolds for Geodesic Motion Skills
- Authors: Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Gerhard Neumann and Leonel Rozo
- Abstract summary: We develop a learning framework that allows robots to learn new skills and adapt them to unseen situations.
We show how geodesic motion skills let a robot plan movements from and to arbitrary points on a data manifold.
We test our learning framework using a 7-DoF robotic manipulator, where the robot satisfactorily learns and reproduces realistic skills featuring elaborated motion patterns.
- Score: 19.305285090233063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For robots to work alongside humans and perform in unstructured environments,
they must learn new motion skills and adapt them to unseen situations on the
fly. This demands learning models that capture relevant motion patterns, while
offering enough flexibility to adapt the encoded skills to new requirements,
such as dynamic obstacle avoidance. We introduce a Riemannian manifold
perspective on this problem, and propose to learn a Riemannian manifold from
human demonstrations on which geodesics are natural motion skills. We realize
this with a variational autoencoder (VAE) over the space of position and
orientations of the robot end-effector. Geodesic motion skills let a robot plan
movements from and to arbitrary points on the data manifold. They also provide
a straightforward method to avoid obstacles by redefining the ambient metric in
an online fashion. Moreover, geodesics naturally exploit the manifold resulting
from multiple-mode tasks to design motions that were not explicitly
demonstrated previously. We test our learning framework using a 7-DoF robotic
manipulator, where the robot satisfactorily learns and reproduces realistic
skills featuring elaborated motion patterns, avoids previously unseen
obstacles, and generates novel movements in multiple-mode settings.
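The abstract outlines a concrete recipe: fit a VAE to demonstrated end-effector poses, pull the ambient metric back through the decoder to obtain a Riemannian metric on the latent space, and plan motions as geodesics, with obstacles handled by inflating the ambient metric online. The sketch below only illustrates that idea under invented assumptions (a toy decoder over 3-D positions instead of full poses, an isotropic obstacle bump, and a simple discretized curve-energy minimizer); it is not the authors' implementation.

```python
# Minimal sketch, not the authors' code: geodesics on a VAE pullback metric,
# with an obstacle-inflated ambient metric. Decoder, sizes, and parameters are
# illustrative assumptions (the paper models full position + orientation).
import torch

torch.manual_seed(0)

# Stand-in for a trained VAE decoder: 2-D latent -> 3-D end-effector position.
decoder = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 3)
)

def ambient_metric(x, x_obs=None, weight=10.0, sigma=0.1):
    """Identity ambient metric, inflated near an obstacle at x_obs (if given)."""
    scale = torch.ones(())
    if x_obs is not None:
        scale = scale + weight * torch.exp(-((x - x_obs) ** 2).sum() / (2 * sigma ** 2))
    return scale * torch.eye(x.shape[-1])

def curve_energy(z_pts, x_obs=None):
    """Discretized curve energy measured in the pullback metric J^T A J."""
    energy = torch.zeros(())
    for a, b in zip(z_pts[:-1], z_pts[1:]):
        z_mid = 0.5 * (a + b)
        J = torch.autograd.functional.jacobian(decoder, z_mid, create_graph=True)  # (3, 2)
        M = J.T @ ambient_metric(decoder(z_mid), x_obs) @ J                         # (2, 2)
        dz = b - a
        energy = energy + dz @ M @ dz
    return energy

def geodesic(z_start, z_goal, n_pts=20, steps=100, x_obs=None):
    """Approximate a geodesic by minimizing curve energy over interior points."""
    t = torch.linspace(0, 1, n_pts).unsqueeze(1)
    interior = (z_start + t * (z_goal - z_start))[1:-1].clone().requires_grad_(True)
    opt = torch.optim.Adam([interior], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        curve_energy(torch.cat([z_start.unsqueeze(0), interior, z_goal.unsqueeze(0)]),
                     x_obs).backward()
        opt.step()
    return torch.cat([z_start.unsqueeze(0), interior.detach(), z_goal.unsqueeze(0)])

# Plan between two latent codes while avoiding a (hypothetical) obstacle at the origin.
path = geodesic(torch.tensor([-1.0, 0.0]), torch.tensor([1.0, 0.0]), x_obs=torch.zeros(3))
waypoints = decoder(path)   # decoded end-effector waypoints, shape (20, 3)
```

Because the obstacle enters only through the ambient metric, the geodesic can be re-optimized when an obstacle appears or moves without retraining the VAE, which matches the abstract's claim of online obstacle avoidance by redefining the ambient metric.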
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a diverse range of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- Guided Decoding for Robot On-line Motion Generation and Adaption [44.959409835754634]
We present a novel motion generation approach for robot arms, with high degrees of freedom, in complex settings that can adapt online to obstacles or new via points.
We train a transformer architecture, based on a conditional variational autoencoder, on a large dataset of simulated trajectories used as demonstrations (a minimal illustrative sketch follows this entry).
We show that our model successfully generates motion from different initial and target points and that it is capable of generating trajectories that navigate complex tasks across different robotic platforms.
arXiv Detail & Related papers (2024-03-22T14:32:27Z)
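The Guided Decoding entry above pairs a conditional variational autoencoder with a transformer trained on simulated demonstrations. The sketch below only illustrates the conditioning pattern: the MLP encoder/decoder stands in for the transformer backbone, and the trajectory length, degrees of freedom, latent size, and start/goal conditioning are assumptions, not details from the paper.

```python
# Illustrative sketch only: a conditional VAE over fixed-length trajectories,
# conditioned on start and goal points. Sizes and architecture are assumed.
import torch
import torch.nn as nn

T, D, LATENT = 32, 7, 16          # trajectory length, DoF, latent size (assumed)

class TrajCVAE(nn.Module):
    def __init__(self):
        super().__init__()
        cond = 2 * D                               # start and goal concatenated
        self.enc = nn.Sequential(nn.Linear(T * D + cond, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT + cond, 256), nn.ReLU(),
                                 nn.Linear(256, T * D))

    def forward(self, traj, start, goal):
        c = torch.cat([start, goal], dim=-1)
        mu, logvar = self.enc(torch.cat([traj.flatten(1), c], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = self.dec(torch.cat([z, c], dim=-1)).view(-1, T, D)
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1).mean()
        return recon, kl

    @torch.no_grad()
    def generate(self, start, goal):
        """Sample a new trajectory for a given start/goal context."""
        c = torch.cat([start, goal], dim=-1)
        z = torch.randn(start.shape[0], LATENT)
        return self.dec(torch.cat([z, c], dim=-1)).view(-1, T, D)

model = TrajCVAE()
traj = torch.randn(8, T, D)                        # fake demonstrations
start, goal = traj[:, 0], traj[:, -1]
recon, kl = model(traj, start, goal)
loss = ((recon - traj) ** 2).mean() + 1e-3 * kl    # reconstruction + KL
```

Sampling different latent codes for the same start/goal context yields different candidate trajectories, which is what makes this kind of generator usable for on-line regeneration when obstacles or via points change.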
- InsActor: Instruction-driven Physics-based Characters [65.4702927454252]
In this paper, we present a principled generative framework that produces instruction-driven animations of physics-based characters.
Our framework empowers InsActor to capture complex relationships between high-level human instructions and character motions.
InsActor achieves state-of-the-art results on various tasks, including instruction-driven motion generation and instruction-driven waypoint heading.
arXiv Detail & Related papers (2023-12-28T17:10:31Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (a small illustrative sketch follows this entry).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
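The reward-learning entry above defines the reward as a distance to the goal observation in an embedding space trained with a time-contrastive objective. The sketch below is a rough illustration of that idea with invented details (toy encoder, margin, and frame sampling); it is not the paper's architecture or training pipeline.

```python
# Rough sketch, not the paper's code: reward = negative distance to a goal image
# in an embedding trained with a triplet-style time-contrastive loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU(),
                      nn.Linear(256, 32))   # toy encoder for 64x64 RGB frames

def time_contrastive_loss(anchor, positive, negative, margin=0.2):
    """Frames nearby in time (anchor/positive) should embed closer together
    than temporally distant frames (negative)."""
    d_pos = (embed(anchor) - embed(positive)).norm(dim=-1)
    d_neg = (embed(anchor) - embed(negative)).norm(dim=-1)
    return F.relu(d_pos - d_neg + margin).mean()

def reward(obs_frame, goal_frame):
    """Task-agnostic reward: negative embedding distance to the goal observation."""
    with torch.no_grad():
        return -(embed(obs_frame) - embed(goal_frame)).norm(dim=-1)

# Usage with fake data: anchor/positive sampled close in time, negative far away.
frames = torch.randn(16, 3, 64, 64)
loss = time_contrastive_loss(frames[:4], frames[1:5], frames[10:14])
r = reward(frames[:1], frames[-1:])
```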
- Learning Riemannian Stable Dynamical Systems via Diffeomorphisms [0.23204178451683263]
Dexterous and autonomous robots should be capable of executing elaborated dynamical motions skillfully.
Learning techniques may be leveraged to build models of such dynamic skills.
To accomplish this, the learning model needs to encode a stable vector field that resembles the desired motion dynamics.
arXiv Detail & Related papers (2022-11-06T16:28:45Z)
- Automatic Acquisition of a Repertoire of Diverse Grasping Trajectories through Behavior Shaping and Novelty Search [0.0]
We introduce an approach to generate diverse grasping movements in order to solve this problem.
The movements are generated in simulation, for particular object positions.
Although we show that generated movements actually work on a real Baxter robot, the aim is to use this method to create a large dataset to bootstrap deep learning methods.
arXiv Detail & Related papers (2022-05-17T09:17:31Z)
- Reactive Motion Generation on Learned Riemannian Manifolds [14.325005233326497]
We show how to generate motion skills based on complicated motion patterns demonstrated by a human operator.
We propose a technique for facilitating on-the-fly end-effector/multiple-limb obstacle avoidance by reshaping the learned manifold.
We extensively tested our approach in task space and joint space scenarios using a 7-DoF robotic manipulator.
arXiv Detail & Related papers (2022-03-15T10:28:16Z)
- A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z)
- Learning to Shift Attention for Motion Generation [55.61994201686024]
One challenge of motion generation using robot learning from demonstration techniques is that human demonstrations follow a distribution with multiple modes for one task query.
Previous approaches fail to capture all modes or tend to average modes of the demonstrations and thus generate invalid trajectories.
We propose a motion generation model with extrapolation ability to overcome this problem.
arXiv Detail & Related papers (2021-02-24T09:07:52Z)