Reactive Motion Generation on Learned Riemannian Manifolds
- URL: http://arxiv.org/abs/2203.07761v2
- Date: Thu, 17 Aug 2023 16:05:39 GMT
- Title: Reactive Motion Generation on Learned Riemannian Manifolds
- Authors: Hadi Beik-Mohammadi, Søren Hauberg, Georgios Arvanitidis, Gerhard
Neumann, Leonel Rozo
- Abstract summary: We show how to generate motion skills based on complicated motion patterns demonstrated by a human operator.
We propose a technique for facilitating on-the-fly end-effector/multiple-limb obstacle avoidance by reshaping the learned manifold.
We extensively tested our approach in task space and joint space scenarios using a 7-DoF robotic manipulator.
- Score: 14.325005233326497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent decades, advancements in motion learning have enabled robots to
acquire new skills and adapt to unseen conditions in both structured and
unstructured environments. In practice, motion learning methods capture
relevant patterns and adjust them to new conditions such as dynamic obstacle
avoidance or variable targets. In this paper, we investigate the robot motion
learning paradigm from a Riemannian manifold perspective. We argue that
Riemannian manifolds may be learned via human demonstrations in which geodesics
are natural motion skills. The geodesics are generated using a learned
Riemannian metric produced by our novel variational autoencoder (VAE), which is
specifically designed to recover full-pose end-effector states and joint space
configurations. In addition, we propose a technique for facilitating on-the-fly
end-effector/multiple-limb obstacle avoidance by reshaping the learned manifold
using an obstacle-aware ambient metric. The motion generated using these
geodesics may naturally result in multiple-solution tasks that have not been
explicitly demonstrated previously. We extensively tested our approach in task
space and joint space scenarios using a 7-DoF robotic manipulator. We
demonstrate that our method is capable of learning and generating motion skills
based on complicated motion patterns demonstrated by a human operator.
Additionally, we assess several obstacle avoidance strategies and generate
trajectories in multiple-mode settings.
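A minimal sketch of the core idea, assuming a toy PyTorch decoder: the latent space inherits a pullback metric G(z) = J(z)^T A(f(z)) J(z) from the decoder f and an obstacle-aware ambient metric A, and a geodesic is approximated by minimizing the discrete curve energy between two latent codes. All names (Decoder, ambient_metric, the Gaussian obstacle inflation, hyperparameters) are illustrative assumptions, not the authors' implementation; in the paper the VAE metric additionally accounts for decoder uncertainty.

```python
# Hedged sketch: geodesics under a learned pullback metric with an
# obstacle-aware ambient metric. Illustrative only, not the paper's code.
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Toy stand-in for a trained VAE decoder: 2-D latent code -> 3-D end-effector position."""
    def __init__(self, latent_dim=2, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                 nn.Linear(64, out_dim))
    def forward(self, z):
        return self.net(z)

def ambient_metric(x, obstacle_center, radius=0.2, alpha=50.0):
    """Obstacle-aware ambient metric A(x): identity inflated near the obstacle,
    so curves passing close to it become 'long' and geodesics bend away."""
    d = torch.norm(x - obstacle_center)
    scale = 1.0 + alpha * torch.exp(-(d / radius) ** 2)
    return scale * torch.eye(x.shape[-1])

def pullback_metric(decoder, z, obstacle_center):
    """G(z) = J(z)^T A(f(z)) J(z): the metric the latent space inherits from the decoder."""
    J = torch.autograd.functional.jacobian(decoder, z, create_graph=True)  # (out_dim, latent_dim)
    A = ambient_metric(decoder(z), obstacle_center)
    return J.T @ A @ J

def discrete_geodesic(decoder, z_start, z_goal, obstacle_center,
                      n_points=20, n_iters=300, lr=1e-2):
    """Approximate a geodesic by optimizing the interior points of a discretized
    latent curve to minimize sum_i dz_i^T G(z_i) dz_i with fixed endpoints."""
    t = torch.linspace(0, 1, n_points).unsqueeze(1)
    interior = (z_start + t * (z_goal - z_start))[1:-1].clone().requires_grad_(True)
    opt = torch.optim.Adam([interior], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        curve = torch.cat([z_start.unsqueeze(0), interior, z_goal.unsqueeze(0)])
        energy = 0.0
        for i in range(len(curve) - 1):
            dz = curve[i + 1] - curve[i]
            G = pullback_metric(decoder, curve[i], obstacle_center)
            energy = energy + dz @ G @ dz
        energy.backward()
        opt.step()
    return torch.cat([z_start.unsqueeze(0), interior.detach(), z_goal.unsqueeze(0)])

if __name__ == "__main__":
    torch.manual_seed(0)
    decoder = Decoder()                      # in practice: the trained VAE decoder
    z0, z1 = torch.zeros(2), torch.ones(2)   # start / goal latent codes
    obstacle = torch.tensor([0.3, 0.3, 0.3]) # hypothetical obstacle position in task space
    path = discrete_geodesic(decoder, z0, z1, obstacle)
    print(path.shape)                        # (20, 2) latent waypoints, decoded into a motion
```

Because the obstacle only enters through the ambient metric, moving it at run time reshapes the geodesics without retraining the VAE, which is what enables the on-the-fly avoidance described in the abstract.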
Related papers
- Guided Decoding for Robot On-line Motion Generation and Adaption [44.959409835754634]
We present a novel motion generation approach for robot arms, with high degrees of freedom, in complex settings that can adapt online to obstacles or new via points.
We train a transformer architecture, based on conditional variational autoencoder, on a large dataset of simulated trajectories used as demonstrations.
We show that our model successfully generates motion from different initial and target points and that it is capable of generating trajectories that navigate complex tasks across different robotic platforms.
arXiv Detail & Related papers (2024-03-22T14:32:27Z) - FLD: Fourier Latent Dynamics for Structured Motion Representation and
Learning [19.491968038335944]
We introduce a self-supervised, structured representation and generation method that extracts spatial-temporal relationships in periodic or quasi-periodic motions.
Our work opens new possibilities for future advancements in general motion representation and learning algorithms.
arXiv Detail & Related papers (2024-02-21T13:59:21Z) - Learning Riemannian Stable Dynamical Systems via Diffeomorphisms [0.23204178451683263]
Dexterous and autonomous robots should be capable of skillfully executing elaborate dynamical motions.
Learning techniques may be leveraged to build models of such dynamic skills.
To accomplish this, the learning model needs to encode a stable vector field that resembles the desired motion dynamics.
arXiv Detail & Related papers (2022-11-06T16:28:45Z) - MoDi: Unconditional Motion Synthesis from Diverse Data [51.676055380546494]
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z) - Next Steps: Learning a Disentangled Gait Representation for Versatile
Quadruped Locomotion [69.87112582900363]
Current planners are unable to vary key gait parameters continuously while the robot is in motion.
In this work we address this limitation by learning a latent space capturing the key stance phases constituting a particular gait.
We demonstrate that specific properties of the drive signal map directly to gait parameters such as cadence, foot step height and full stance duration.
arXiv Detail & Related papers (2021-12-09T10:02:02Z) - Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z) - Learning Riemannian Manifolds for Geodesic Motion Skills [19.305285090233063]
We develop a learning framework that allows robots to learn new skills and adapt them to unseen situations.
We show how geodesic motion skills let a robot plan movements from and to arbitrary points on a data manifold.
We test our learning framework using a 7-DoF robotic manipulator, where the robot satisfactorily learns and reproduces realistic skills featuring elaborated motion patterns.
arXiv Detail & Related papers (2021-06-08T13:24:54Z) - Learning to Shift Attention for Motion Generation [55.61994201686024]
One challenge of motion generation using robot learning from demonstration techniques is that human demonstrations follow a distribution with multiple modes for one task query.
Previous approaches fail to capture all modes or tend to average modes of the demonstrations and thus generate invalid trajectories.
We propose a motion generation model with extrapolation ability to overcome this problem.
arXiv Detail & Related papers (2021-02-24T09:07:52Z) - ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for
Mobile Manipulation [99.2543521972137]
ReLMoGen is a framework that combines a learned policy to predict subgoals and a motion generator to plan and execute the motion needed to reach these subgoals.
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.
ReLMoGen shows outstanding transferability between different motion generators at test time, indicating a great potential to transfer to real robots.
arXiv Detail & Related papers (2020-08-18T08:05:15Z) - Learning to Move with Affordance Maps [57.198806691838364]
The ability to autonomously explore and navigate a physical space is a fundamental requirement for virtually any mobile autonomous agent.
Traditional SLAM-based approaches for exploration and navigation largely focus on leveraging scene geometry.
We show that learned affordance maps can be used to augment traditional approaches for both exploration and navigation, providing significant improvements in performance.
arXiv Detail & Related papers (2020-01-08T04:05:11Z)