Riemannian Flow Matching Policy for Robot Motion Learning
- URL: http://arxiv.org/abs/2403.10672v2
- Date: Tue, 27 Aug 2024 11:13:43 GMT
- Title: Riemannian Flow Matching Policy for Robot Motion Learning
- Authors: Max Braun, Noémie Jaquier, Leonel Rozo, Tamim Asfour
- Abstract summary: We introduce a novel model for learning and synthesizing robot visuomotor policies.
We show that RFMP provides smoother action trajectories with significantly lower inference times.
- Score: 5.724027955589408
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Riemannian Flow Matching Policies (RFMP), a novel model for learning and synthesizing robot visuomotor policies. RFMP leverages the efficient training and inference capabilities of flow matching methods. By design, RFMP inherits the strengths of flow matching: the ability to encode high-dimensional multimodal distributions, commonly encountered in robotic tasks, and a very simple and fast inference process. We demonstrate the applicability of RFMP to both state-based and vision-conditioned robot motion policies. Notably, as the robot state resides on a Riemannian manifold, RFMP inherently incorporates geometric awareness, which is crucial for realistic robotic tasks. To evaluate RFMP, we conduct two proof-of-concept experiments, comparing its performance against Diffusion Policies. Although both approaches successfully learn the considered tasks, our results show that RFMP provides smoother action trajectories with significantly lower inference times.
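For readers unfamiliar with flow matching, the sketch below illustrates the kind of conditional flow matching training and few-step inference the abstract refers to. It is a minimal Euclidean toy example under stated assumptions, not the authors' implementation: the network architecture, dimensions, and number of integration steps are illustrative, and a true Riemannian variant would replace the linear interpolant and difference below with geodesic interpolation and exponential/logarithmic maps on the state manifold.
```python
# Minimal sketch of (Euclidean) conditional flow matching for a visuomotor
# policy. All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Predicts an action-space velocity given time t and an observation embedding."""
    def __init__(self, action_dim: int, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(action_dim + obs_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, a_t, t, obs):
        return self.net(torch.cat([a_t, t, obs], dim=-1))

def fm_loss(model, actions, obs):
    """Flow matching regression: learn the velocity that carries noise to expert actions."""
    a1 = actions                         # expert actions, shape (B, action_dim)
    a0 = torch.randn_like(a1)            # samples from the prior
    t = torch.rand(a1.shape[0], 1)       # random times in [0, 1]
    a_t = (1.0 - t) * a0 + t * a1        # linear interpolant (geodesic in the Riemannian case)
    target = a1 - a0                     # constant target velocity along the interpolant
    return ((model(a_t, t, obs) - target) ** 2).mean()

@torch.no_grad()
def sample_action(model, obs, steps: int = 10):
    """Fast inference: a few deterministic Euler steps along the learned vector field."""
    a = torch.randn(obs.shape[0], model.net[-1].out_features)
    dt = 1.0 / steps
    for k in range(steps):
        t = torch.full((obs.shape[0], 1), k * dt)
        a = a + dt * model(a, t, obs)
    return a
```
Inference here uses only a handful of deterministic Euler steps, which is why flow matching policies can be sampled considerably faster than diffusion policies that rely on many denoising iterations.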
Related papers
- Simultaneous Multi-Robot Motion Planning with Projected Diffusion Models [57.45019514036948]
Simultaneous MRMP Diffusion (SMD) is a novel approach integrating constrained optimization into the diffusion sampling process to produce kinematically feasible trajectories.
The paper introduces a comprehensive MRMP benchmark to evaluate trajectory planning algorithms across scenarios with varying robot densities, obstacle complexities, and motion constraints.
arXiv Detail & Related papers (2025-02-05T20:51:28Z) - Fast and Robust Visuomotor Riemannian Flow Matching Policy [15.341017260123927]
Diffusion-based visuomotor policies excel at learning complex robotic tasks.
RFMP is a model that inherits the easy training and fast inference capabilities of flow matching.
arXiv Detail & Related papers (2024-12-14T15:03:33Z) - Prognostic Framework for Robotic Manipulators Operating Under Dynamic Task Severities [0.6058427379240697]
We present a prognostic modeling framework that predicts a robotic manipulator's Remaining Useful Life (RUL).
Our findings show that robots in both fleets experience shorter RUL when handling a higher proportion of high-severity tasks.
arXiv Detail & Related papers (2024-11-30T17:09:18Z) - PIVOT-R: Primitive-Driven Waypoint-Aware World Model for Robotic Manipulation [68.17081518640934]
We propose a PrImitive-driVen waypOinT-aware world model for Robotic manipulation (PIVOT-R).
PIVOT-R consists of a Waypoint-aware World Model (WAWM) and a lightweight action prediction module.
Our PIVOT-R outperforms state-of-the-art open-source models on the SeaWave benchmark, achieving an average relative improvement of 19.45% across four levels of instruction tasks.
arXiv Detail & Related papers (2024-10-14T11:30:18Z) - R-AIF: Solving Sparse-Reward Robotic Tasks from Pixels with Active Inference and World Models [50.19174067263255]
We introduce prior preference learning techniques and self-revision schedules to help the agent excel in sparse-reward, continuous action, goal-based robotic control POMDP environments.
We show that our agents offer improved performance over state-of-the-art models in terms of cumulative rewards, relative stability, and success rate.
arXiv Detail & Related papers (2024-09-21T18:32:44Z) - Affordance-based Robot Manipulation with Flow Matching [6.863932324631107]
We present a framework for assistive robot manipulation.
We tackle two challenges: first, efficiently adapting large-scale models to downstream scene affordance understanding tasks, and second, effectively learning robot action trajectories by grounding the visual affordance model.
We learn robot action trajectories guided by affordances in a supervised flow matching method.
arXiv Detail & Related papers (2024-09-02T09:11:28Z) - Robot Fleet Learning via Policy Merging [58.5086287737653]
We propose FLEET-MERGE to efficiently merge policies in the fleet setting.
We show that FLEET-MERGE consolidates the behavior of policies trained on 50 tasks in the Meta-World environment.
We introduce a novel robotic tool-use benchmark, FLEET-TOOLS, for fleet policy learning in compositional and contact-rich robot manipulation tasks.
arXiv Detail & Related papers (2023-10-02T17:23:51Z) - GAN-MPC: Training Model Predictive Controllers with Parameterized Cost Functions using Demonstrations from Non-identical Experts [14.291720751625585]
We propose a generative adversarial network (GAN) to minimize the Jensen-Shannon divergence between the state-trajectory distributions of the demonstrator and the imitator.
We evaluate our approach on a variety of simulated robotics tasks from the DeepMind Control suite.
arXiv Detail & Related papers (2023-05-30T15:15:30Z) - Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z) - RMP2: A Structured Composable Policy Class for Robot Learning [36.35483747142448]
We consider the problem of learning motion policies for acceleration-based robotics systems with a structured policy class specified by RMPflow.
RMPflow is a multi-task control framework that has been successfully applied in many robotics problems.
We re-examine the message passing algorithm of RMPflow and propose a more efficient algorithm, called RMP2, to compute RMPflow policies.
arXiv Detail & Related papers (2021-03-10T08:28:38Z) - FedDANE: A Federated Newton-Type Method [49.9423212899788]
Federated learning aims to jointly learn statistical models over massively distributed datasets.
We propose FedDANE, an optimization method adapted from DANE, to handle federated learning.
arXiv Detail & Related papers (2020-01-07T07:44:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.