Grasp and Motion Planning for Dexterous Manipulation for the Real Robot Challenge
- URL: http://arxiv.org/abs/2101.02842v1
- Date: Fri, 8 Jan 2021 04:13:39 GMT
- Title: Grasp and Motion Planning for Dexterous Manipulation for the Real Robot Challenge
- Authors: Takuma Yoneda, Charles Schaff, Takahiro Maeda, Matthew Walter
- Abstract summary: The Real Robot Challenge is a three-phase dexterous manipulation competition.
Our approach combines motion planning with several motion primitives to manipulate the object.
We were anonymously known as 'ardentstork' on the competition leaderboard.
- Score: 0.05735035463793007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This report describes our winning submission to the Real Robot Challenge
(https://real-robot-challenge.com/). The Real Robot Challenge is a three-phase
dexterous manipulation competition that involves manipulating various
rectangular objects with the TriFinger Platform. Our approach combines motion
planning with several motion primitives to manipulate the object. For Phases 1
and 2, we additionally learn a residual policy in simulation that applies
corrective actions on top of our controller. Our approach won first place in
Phase 2 and Phase 3 of the competition. We were anonymously known as
'ardentstork' on the competition leaderboard
(https://real-robot-challenge.com/leader-board). Videos and our code can be
found at https://github.com/ripl-ttic/real-robot-challenge.
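The control structure described in the abstract, a hand-designed base controller built from motion planning and motion primitives, with a learned residual policy applying corrective actions on top, can be illustrated with a short sketch. This is a minimal illustration under assumed interfaces, not the authors' implementation; the class, method, and parameter names (ResidualController, act, scale) are hypothetical.

```python
import numpy as np

class ResidualController:
    """Sketch of a residual-policy control loop: a base controller
    (e.g., motion planning plus motion primitives) proposes an action,
    and a residual policy learned in simulation adds a small corrective
    term on top of it. All names here are illustrative assumptions,
    not taken from the authors' released code."""

    def __init__(self, base_controller, residual_policy, scale=0.1):
        self.base = base_controller      # planner + motion primitives
        self.residual = residual_policy  # learned corrective policy
        self.scale = scale               # bounds the size of corrections

    def act(self, observation):
        base_action = self.base.act(observation)
        # The residual policy conditions on both the observation and the
        # base action, and outputs a correction that is clipped and scaled
        # so it can refine, but not override, the base controller.
        correction = self.residual.act(observation, base_action)
        return base_action + self.scale * np.clip(correction, -1.0, 1.0)
```

Clipping and scaling the correction is one common way to keep the learned component from overriding the base controller; the 0.1 scale here is an arbitrary placeholder rather than a value from the paper.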
Related papers
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- Learning Visual Quadrupedal Loco-Manipulation from Demonstrations [36.1894630015056]
We aim to empower a quadruped robot to execute real-world manipulation tasks using only its legs.
We decompose the loco-manipulation process into a low-level reinforcement learning (RL)-based controller and a high-level Behavior Cloning (BC)-based planner.
Our approach is validated through simulations and real-world experiments, demonstrating the robot's ability to perform tasks that demand mobility and high precision.
arXiv Detail & Related papers (2024-03-29T17:59:05Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Learning an Actionable Discrete Diffusion Policy via Large-Scale Actionless Video Pre-Training [69.54948297520612]
Learning a generalist embodied agent poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets.
We introduce a novel framework to tackle these challenges, which leverages a unified discrete diffusion to combine generative pre-training on human videos and policy fine-tuning on a small number of action-labeled robot videos.
Our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches.
arXiv Detail & Related papers (2024-02-22T09:48:47Z)
- Pedipulate: Enabling Manipulation Skills using a Quadruped Robot's Leg [11.129918951736052]
Legged robots have the potential to become vital in maintenance, home support, and exploration scenarios.
In this work, we explore pedipulation - using the legs of a legged robot for manipulation.
arXiv Detail & Related papers (2024-02-16T17:20:45Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Learning Video-Conditioned Policies for Unseen Manipulation Tasks [83.2240629060453]
Video-conditioned policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We train our policy to generate appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z)
- GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots [87.32145104894754]
We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
arXiv Detail & Related papers (2022-09-12T15:14:32Z)
- Real Robot Challenge using Deep Reinforcement Learning [6.332038240397164]
This paper details our winning submission to Phase 1 of the 2021 Real Robot Challenge.
In the challenge, a three-fingered robot must carry a cube along specified goal trajectories.
We use a pure reinforcement learning approach which requires minimal expert knowledge of the robotic system.
arXiv Detail & Related papers (2021-09-30T16:12:17Z)
- Know Thyself: Transferable Visuomotor Control Through Robot-Awareness [22.405839096833937]
Training visuomotor robot controllers from scratch on a new robot typically requires generating large amounts of robot-specific data.
We propose a "robot-aware" solution paradigm that exploits readily available robot "self-knowledge"
Our experiments on tabletop manipulation tasks in simulation and on real robots demonstrate that these plug-in improvements dramatically boost the transferability of visuomotor controllers.
arXiv Detail & Related papers (2021-07-19T17:56:04Z)
- Learning Locomotion Skills in Evolvable Robots [10.167123492952694]
We introduce a controller architecture and a generic learning method to allow a modular robot with an arbitrary shape to learn to walk towards a target and follow this target if it moves.
Our approach is validated on three robots, a spider, a gecko, and their offspring, in three real-world scenarios.
arXiv Detail & Related papers (2020-10-19T14:01:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.