Pedipulate: Enabling Manipulation Skills using a Quadruped Robot's Leg
- URL: http://arxiv.org/abs/2402.10837v1
- Date: Fri, 16 Feb 2024 17:20:45 GMT
- Title: Pedipulate: Enabling Manipulation Skills using a Quadruped Robot's Leg
- Authors: Philip Arm, Mayank Mittal, Hendrik Kolvenbach, Marco Hutter
- Abstract summary: Legged robots have the potential to become vital in maintenance, home support, and exploration scenarios.
In this work, we explore pedipulation - using the legs of a legged robot for manipulation.
- Score: 11.129918951736052
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Legged robots have the potential to become vital in maintenance, home
support, and exploration scenarios. In order to interact with and manipulate
their environments, most legged robots are equipped with a dedicated robot arm,
which means additional mass and mechanical complexity compared to standard
legged robots. In this work, we explore pedipulation - using the legs of a
legged robot for manipulation. By training a reinforcement learning policy that
tracks position targets for one foot, we enable a dedicated pedipulation
controller that is robust to disturbances, has a large workspace through
whole-body behaviors, and can reach far-away targets with gait emergence,
enabling loco-pedipulation. By deploying our controller on a quadrupedal robot
using teleoperation, we demonstrate various real-world tasks such as door
opening, sample collection, and pushing obstacles. We demonstrate load carrying
of more than 2.0 kg at the foot. Additionally, the controller is robust to
interaction forces at the foot, disturbances at the base, and slippery contact
surfaces. Videos of the experiments are available at
https://sites.google.com/leggedrobotics.com/pedipulate.
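The abstract describes training a reinforcement learning policy that tracks position targets for one foot. The paper's exact reward terms are not given here, so the following is only a minimal, hypothetical sketch of the kind of dense foot-position-tracking reward commonly used for such policies; the function name, shaping, and `sigma` parameter are assumptions, not the authors' formulation.

```python
import numpy as np

def foot_tracking_reward(foot_pos, target_pos, sigma=0.5):
    """Hypothetical dense tracking reward for pedipulation training.

    Returns exp(-||foot - target||^2 / sigma^2): 1.0 when the
    pedipulating foot is exactly at the commanded target, decaying
    smoothly with distance. This shaping is an illustrative
    assumption, not the reward from the paper.
    """
    err = np.linalg.norm(np.asarray(foot_pos, dtype=float)
                         - np.asarray(target_pos, dtype=float))
    return float(np.exp(-(err ** 2) / sigma ** 2))
```

A smooth, bounded reward like this gives the policy a gradient toward far-away targets, which is consistent with the whole-body reaching and gait emergence the abstract reports.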
Related papers
- Built Different: Tactile Perception to Overcome Cross-Embodiment Capability Differences in Collaborative Manipulation [1.9048510647598207]
Tactile sensing is a powerful means of implicit communication between a human and a robot assistant.
In this paper, we investigate how tactile sensing can transcend cross-embodiment differences across robotic systems.
We show how our method can enable a cooperative task where a robot and human must work together to maneuver objects through space.
arXiv Detail & Related papers (2024-09-23T10:45:41Z)
- Unifying 3D Representation and Control of Diverse Robots with a Single Camera [48.279199537720714]
We introduce Neural Jacobian Fields, an architecture that autonomously learns to model and control robots from vision alone.
Our approach achieves accurate closed-loop control and recovers the causal dynamic structure of each robot.
arXiv Detail & Related papers (2024-07-11T17:55:49Z)
- Learning Visual Quadrupedal Loco-Manipulation from Demonstrations [36.1894630015056]
We aim to empower a quadruped robot to execute real-world manipulation tasks using only its legs.
We decompose the loco-manipulation process into a low-level reinforcement learning (RL)-based controller and a high-level Behavior Cloning (BC)-based planner.
Our approach is validated through simulations and real-world experiments, demonstrating the robot's ability to perform tasks that demand mobility and high precision.
arXiv Detail & Related papers (2024-03-29T17:59:05Z)
- Seeing-Eye Quadruped Navigation with Force Responsive Locomotion Control [2.832383052276894]
Seeing-eye robots are useful tools for guiding visually impaired people, potentially producing a huge societal impact.
However, prior approaches did not consider external tugs from humans, which frequently occur in a real guide-dog setting.
We demonstrate our full seeing-eye robot system on a real quadruped robot with a blindfolded human.
arXiv Detail & Related papers (2023-09-08T15:02:46Z)
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies.
arXiv Detail & Related papers (2023-07-12T07:04:53Z)
- Barkour: Benchmarking Animal-level Agility with Quadruped Robots [70.97471756305463]
We introduce the Barkour benchmark, an obstacle course to quantify agility for legged robots.
Inspired by dog agility competitions, it consists of diverse obstacles and a time-based scoring mechanism.
We present two methods for tackling the benchmark.
arXiv Detail & Related papers (2023-05-24T02:49:43Z)
- GenLoco: Generalized Locomotion Controllers for Quadrupedal Robots [87.32145104894754]
We introduce a framework for training generalized locomotion (GenLoco) controllers for quadrupedal robots.
Our framework synthesizes general-purpose locomotion controllers that can be deployed on a large variety of quadrupedal robots.
We show that our models acquire more general control strategies that can be directly transferred to novel simulated and real-world robots.
arXiv Detail & Related papers (2022-09-12T15:14:32Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- A Transferable Legged Mobile Manipulation Framework Based on Disturbance Predictive Control [15.044159090957292]
Legged mobile manipulation, where a quadruped robot is equipped with a robotic arm, can greatly enhance the performance of the robot.
We propose a unified disturbance predictive control framework, in which a reinforcement learning scheme with a latent dynamic adapter is embedded into our proposed low-level controller.
arXiv Detail & Related papers (2022-03-02T14:54:10Z)
- Know Thyself: Transferable Visuomotor Control Through Robot-Awareness [22.405839096833937]
Training visuomotor robot controllers from scratch on a new robot typically requires generating large amounts of robot-specific data.
We propose a "robot-aware" solution paradigm that exploits readily available robot "self-knowledge"
Our experiments on tabletop manipulation tasks in simulation and on real robots demonstrate that these plug-in improvements dramatically boost the transferability of visuomotor controllers.
arXiv Detail & Related papers (2021-07-19T17:56:04Z)
- OpenBot: Turning Smartphones into Robots [95.94432031144716]
Current robots are either expensive or make significant compromises on sensory richness, computational power, and communication capabilities.
We propose to leverage smartphones to equip robots with extensive sensor suites, powerful computational abilities, state-of-the-art communication channels, and access to a thriving software ecosystem.
We design a small electric vehicle that costs $50 and serves as a robot body for standard Android smartphones.
arXiv Detail & Related papers (2020-08-24T18:04:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.