Go Fetch: Mobile Manipulation in Unstructured Environments
- URL: http://arxiv.org/abs/2004.00899v1
- Date: Thu, 2 Apr 2020 09:33:59 GMT
- Title: Go Fetch: Mobile Manipulation in Unstructured Environments
- Authors: Kenneth Blomqvist, Michel Breyer, Andrei Cramariuc, Julian Förster,
Margarita Grinvald, Florian Tschopp, Jen Jen Chung, Lionel Ott, Juan Nieto,
Roland Siegwart
- Abstract summary: This work presents a mobile manipulation system that combines perception, localization, navigation, motion planning and grasping skills into one common workflow for fetch and carry applications in unstructured indoor environments.
The tight integration across the various modules is experimentally demonstrated on the task of finding a commonly available object in an office environment, grasping it, and delivering it to a desired drop-off location.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With humankind facing new and increasingly large-scale challenges in the
medical and domestic spheres, automation of the service sector carries a
tremendous potential for improved efficiency, quality, and safety of
operations. Mobile robotics can offer solutions with a high degree of mobility
and dexterity; however, these complex systems require a multitude of
heterogeneous components to be carefully integrated into one consistent
framework. This work presents a mobile manipulation system that combines
perception, localization, navigation, motion planning and grasping skills into
one common workflow for fetch and carry applications in unstructured indoor
environments. The tight integration across the various modules is
experimentally demonstrated on the task of finding a commonly available object
in an office environment, grasping it, and delivering it to a desired drop-off
location. The accompanying video is available at https://youtu.be/e89_Xg1sLnY.
Related papers
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a diverse variety of environments and overcoming a wide range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z)
- SPIN: Simultaneous Perception, Interaction and Navigation [33.408010508592824]
We present a reactive mobile manipulation framework that uses an active visual system to consciously perceive and react to its environment.
Similar to how humans leverage whole-body and hand-eye coordination, we develop a mobile manipulator that exploits its ability to move and see.
arXiv Detail & Related papers (2024-05-13T17:59:36Z)
- Learning Robust Autonomous Navigation and Locomotion for Wheeled-Legged Robots [50.02055068660255]
Navigating urban environments poses unique challenges for robots, necessitating innovative solutions for locomotion and navigation.
This work introduces a fully integrated system comprising adaptive locomotion control, mobility-aware local navigation planning, and large-scale path planning within the city.
Using model-free reinforcement learning (RL) techniques and privileged learning, we develop a versatile locomotion controller.
Our controllers are integrated into a large-scale urban navigation system and validated by autonomous, kilometer-scale navigation missions conducted in Zurich, Switzerland, and Seville, Spain.
arXiv Detail & Related papers (2024-05-03T00:29:20Z)
- LEGENT: Open Platform for Embodied Agents [60.71847900126832]
We introduce LEGENT, an open, scalable platform for developing embodied agents using Large Language Models (LLMs) and Large Multimodal Models (LMMs)
LEGENT offers a rich, interactive 3D environment with communicable and actionable agents, paired with a user-friendly interface.
In experiments, an embryonic vision-language-action model trained on LEGENT-generated data surpasses GPT-4V in embodied tasks.
arXiv Detail & Related papers (2024-04-28T16:50:12Z)
- FluidLab: A Differentiable Environment for Benchmarking Complex Fluid Manipulation [80.63838153351804]
We introduce FluidLab, a simulation environment with a diverse set of manipulation tasks involving complex fluid dynamics.
At the heart of our platform is a fully differentiable physics simulator, providing GPU-accelerated simulations and gradient calculations.
We propose several domain-specific optimization schemes coupled with differentiable physics.
arXiv Detail & Related papers (2023-03-04T07:24:22Z)
- Orbit: A Unified Simulation Framework for Interactive Robot Learning Environments [38.23943905182543]
We present Orbit, a unified and modular framework for robot learning powered by NVIDIA Isaac Sim.
It offers a modular design to create robotic environments with photo-realistic scenes and high-fidelity rigid and deformable body simulation.
We aim to support various research areas, including representation learning, reinforcement learning, imitation learning, and task and motion planning.
arXiv Detail & Related papers (2023-01-10T20:19:17Z)
- Physical Interaction and Manipulation of the Environment using Aerial Robots [1.370633147306388]
The physical interaction of aerial robots with their environment has countless potential applications and is an emerging area with many open challenges.
Fully-actuated multirotors have been introduced to tackle some of these challenges.
They provide complete control over position and orientation and eliminate the need for attaching a multi-DoF manipulation arm to the robot.
However, there are many open problems before they can be used in real-world applications.
arXiv Detail & Related papers (2022-07-06T13:15:10Z)
- N$^2$M$^2$: Learning Navigation for Arbitrary Mobile Manipulation Motions in Unseen and Dynamic Environments [9.079709086741987]
We introduce Neural Navigation for Mobile Manipulation (N$^2$M$^2$), which extends this decomposition to complex obstacle environments.
The resulting approach can perform unseen, long-horizon tasks in unexplored environments while instantly reacting to dynamic obstacles and environmental changes.
We demonstrate the capabilities of our proposed approach in extensive simulation and real-world experiments on multiple kinematically diverse mobile manipulators.
arXiv Detail & Related papers (2022-06-17T12:52:41Z)
- Simultaneous Navigation and Construction Benchmarking Environments [73.0706832393065]
We need intelligent robots for mobile construction, the process of navigating in an environment and modifying its structure according to a geometric design.
In this task, a major robot vision and learning challenge is how to accurately realize the design without GPS.
We benchmark the performance of a handcrafted policy with basic localization and planning, and state-of-the-art deep reinforcement learning methods.
arXiv Detail & Related papers (2021-03-31T00:05:54Z)
- Articulated Object Interaction in Unknown Scenes with Whole-Body Mobile Manipulation [16.79185733369416]
We propose a two-stage architecture for autonomous interaction with large articulated objects in unknown environments.
The first stage uses a learned model to estimate the articulated model of a target object from an RGB-D input and predicts an action-conditional sequence of states for interaction.
The second stage comprises a whole-body motion controller that manipulates the object along the generated kinematic plan.
arXiv Detail & Related papers (2021-03-18T21:32:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.