RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for
Robotic Manipulation
- URL: http://arxiv.org/abs/2402.15487v1
- Date: Fri, 23 Feb 2024 18:27:17 GMT
- Title: RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for
Robotic Manipulation
- Authors: Hanxiao Jiang, Binghao Huang, Ruihai Wu, Zhuoran Li, Shubham Garg,
Hooshang Nayyeri, Shenlong Wang, Yunzhu Li
- Abstract summary: We introduce the novel task of interactive scene exploration, wherein robots autonomously explore environments and produce an action-conditioned scene graph (ACSG).
The ACSG accounts for both low-level information, such as geometry and semantics, and high-level information, such as the action-conditioned relationships between different entities in the scene.
We apply our system across various real-world settings in a zero-shot manner, demonstrating its effectiveness in exploring and modeling environments it has never seen before.
- Score: 22.30830950219317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robots need to explore their surroundings to adapt to and tackle tasks in
unknown environments. Prior work has proposed building scene graphs of the
environment but typically assumes that the environment is static, omitting
regions that require active interactions. This severely limits their ability to
handle more complex tasks in household and office environments: before setting
up a table, robots must explore drawers and cabinets to locate all utensils and
condiments. In this work, we introduce the novel task of interactive scene
exploration, wherein robots autonomously explore environments and produce an
action-conditioned scene graph (ACSG) that captures the structure of the
underlying environment. The ACSG accounts for both low-level information, such
as geometry and semantics, and high-level information, such as the
action-conditioned relationships between different entities in the scene. To
this end, we present the Robotic Exploration (RoboEXP) system, which
incorporates the Large Multimodal Model (LMM) and an explicit memory design to
enhance our system's capabilities. The robot reasons about what and how to
explore an object, accumulating new information through the interaction process
and incrementally constructing the ACSG. We apply our system across various
real-world settings in a zero-shot manner, demonstrating its effectiveness in
exploring and modeling environments it has never seen before. Leveraging the
constructed ACSG, we illustrate the effectiveness and efficiency of our RoboEXP
system in facilitating a wide range of real-world manipulation tasks involving
rigid, articulated objects, nested objects like Matryoshka dolls, and
deformable objects like cloth.
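
To make the ACSG idea concrete, below is a minimal, hypothetical Python sketch (not the authors' implementation) of how such a graph could be represented: object nodes carry low-level geometry and semantics, directed edges record which action on a parent entity exposes a child entity, and the graph grows incrementally as the robot interacts. All class and method names here are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ObjectNode:
    name: str                          # semantic label, e.g. "drawer"
    geometry: Optional[object] = None  # e.g. a point cloud or mesh handle
    attributes: Dict[str, str] = field(default_factory=dict)

@dataclass
class ActionEdge:
    action: str   # action that conditions the relation, e.g. "open"
    parent: str   # entity the action is applied to
    child: str    # entity revealed or affected by the action

class ACSG:
    """Hypothetical action-conditioned scene graph (illustrative only)."""

    def __init__(self) -> None:
        self.nodes: Dict[str, ObjectNode] = {}
        self.edges: List[ActionEdge] = []

    def add_node(self, node: ObjectNode) -> None:
        self.nodes[node.name] = node

    def add_action_edge(self, action: str, parent: str, child: str) -> None:
        # Incrementally record what an interaction revealed, mirroring the
        # "reason, interact, accumulate" loop described in the abstract.
        self.edges.append(ActionEdge(action, parent, child))

    def plan_prerequisites(self, target: str) -> List[str]:
        # Return the chain of actions needed to expose `target`,
        # e.g. ["open cabinet", "open drawer"] for a nested utensil.
        steps: List[str] = []
        current = target
        while True:
            edge = next((e for e in self.edges if e.child == current), None)
            if edge is None:
                break
            steps.append(f"{edge.action} {edge.parent}")
            current = edge.parent
        return list(reversed(steps))

# Usage: a drawer inside a cabinet hides a spoon; querying the spoon
# yields the sequence of opening actions required to reach it.
g = ACSG()
g.add_node(ObjectNode("cabinet"))
g.add_node(ObjectNode("drawer"))
g.add_node(ObjectNode("spoon"))
g.add_action_edge("open", "cabinet", "drawer")
g.add_action_edge("open", "drawer", "spoon")
print(g.plan_prerequisites("spoon"))  # ['open cabinet', 'open drawer']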
Related papers
- CuriousBot: Interactive Mobile Exploration via Actionable 3D Relational Object Graph [12.54884302440877]
Mobile exploration is a longstanding challenge in robotics.
Existing robotic exploration approaches via active interaction are often restricted to tabletop scenes.
We introduce a 3D relational object graph that encodes diverse object relations and enables exploration through active interaction.
arXiv Detail & Related papers (2025-01-23T02:39:04Z)
- One to rule them all: natural language to bind communication, perception and action [0.9302364070735682]
This paper presents an advanced architecture for robotic action planning that integrates communication, perception, and planning with Large Language Models (LLMs).
The Planner Module is the core of the system where LLMs embedded in a modified ReAct framework are employed to interpret and carry out user commands.
The modified ReAct framework further enhances the execution space by providing real-time environmental perception and the outcomes of physical actions.
arXiv Detail & Related papers (2024-11-22T16:05:54Z)
- Time is on my sight: scene graph filtering for dynamic environment perception in an LLM-driven robot [0.8515309662618664]
This paper presents a robot control architecture that addresses key challenges in human-robot interaction.
The architecture uses Large Language Models to integrate diverse information sources, including natural language commands.
The architecture enhances adaptability, task efficiency, and human-robot collaboration in dynamic environments.
arXiv Detail & Related papers (2024-11-22T15:58:26Z)
- Flex: End-to-End Text-Instructed Visual Navigation from Foundation Model Features [59.892436892964376]
We investigate the minimal data requirements and architectural adaptations necessary to achieve robust closed-loop performance with vision-based control policies.
Our findings are synthesized in Flex (Fly lexically), a framework that uses pre-trained Vision Language Models (VLMs) as frozen patch-wise feature extractors.
We demonstrate the effectiveness of this approach on a quadrotor fly-to-target task, where agents trained via behavior cloning successfully generalize to real-world scenes.
arXiv Detail & Related papers (2024-10-16T19:59:31Z)
- Embodied-RAG: General Non-parametric Embodied Memory for Retrieval and Generation [65.23793829741014]
Embodied-RAG is a framework that enhances the model of an embodied agent with a non-parametric memory system.
At its core, Embodied-RAG's memory is structured as a semantic forest, storing language descriptions at varying levels of detail.
We demonstrate that Embodied-RAG effectively bridges RAG to the robotics domain, successfully handling over 200 explanation and navigation queries.
arXiv Detail & Related papers (2024-09-26T21:44:11Z)
- DISCO: Embodied Navigation and Interaction via Differentiable Scene Semantics and Dual-level Control [53.80518003412016]
Building a general-purpose intelligent home-assistant agent skilled in diverse tasks by human commands is a long-term blueprint of embodied AI research.
We study primitive mobile manipulations for embodied agents, i.e. how to navigate and interact based on an instructed verb-noun pair.
We propose DISCO, which features non-trivial advancements in contextualized scene modeling and efficient controls.
arXiv Detail & Related papers (2024-07-20T05:39:28Z)
- ROS-LLM: A ROS framework for embodied AI with task feedback and structured reasoning [74.58666091522198]
We present a framework for intuitive robot programming by non-experts.
We leverage natural language prompts and contextual information from the Robot Operating System (ROS).
Our system integrates large language models (LLMs), enabling non-experts to articulate task requirements to the system through a chat interface.
arXiv Detail & Related papers (2024-06-28T08:28:38Z)
- Cognitive Planning for Object Goal Navigation using Generative AI Models [0.979851640406258]
We present a novel framework for solving the object goal navigation problem that generates efficient exploration strategies.
Our approach enables a robot to navigate unfamiliar environments by leveraging Large Language Models (LLMs) and Large Vision-Language Models (LVLMs).
arXiv Detail & Related papers (2024-03-30T10:54:59Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RoboScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks described in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z)
- WALL-E: Embodied Robotic WAiter Load Lifting with Large Language Model [92.90127398282209]
This paper investigates the potential of integrating the most recent Large Language Models (LLMs) with existing visual grounding and robotic grasping systems.
We introduce WALL-E (Embodied Robotic WAiter load lifting with Large Language model) as an example of this integration.
We deploy this LLM-empowered system on the physical robot to provide a more user-friendly interface for the instruction-guided grasping task.
arXiv Detail & Related papers (2023-08-30T11:35:21Z)
- Learning Hierarchical Interactive Multi-Object Search for Mobile Manipulation [10.21450780640562]
We introduce a novel interactive multi-object search task in which a robot has to open doors to navigate rooms and search inside cabinets and drawers to find target objects.
These new challenges require combining manipulation and navigation skills in unexplored environments.
We present HIMOS, a hierarchical reinforcement learning approach that learns to compose exploration, navigation, and manipulation skills.
arXiv Detail & Related papers (2023-07-12T12:25:33Z)