Reasoning with Scene Graphs for Robot Planning under Partial
Observability
- URL: http://arxiv.org/abs/2202.10432v1
- Date: Mon, 21 Feb 2022 18:45:56 GMT
- Title: Reasoning with Scene Graphs for Robot Planning under Partial
Observability
- Authors: Saeid Amiri, Kishan Chandan, Shiqi Zhang
- Abstract summary: We develop an algorithm called scene analysis for robot planning (SARP) that enables robots to reason with visual contextual information.
Experiments have been conducted using multiple 3D environments in simulation, and a dataset collected by a real robot.
- Score: 7.121002367542985
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robot planning in partially observable domains is difficult, because a robot
needs to estimate the current state and plan actions at the same time. When the
domain includes many objects, reasoning about the objects and their
relationships makes robot planning even more difficult. In this paper, we
develop an algorithm called scene analysis for robot planning (SARP) that
enables robots to reason with visual contextual information toward achieving
long-term goals under uncertainty. SARP constructs scene graphs, a factored
representation of objects and their relations, using images captured from
different positions, and reasons with them to enable context-aware robot
planning under partial observability. Experiments have been conducted using
multiple 3D environments in simulation, and a dataset collected by a real
robot. In a target search domain, SARP improves both efficiency and accuracy in
task completion compared to standard robot planning and scene analysis methods.
Supplementary material can be found at https://tinyurl.com/sarp22
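The listing includes no code, but a minimal sketch may make the factored representation concrete. The Python below shows a scene graph fused from detections across multiple viewpoints plus a simple relational query; all names (SceneGraph, add_observation, related) are hypothetical, not from the authors' implementation, and SARP itself feeds such context into planning under partial observability rather than a direct query.

    # Illustrative sketch only: a factored scene-graph representation fused
    # across viewpoints, loosely in the spirit of SARP. All names are
    # hypothetical; the real system couples this with a planner under
    # partial observability.
    from collections import defaultdict

    class SceneGraph:
        def __init__(self):
            self.objects = {}                  # object_id -> (label, confidence)
            self.relations = defaultdict(set)  # subject_id -> {(relation, object_id)}

        def add_observation(self, detections, relations):
            # Fuse one image's detections, keeping the highest-confidence
            # label seen so far for each tracked object.
            for obj_id, label, conf in detections:
                if obj_id not in self.objects or conf > self.objects[obj_id][1]:
                    self.objects[obj_id] = (label, conf)
            for subj, rel, obj in relations:
                self.relations[subj].add((rel, obj))

        def related(self, relation, target_label):
            # Labels of objects standing in `relation` to an object with
            # `target_label`, e.g. everything "on" the table.
            return [self.objects[subj][0]
                    for subj, rels in self.relations.items()
                    for rel, obj in rels
                    if rel == relation and self.objects[obj][0] == target_label]

    # Two viewpoints observe the same table; the graph accumulates context.
    g = SceneGraph()
    g.add_observation([(1, "table", 0.90), (2, "mug", 0.70)], [(2, "on", 1)])
    g.add_observation([(1, "table", 0.95), (3, "laptop", 0.80)], [(3, "on", 1)])
    print(g.related("on", "table"))  # ['mug', 'laptop']

In SARP proper, this kind of relational context informs belief updates during planning, which the sketch omits.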
Related papers
- Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction [52.12746368727368]
Differentiable simulation has become a powerful tool for system identification.
Our approach calibrates object properties by using information from the robot, without relying on data from the object itself.
We demonstrate the effectiveness of our method on a low-cost robotic platform.
arXiv Detail & Related papers (2024-10-04T20:48:38Z)
- Planning Robot Placement for Object Grasping [5.327052729563043]
When performing manipulation-based activities such as picking objects, a mobile robot needs to position its base at a location that supports successful execution.
To address this problem, prominent approaches typically rely on costly grasp planners to provide grasp poses for a target object.
We propose instead to first find robot placements that would not result in collision with the environment, then evaluate them to find the best placement candidate.
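The summary describes a sample-then-rank pipeline; a minimal Python sketch of that idea follows. The in_collision and score callables are stand-ins for a collision checker and a placement evaluator, and none of the names come from the paper.

    # Hypothetical sketch of sample -> filter -> rank base placement selection.
    import math, random

    def candidate_placements(target_xy, n=100, r_min=0.4, r_max=1.0):
        # Sample base poses on an annulus around the target, facing it.
        tx, ty = target_xy
        for _ in range(n):
            r = random.uniform(r_min, r_max)
            theta = random.uniform(0, 2 * math.pi)
            x, y = tx + r * math.cos(theta), ty + r * math.sin(theta)
            yaw = math.atan2(ty - y, tx - x)  # orient the base toward the target
            yield (x, y, yaw)

    def best_placement(target_xy, in_collision, score):
        # Keep collision-free candidates, return the highest-scoring one.
        free = [p for p in candidate_placements(target_xy) if not in_collision(p)]
        return max(free, key=score) if free else None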
arXiv Detail & Related papers (2024-05-26T20:57:32Z)
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
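As a rough illustration of the two-stage idea (predicted point tracks give a coarse motion, a learned residual corrects it), here is a hypothetical sketch; track_predictor and residual_policy are assumed callables, and a full pipeline would recover a rigid transform rather than a mean displacement.

    # Hypothetical sketch: coarse action from predicted point tracks,
    # corrected by a learned residual policy. Not the authors' code.
    import numpy as np

    def coarse_action(points_now, points_next):
        # Mean 2D displacement implied by the predicted tracks; a stand-in
        # for the rigid-transform estimation a full pipeline would use.
        return (points_next - points_now).mean(axis=0)

    def act(obs, goal, track_predictor, residual_policy):
        tracks = track_predictor(obs, goal)       # (T, N, 2) future point tracks
        base = coarse_action(tracks[0], tracks[1])
        return base + residual_policy(obs, base)  # residual refines the plan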
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
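A generic observe-plan-act loop conveys the flavor of such interactive planning; the sketch below is hypothetical (llm, robot.observe, and robot.execute are stand-ins), not the paper's prompts or interface.

    # Hypothetical observe-plan-act loop; `llm` is any text-completion callable.
    def interactive_plan(llm, robot, task, max_steps=20):
        history = []
        for _ in range(max_steps):
            obs = robot.observe()                    # partial observation
            prompt = (f"Task: {task}\nHistory: {history}\n"
                      f"Observation: {obs}\nNext action (or DONE):")
            action = llm(prompt).strip()
            if action == "DONE":
                break
            result = robot.execute(action)           # feed the outcome back in
            history.append((action, result))
        return history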
arXiv Detail & Related papers (2023-12-11T22:54:44Z)
- SG-Bot: Object Rearrangement via Coarse-to-Fine Robotic Imagination on Scene Graphs [81.15889805560333]
We present SG-Bot, a novel rearrangement framework.
SG-Bot is lightweight, real-time, and user-controllable.
Experimental results demonstrate that SG-Bot outperforms competitors by a large margin.
arXiv Detail & Related papers (2023-09-21T15:54:33Z)
- Generalized Object Search [0.9137554315375919]
This thesis develops methods and systems for (multi-)object search in 3D environments under uncertainty.
I implement a robot-independent, environment-agnostic system for generalized object search in 3D.
I deploy it on the Boston Dynamics Spot robot, the Kinova MOVO robot, and the Universal Robots UR5e robotic arm.
arXiv Detail & Related papers (2023-01-24T16:41:36Z)
- Can Foundation Models Perform Zero-Shot Task Specification For Robot Manipulation? [54.442692221567796]
Task specification is critical for engagement of non-expert end-users and adoption of personalized robots.
A widely studied approach to task specification is through goals, using either compact state vectors or goal images from the same robot scene.
In this work, we explore alternate and more general forms of goal specification that are expected to be easier for humans to specify and use.
arXiv Detail & Related papers (2022-04-23T19:39:49Z)
- Single-view robot pose and joint angle estimation via render & compare [40.05546237998603]
We introduce RoboPose, a method to estimate the joint angles and the 6D camera-to-robot pose of a known articulated robot from a single RGB image.
This is an important problem to grant mobile and itinerant autonomous systems the ability to interact with other robots.
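Render-and-compare methods iterate between rendering the articulated model at the current estimate and predicting a correction; a schematic sketch follows, with render and refiner as stand-ins for a robot-model renderer and a learned update network, and '+' abbreviating proper SE(3) composition.

    # Schematic render-and-compare loop; `render` and `refiner` are stand-ins
    # for a robot-model renderer and a learned update network.
    def estimate_robot_state(image, render, refiner, pose, angles, iters=10):
        for _ in range(iters):
            rendered = render(pose, angles)              # robot model at estimate
            d_pose, d_angles = refiner(image, rendered)  # predicted corrections
            pose = pose + d_pose        # '+' abbreviates composing SE(3) updates
            angles = angles + d_angles
        return pose, angles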
arXiv Detail & Related papers (2021-04-19T14:48:29Z)
- Projection Mapping Implementation: Enabling Direct Externalization of Perception Results and Action Intent to Improve Robot Explainability [62.03014078810652]
Existing research on non-verbal cues, e.g., eye gaze or arm movement, may not accurately present a robot's internal states.
Projecting the states directly onto a robot's operating environment has the advantages of being direct, accurate, and more salient.
arXiv Detail & Related papers (2020-10-05T18:16:20Z)