Predicting Motion Plans for Articulating Everyday Objects
- URL: http://arxiv.org/abs/2303.01484v1
- Date: Thu, 2 Mar 2023 18:45:02 GMT
- Title: Predicting Motion Plans for Articulating Everyday Objects
- Authors: Arjun Gupta, Max E. Shepherd, Saurabh Gupta
- Abstract summary: We develop a simulator, ArtObjSim, that simulates articulated objects placed in real scenes.
We then introduce SeqIK+$\theta_0$, a fast and flexible representation for motion plans.
We learn models that use SeqIK+$\theta_0$ to quickly predict motion plans for articulating novel objects at test time.
- Score: 16.0496453009462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile manipulation tasks such as opening a door, pulling open a drawer, or
lifting a toilet lid require constrained motion of the end-effector under
environmental and task constraints. This, coupled with partial information in
novel environments, makes it challenging to employ classical motion planning
approaches at test time. Our key insight is to cast it as a learning problem to
leverage past experience of solving similar planning problems to directly
predict motion plans for mobile manipulation tasks in novel situations at test
time. To enable this, we develop a simulator, ArtObjSim, that simulates
articulated objects placed in real scenes. We then introduce SeqIK+$\theta_0$,
a fast and flexible representation for motion plans. Finally, we learn models
that use SeqIK+$\theta_0$ to quickly predict motion plans for articulating
novel objects at test time. Experimental evaluation shows improved speed and
accuracy at generating motion plans compared to pure search-based and pure
learning methods.
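The abstract does not spell out the representation, but a minimal reading of SeqIK+$\theta_0$ is: the model predicts an initial joint configuration $\theta_0$ plus a sequence of end-effector targets, and the joint-space plan is recovered by inverse kinematics solved sequentially, warm-started from the previous solution. The sketch below illustrates that reading on a hypothetical 2-link planar arm; the damped-least-squares IK step and all dimensions are stand-ins, not the paper's actual method.

```python
import numpy as np

L1, L2 = 0.5, 0.4  # hypothetical link lengths

def fk(theta):
    """End-effector (x, y) of a toy 2-link planar arm."""
    return np.array([
        L1 * np.cos(theta[0]) + L2 * np.cos(theta[0] + theta[1]),
        L1 * np.sin(theta[0]) + L2 * np.sin(theta[0] + theta[1]),
    ])

def jacobian(theta):
    s1, s12 = np.sin(theta[0]), np.sin(theta[0] + theta[1])
    c1, c12 = np.cos(theta[0]), np.cos(theta[0] + theta[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def seq_ik(theta0, ee_targets, iters=50, damping=1e-2):
    """Solve IK waypoint by waypoint, warm-starting each solve from the
    previous joint solution so the plan stays on one smooth IK branch."""
    theta, plan = np.array(theta0, dtype=float), []
    for target in ee_targets:
        for _ in range(iters):
            err = target - fk(theta)
            J = jacobian(theta)
            # Damped least-squares step: theta += J^T (J J^T + lam I)^-1 err
            theta = theta + J.T @ np.linalg.solve(J @ J.T + damping * np.eye(2), err)
        plan.append(theta.copy())
    return np.stack(plan)

# Pretend these came from the learned model: theta_0 plus end-effector
# targets tracing a short pull, as when easing open a drawer or lid.
theta0 = np.array([0.3, 0.6])
targets = [fk(theta0) - np.array([0.0, 0.02 * t]) for t in range(10)]
print(seq_ik(theta0, targets).round(3))
```

Warm-starting each solve from the previous joint vector is presumably what makes a sequential representation both fast to decode and smooth in joint space.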
Related papers
- Neural MP: A Generalist Neural Motion Planner [75.82675575009077]
We seek to do the same by applying data-driven learning at scale to the problem of motion planning.
Our approach builds a large number of complex scenes in simulation, collects expert data from a motion planner, then distills it into a reactive generalist policy.
We perform a thorough evaluation of our method on 64 motion planning tasks across four diverse environments.
arXiv Detail & Related papers (2024-09-09T17:59:45Z)
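As a rough illustration of the distillation step described above (expert data from a motion planner distilled into a reactive policy), here is a minimal behavior-cloning sketch; the observation/action sizes and the MLP are hypothetical, not Neural MP's architecture.

```python
import torch
import torch.nn as nn

obs_dim, act_dim = 64, 7  # hypothetical scene encoding -> joint command

# Reactive generalist policy (stand-in architecture).
policy = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, act_dim),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for expert data collected from a classical planner in simulation.
expert_obs = torch.randn(10_000, obs_dim)
expert_act = torch.randn(10_000, act_dim)

for step in range(1_000):
    idx = torch.randint(0, expert_obs.shape[0], (256,))
    loss = nn.functional.mse_loss(policy(expert_obs[idx]), expert_act[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()
```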
- HYPERmotion: Learning Hybrid Behavior Planning for Autonomous Loco-manipulation [7.01404330241523]
HYPERmotion is a framework that learns, selects and plans behaviors based on tasks in different scenarios.
We combine reinforcement learning with whole-body optimization to generate motion for 38 actuated joints.
Experiments in simulation and the real world show that the learned motions can efficiently adapt to new tasks.
arXiv Detail & Related papers (2024-06-20T18:21:24Z)
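The whole-body-optimization half of such a framework can be pictured as a regularized least-squares problem over the 38 joint velocities; the closed-form solve below is illustrative only, with made-up dimensions and weights.

```python
import numpy as np

n_joints, task_dim = 38, 6

def whole_body_step(J, v_des, reg=1e-3):
    """argmin_qd ||J qd - v_des||^2 + reg ||qd||^2, in closed form."""
    H = J.T @ J + reg * np.eye(n_joints)
    return np.linalg.solve(H, J.T @ v_des)

J = np.random.randn(task_dim, n_joints)  # task Jacobian (stand-in)
v_des = np.random.randn(task_dim)        # desired motion from a learned policy
qd = whole_body_step(J, v_des)
print(qd.shape, np.linalg.norm(J @ qd - v_des))
```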
- Task and Motion Planning for Execution in the Real [24.01204729304763]
This work generates task and motion plans that include actions that cannot be fully grounded at planning time.
Execution combines offline-planned motions and online behaviors until the task goal is reached.
Forty real-robot trials and motivating demonstrations are performed to evaluate the proposed framework.
Results show faster execution times, fewer actions, and higher success rates in problems where diverse gaps arise.
arXiv Detail & Related papers (2024-06-05T22:30:40Z)
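A hedged sketch of the execution pattern described above: a plan mixes fully grounded actions (with precomputed motions) and partially grounded ones resolved by online behaviors, looping until the goal test passes. All types and helpers here are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    name: str
    offline_motion: Optional[list] = None                  # precomputed waypoints
    online_behavior: Optional[Callable[[], bool]] = None   # closes a grounding gap

def execute(plan: list, goal_reached: Callable[[], bool]) -> bool:
    """Dispatch each action to offline playback or an online behavior,
    stopping as soon as the task goal is observed to hold."""
    for action in plan:
        if action.offline_motion is not None:
            for waypoint in action.offline_motion:
                pass  # stand-in: send waypoint to the robot controller
        elif action.online_behavior is not None and not action.online_behavior():
            return False  # online behavior failed to close the gap
        if goal_reached():
            return True
    return goal_reached()

plan = [Action("approach", offline_motion=[[0.1, 0.2], [0.2, 0.2]]),
        Action("grasp_handle", online_behavior=lambda: True)]
print(execute(plan, goal_reached=lambda: False))
```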
- Neural Implicit Swept Volume Models for Fast Collision Detection [0.0]
We present an algorithm combining the speed of deep-learning-based signed distance computations with the strong accuracy guarantees of geometric collision checkers.
We validate our approach in simulated and real-world robotic experiments, and demonstrate that it is able to speed up a commercial bin picking application.
arXiv Detail & Related papers (2024-02-23T12:06:48Z)
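The speed/accuracy combination suggests a filter-and-verify pattern: trust the learned distance when it reports a comfortable margin, and fall back to an exact geometric checker near the surface. The sketch below illustrates that pattern with stand-in models; it is not the paper's algorithm.

```python
import numpy as np

def neural_swept_distance(motion, obstacle_point):
    # Stand-in for a learned implicit model: distance from the point to a
    # unit-sphere swept volume centered at the motion's midpoint.
    center = np.mean(motion, axis=0)
    return np.linalg.norm(obstacle_point - center) - 1.0

def exact_collision_check(motion, obstacle_point):
    # Stand-in for a geometric checker with accuracy guarantees.
    return min(np.linalg.norm(obstacle_point - q) for q in motion) < 1.0

def in_collision(motion, obstacle_point, margin=0.1):
    d = neural_swept_distance(motion, obstacle_point)
    if d > margin:   # confidently free: trust the fast learned model
        return False
    return exact_collision_check(motion, obstacle_point)  # verify near surface

motion = np.linspace([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 20)  # swept path (stand-in)
print(in_collision(motion, np.array([3.0, 0.0, 0.0])),
      in_collision(motion, np.array([0.5, 0.2, 0.0])))
```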
- RoboTAP: Tracking Arbitrary Points for Few-Shot Visual Imitation [36.43143326197769]
Track-Any-Point (TAP) models isolate the relevant motion in a demonstration, and parameterize a low-level controller to reproduce this motion across changes in the scene configuration.
We show this results in robust robot policies that can solve complex object-arrangement tasks such as shape-matching, stacking, and even full path-following tasks such as applying glue and sticking objects together.
arXiv Detail & Related papers (2023-08-30T11:57:04Z)
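One way to read "parameterize a low-level controller" is a point-track-based visual servo: compare currently tracked points against their positions at the same stage of the demonstration and convert the offset into a corrective motion. The sketch below is that reading, with a stand-in tracker and gain, not RoboTAP's actual controller.

```python
import numpy as np

def servo_command(tracked_pts, demo_pts, gain=0.5):
    """Proportional command from the mean 2D offset of the tracked points."""
    error = np.mean(demo_pts - tracked_pts, axis=0)  # pixels
    return gain * error                              # mapped to planar motion

demo_pts = np.array([[120.0, 80.0], [140.0, 82.0], [131.0, 95.0]])
tracked_pts = demo_pts + np.array([6.0, -4.0])       # scene shifted
print(servo_command(tracked_pts, demo_pts))          # ~[-3.,  2.]
```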
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
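A toy sketch of planning in a learned abstract space: encode the observation into a latent state, roll a learned latent dynamics model over candidate action sequences, and score leaves with a learned value head. The random "learned" functions and the exhaustive depth-limited search below are illustrative, not PiZero's actual search.

```python
import numpy as np

rng = np.random.default_rng(0)
# Random stand-ins for the learned encoder, dynamics, and value head.
W_enc = rng.normal(size=(8, 4))
W_dyn = rng.normal(size=(8, 8 + 3))
w_val = rng.normal(size=8)

def encode(obs):           return np.tanh(W_enc @ obs)
def dynamics(z, a_onehot): return np.tanh(W_dyn @ np.concatenate([z, a_onehot]))
def value(z):              return w_val @ z

def plan(obs, depth=3, n_actions=3):
    """Depth-limited search entirely in the abstract space; return the
    first action of the best-scoring action sequence."""
    def search(z, d):
        if d == 0:
            return value(z)
        return max(search(dynamics(z, np.eye(n_actions)[a]), d - 1)
                   for a in range(n_actions))
    z0 = encode(obs)
    scores = [search(dynamics(z0, np.eye(n_actions)[a]), depth - 1)
              for a in range(n_actions)]
    return int(np.argmax(scores))

print(plan(rng.normal(size=4)))
```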
- H-SAUR: Hypothesize, Simulate, Act, Update, and Repeat for Understanding Object Articulations from Interactions [62.510951695174604]
"Hypothesize, Simulate, Act, Update, and Repeat" (H-SAUR) is a probabilistic generative framework that generates hypotheses about how objects articulate given input observations.
We show that the proposed model significantly outperforms the current state-of-the-art articulated object manipulation framework.
We further improve the test-time efficiency of H-SAUR by integrating a learned prior from learning-based vision models.
arXiv Detail & Related papers (2022-10-22T18:39:33Z)
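The hypothesize-simulate-act-update loop can be illustrated as Bayesian filtering over candidate articulation models: each probing action yields an observation, and the belief is reweighted by how well each hypothesis's simulated prediction matches it. The toy door model below is a stand-in, not H-SAUR's generative model.

```python
import numpy as np

hypotheses = ["revolute_left", "revolute_right", "prismatic"]
belief = np.full(3, 1 / 3)

def simulate(h, action):
    # Predicted handle displacement under hypothesis h for a straight pull.
    return {"revolute_left":  np.array([0.8, 0.3]),
            "revolute_right": np.array([0.8, -0.3]),
            "prismatic":      np.array([1.0, 0.0])}[h] * action

def observe(action):
    # Stand-in for the real interaction: the door is actually revolute-left.
    return simulate("revolute_left", action) + np.random.normal(0, 0.05, 2)

for _ in range(5):                       # act, then update the belief
    action = 0.1                         # small exploratory pull
    obs = observe(action)
    lik = np.array([np.exp(-np.sum((obs - simulate(h, action)) ** 2)
                           / (2 * 0.05 ** 2)) for h in hypotheses])
    belief = belief * lik
    belief /= belief.sum()

print(dict(zip(hypotheses, belief.round(3))))
```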
- DiffSkill: Skill Abstraction from Differentiable Physics for Deformable Object Manipulations with Tools [96.38972082580294]
DiffSkill is a novel framework that uses a differentiable physics simulator for skill abstraction to solve deformable object manipulation tasks.
In particular, we first obtain short-horizon skills using individual tools from a gradient-based simulator.
We then learn a neural skill abstractor from the demonstration trajectories, which takes RGBD images as input.
arXiv Detail & Related papers (2022-03-31T17:59:38Z)
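A minimal sketch of obtaining a short-horizon skill from a differentiable simulator: roll differentiable dynamics forward and optimize the action sequence by gradient descent on a final-state loss. The point-mass dynamics below stand in for DiffSkill's deformable-object simulator.

```python
import torch

def rollout(state, actions, dt=0.1):
    """Fully differentiable physics rollout (toy point-mass dynamics)."""
    pos, vel = state
    for a in actions:
        vel = vel + a * dt
        pos = pos + vel * dt
    return pos

actions = torch.zeros(20, 2, requires_grad=True)  # short-horizon skill
opt = torch.optim.Adam([actions], lr=0.1)
start = (torch.zeros(2), torch.zeros(2))
goal = torch.tensor([1.0, 0.5])

for _ in range(200):
    loss = torch.sum((rollout(start, actions) - goal) ** 2)
    opt.zero_grad()
    loss.backward()       # gradients flow through the simulator
    opt.step()

print(rollout(start, actions.detach()))  # should land close to the goal
```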
- A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z)
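One hedged way to picture "priors from contact mechanics" in a differentiable architecture is as an extra loss term that penalizes physically implausible predictions; the quasi-static rule below (no object motion without contact force) is an invented stand-in, not the paper's prior.

```python
import torch

def physics_prior_penalty(pred_obj_vel, contact_force):
    """Penalize predicted motion that has no contact force to drive it."""
    moving = torch.norm(pred_obj_vel, dim=-1)
    pushing = torch.norm(contact_force, dim=-1)
    return torch.mean(moving * torch.relu(1.0 - pushing))

pred_obj_vel = torch.randn(32, 2)   # from the video decoding model (stand-in)
contact_force = torch.rand(32, 2)   # from the contact-mechanics module (stand-in)
recon_loss = torch.tensor(0.42)     # stand-in reconstruction term
loss = recon_loss + 0.1 * physics_prior_penalty(pred_obj_vel, contact_force)
print(loss.item())
```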
- AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control [145.61135774698002]
We propose a fully automated approach to selecting motion for a character to track in a given scenario.
High-level task objectives that the character should perform can be specified by relatively simple reward functions.
Low-level style of the character's behaviors can be specified by a dataset of unstructured motion clips.
Our system produces high-quality motions comparable to those achieved by state-of-the-art tracking-based techniques.
arXiv Detail & Related papers (2021-04-05T22:43:14Z)
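The core idea can be sketched as a reward combination: a task reward plus a style reward from a discriminator trained to distinguish reference motion clips from the policy's transitions. The GAIL-style reward form below is illustrative; AMP's exact discriminator objective and reward differ.

```python
import torch
import torch.nn as nn

# Stand-in discriminator over concatenated (s, s') transition features.
disc = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

def style_reward(state_transition):
    # Higher when the discriminator thinks the motion resembles the dataset.
    return -torch.log(1.0 - torch.sigmoid(disc(state_transition)) + 1e-6)

def total_reward(task_reward, state_transition, w_style=0.5):
    return task_reward + w_style * style_reward(state_transition).item()

transition = torch.randn(16)
print(total_reward(task_reward=1.0, state_transition=transition))
```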
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.