DiMSam: Diffusion Models as Samplers for Task and Motion Planning under
Partial Observability
- URL: http://arxiv.org/abs/2306.13196v2
- Date: Tue, 3 Oct 2023 23:52:05 GMT
- Title: DiMSam: Diffusion Models as Samplers for Task and Motion Planning under
Partial Observability
- Authors: Xiaolin Fang, Caelan Reed Garrett, Clemens Eppner, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Dieter Fox
- Abstract summary: Task and Motion Planning (TAMP) approaches are effective at planning long-horizon autonomous robot manipulation.
We propose to overcome these limitations by leveraging deep generative modeling.
We show how the combination of classical TAMP, generative learning, and latent embeddings enables long-horizon constraint-based reasoning.
- Score: 50.38132214102161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Task and Motion Planning (TAMP) approaches are effective at planning
long-horizon autonomous robot manipulation. However, it can be difficult to
apply them to domains where the environment and its dynamics are not fully
known. We propose to overcome these limitations by leveraging deep generative
modeling, specifically diffusion models, to learn constraints and samplers that
capture these difficult-to-engineer aspects of the planning model. These
learned samplers are composed within a TAMP solver to jointly find action
parameter values that satisfy the constraints along a plan.
To tractably make predictions for unseen objects in the environment, we define
these samplers on low-dimensional learned latent embeddings of changing object
state. We evaluate our approach in an articulated object manipulation domain
and show how the combination of classical TAMP, generative learning, and latent
embeddings enables long-horizon constraint-based reasoning. We also apply the
learned sampler in the real world. More details are available at
https://sites.google.com/view/dimsam-tamp
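As a concrete illustration of the pipeline the abstract describes (learned diffusion samplers composed inside a TAMP solver over latent object states), here is a minimal, self-contained sketch. The denoiser, the conditioning, and the constraints are toy stand-ins; `predict_noise`, the plan tuples, and the rejection loop are hypothetical names and control flow for illustration, not the paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_noise(x_t, t, cond):
    # Hypothetical stand-in for a learned denoiser; a real system would
    # use a trained network that predicts the noise in x_t at step t.
    return x_t - cond  # toy: pulls samples toward the conditioning vector

def diffusion_sample(cond, dim=4, steps=50):
    """Toy ancestral sampler: start from Gaussian noise and denoise."""
    x = rng.normal(size=dim)
    for t in range(steps, 0, -1):
        eps = predict_noise(x, t, cond)
        x = x - eps / steps + 0.1 * rng.normal(size=dim) / np.sqrt(steps)
    return x

def sample_plan_params(plan, max_tries=100):
    """Chain samplers along a plan skeleton, TAMP-style: each action's
    sampler is conditioned on the previous latent state, and candidates
    are rejected until the action's constraint holds."""
    latent = np.zeros(4)  # latent embedding of the initial object state
    params = []
    for condition_on, constraint in plan:
        for _ in range(max_tries):
            candidate = diffusion_sample(condition_on(latent))
            if constraint(candidate):
                break
        else:
            return None  # skeleton infeasible; a TAMP solver would backtrack
        params.append(candidate)
        latent = candidate  # predicted next latent state
    return params

# Two-step plan; box constraints stand in for learned constraints.
plan = [
    (lambda z: z + 1.0, lambda x: np.all(np.abs(x) < 3.0)),
    (lambda z: z * 0.5, lambda x: x[0] > 0.0),
]
print(sample_plan_params(plan))
```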
Related papers
- Latent Diffusion Planning for Imitation Learning [78.56207566743154]
Latent Diffusion Planning (LDP) is a modular approach consisting of a planner and an inverse dynamics model.
By separating planning from action prediction, LDP can benefit from the denser supervision signals of suboptimal and action-free data.
On simulated visual robotic manipulation tasks, LDP outperforms state-of-the-art imitation learning approaches.
arXiv Detail & Related papers (2025-04-23T17:53:34Z)
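To make LDP's modular split concrete, here is a minimal sketch of the interface it implies: a planner proposes a sequence of future latent states, and a separately trained inverse dynamics model recovers the action between each consecutive pair. The class and function names are hypothetical, and the latent planner is reduced to a random-walk stub standing in for the trained latent diffusion model.

```python
import torch
import torch.nn as nn

class InverseDynamics(nn.Module):
    """Predicts the action that takes latent state z_t to z_{t+1}."""
    def __init__(self, z_dim=8, a_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * z_dim, 64), nn.ReLU(), nn.Linear(64, a_dim))

    def forward(self, z_t, z_next):
        return self.net(torch.cat([z_t, z_next], dim=-1))

def plan_latents(z0, horizon):
    # Stand-in for the latent planner: in LDP this would be a trained
    # diffusion model sampling a sequence of future latent states.
    return [z0 + 0.1 * (t + 1) * torch.randn_like(z0) for t in range(horizon)]

# Decode actions from consecutive planned latents.
inv_dyn = InverseDynamics()
z0 = torch.zeros(8)
latents = [z0] + plan_latents(z0, horizon=4)
actions = [inv_dyn(latents[t], latents[t + 1]) for t in range(4)]
print(torch.stack(actions).shape)  # (4, 2): one action per planned step
```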
- Scaling Autonomous Agents via Automatic Reward Modeling And Planning [52.39395405893965]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of tasks.
However, they still struggle with problems requiring multi-step decision-making and environmental feedback.
We propose a framework that can automatically learn a reward model from the environment without human annotations.
arXiv Detail & Related papers (2025-02-17T18:49:25Z)
- Predictive Planner for Autonomous Driving with Consistency Models [5.966385886363771]
Trajectory prediction and planning are essential for autonomous vehicles to navigate safely and efficiently in dynamic environments.
Recent diffusion-based generative models have shown promise in multi-agent trajectory generation, but their slow sampling is less suitable for high-frequency planning tasks.
We leverage the consistency model to build a predictive planner that samples from a joint distribution of ego and surrounding agents, conditioned on the ego vehicle's navigational goal.
arXiv Detail & Related papers (2025-02-12T00:26:01Z)
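The speed argument in the entry above comes down to network evaluations per sample: a trained consistency function maps noise to a clean sample in one (or a few) evaluations, where a diffusion model needs many denoising steps. A schematic comparison, with `consistency_fn` as a hypothetical toy stand-in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(1)

def consistency_fn(x_t, t, goal):
    # Hypothetical trained consistency function f(x_t, t) -> x_0, here
    # conditioned on the ego vehicle's navigational goal.
    return goal + 0.05 * x_t  # toy mapping straight to a clean sample

def sample_one_step(goal, dim=2, t_max=80.0):
    """Consistency sampling: a single function evaluation from pure noise."""
    x_T = t_max * rng.normal(size=dim)
    return consistency_fn(x_T, t_max, goal)

def sample_diffusion(goal, dim=2, steps=50):
    """Multi-step denoising for comparison: `steps` network evaluations."""
    x = rng.normal(size=dim)
    for t in range(steps, 0, -1):
        x = x + (consistency_fn(x, t, goal) - x) / t  # toy denoising update
    return x

goal = np.array([10.0, 5.0])
print("1 eval  :", sample_one_step(goal))
print("50 evals:", sample_diffusion(goal))
```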
- Multi-Robot Motion Planning with Diffusion Models [22.08293753545732]
We propose a method for generating collision-free multi-robot trajectories.
Our algorithm combines learned diffusion models with classical search-based techniques.
We show how to compose multiple diffusion models to plan in large environments.
arXiv Detail & Related papers (2024-10-04T01:31:13Z)
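A common way to compose diffusion models, and plausibly the flavor meant in the entry above, is to sum the score (noise) estimates of several factor models during denoising. The sketch below combines a per-robot goal-seeking score with a pairwise collision-avoidance score; both functions are illustrative stand-ins for learned models, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def single_robot_score(traj, goal):
    # Hypothetical learned score pulling one robot's trajectory to its goal.
    return goal[None, :] - traj

def collision_score(traj_a, traj_b, min_dist=1.0):
    # Repulsive term pushing two trajectories apart when they get close.
    diff = traj_a - traj_b
    dist = np.linalg.norm(diff, axis=-1, keepdims=True) + 1e-8
    return np.where(dist < min_dist, diff / dist, 0.0)

def compose_and_sample(goals, horizon=10, steps=100, lr=0.05):
    """Denoise all robots' trajectories jointly by summing score terms."""
    n = len(goals)
    trajs = rng.normal(size=(n, horizon, 2))
    for _ in range(steps):
        scores = np.stack(
            [single_robot_score(trajs[i], goals[i]) for i in range(n)])
        for i in range(n):  # add pairwise collision-avoidance scores
            for j in range(n):
                if i != j:
                    scores[i] += collision_score(trajs[i], trajs[j])
        trajs += lr * scores + 0.01 * np.sqrt(lr) * rng.normal(size=trajs.shape)
    return trajs

goals = np.array([[5.0, 0.0], [0.0, 5.0]])
print(compose_and_sample(goals)[:, -1])  # final waypoint of each robot
```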
- Interactive Planning Using Large Language Models for Partially Observable Robotics Tasks [54.60571399091711]
Large Language Models (LLMs) have achieved impressive results in creating robotic agents for performing open vocabulary tasks.
We present an interactive planning technique for partially observable tasks using LLMs.
arXiv Detail & Related papers (2023-12-11T22:54:44Z)
- Imitating Task and Motion Planning with Visuomotor Transformers [71.41938181838124]
Task and Motion Planning (TAMP) can autonomously generate large-scale datasets of diverse demonstrations.
In this work, we show that combining large-scale datasets generated by TAMP supervisors with flexible Transformer models that fit them is a powerful paradigm for robot manipulation.
We present a novel imitation learning system called OPTIMUS that trains large-scale visuomotor Transformer policies by imitating a TAMP agent.
arXiv Detail & Related papers (2023-05-25T17:58:14Z)
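Reduced to its skeleton, the recipe in the OPTIMUS entry above is: use the TAMP planner as a scripted supervisor that emits (observation, action) pairs at scale, then fit a policy to them by behavior cloning. The sketch below substitutes a small MLP for OPTIMUS's visuomotor Transformer and synthetic data for the TAMP rollouts; it shows only the training loop, not the actual system.

```python
import torch
import torch.nn as nn

# Stand-in for demonstrations collected by a TAMP supervisor:
# observations and the expert actions the planner executed.
obs = torch.randn(1024, 16)
expert_actions = torch.tanh(obs @ torch.randn(16, 4))  # synthetic "expert"

# OPTIMUS uses a visuomotor Transformer; an MLP keeps the sketch small.
policy = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(200):  # behavior cloning: regress expert actions
    pred = policy(obs)
    loss = nn.functional.mse_loss(pred, expert_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final BC loss: {loss.item():.4f}")
```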
- Approximating Constraint Manifolds Using Generative Models for Sampling-Based Constrained Motion Planning [8.924344714683814]
This paper presents a learning-based sampling strategy for constrained motion planning problems.
We use a Conditional Variational Autoencoder (CVAE) and a Conditional Generative Adversarial Network (CGAN) to generate constraint-satisfying sample configurations.
We evaluate the efficiency of these two generative models in terms of their sampling accuracy and coverage of sampling distribution.
arXiv Detail & Related papers (2022-04-14T07:08:30Z)
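In outline, a CVAE sampler for constraint-satisfying configurations encodes (configuration, condition) pairs into a latent Gaussian, decodes latent plus condition back into a configuration, and at planning time decodes random latents into candidate samples near the constraint manifold. A minimal sketch; the dimensions, architecture, and loss weighting are arbitrary placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    def __init__(self, q_dim=7, c_dim=3, z_dim=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(q_dim + c_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * z_dim))  # -> (mu, logvar)
        self.dec = nn.Sequential(nn.Linear(z_dim + c_dim, 64), nn.ReLU(),
                                 nn.Linear(64, q_dim))
        self.z_dim = z_dim

    def forward(self, q, c):
        mu, logvar = self.enc(torch.cat([q, c], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.dec(torch.cat([z, c], dim=-1)), mu, logvar

    def sample(self, c, n):
        """Planning-time use: decode latents into candidate configurations."""
        z = torch.randn(n, self.z_dim)
        return self.dec(torch.cat([z, c.expand(n, -1)], dim=-1))

model = CVAE()
q, c = torch.randn(32, 7), torch.randn(32, 3)
recon, mu, logvar = model(q, c)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = nn.functional.mse_loss(recon, q) + 1e-3 * kl  # ELBO-style objective
print(loss.item(), model.sample(c[:1], n=5).shape)  # -> torch.Size([5, 7])
```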
- SAGE: Generating Symbolic Goals for Myopic Models in Deep Reinforcement Learning [18.37286885057802]
We propose an algorithm combining learning and planning to exploit a previously unusable class of incomplete models.
This combines the strengths of symbolic planning and neural learning approaches in a novel way that outperforms competing methods on variations of taxi world and Minecraft.
arXiv Detail & Related papers (2022-03-09T22:55:53Z)
- Evaluating model-based planning and planner amortization for continuous control [79.49319308600228]
We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning.
We find that well-tuned model-free agents are strong baselines even for high DoF control problems.
We show that it is possible to distil a model-based planner into a policy that amortizes the planning without any loss of performance.
arXiv Detail & Related papers (2021-10-07T12:00:40Z)
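The MPC half of the hybrid in the entry above can be sketched with the cross-entropy method rolling out a learned dynamics model. The `learned_dynamics` function below is a hand-written placeholder for a model fit from data, and the paper's distilled policy would then be trained to reproduce the planner's chosen first actions.

```python
import numpy as np

rng = np.random.default_rng(3)

def learned_dynamics(state, action):
    # Placeholder for a learned model; a real system fits this from data.
    return state + 0.1 * action

def cost(state, goal):
    return np.sum((state - goal) ** 2, axis=-1)

def cem_mpc(state, goal, horizon=8, pop=64, elites=8, iters=5):
    """Cross-entropy method: iteratively refit a Gaussian over action
    sequences to the lowest-cost rollouts under the learned model."""
    mu = np.zeros((horizon, 2))
    sigma = np.ones((horizon, 2))
    for _ in range(iters):
        plans = mu + sigma * rng.normal(size=(pop, horizon, 2))
        costs = np.zeros(pop)
        s = np.repeat(state[None, :], pop, axis=0)
        for t in range(horizon):
            s = learned_dynamics(s, plans[:, t])
            costs += cost(s, goal)
        elite = plans[np.argsort(costs)[:elites]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu[0]  # execute only the first action, then replan

state, goal = np.zeros(2), np.array([1.0, -2.0])
print(cem_mpc(state, goal))
```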
- Learning Models as Functionals of Signed-Distance Fields for Manipulation Planning [51.74463056899926]
This work proposes an optimization-based manipulation planning framework where the objectives are learned functionals of signed-distance fields that represent objects in the scene.
We show that representing objects as signed-distance fields enables learning and representing a variety of models with higher accuracy than point-cloud and occupancy-measure representations.
arXiv Detail & Related papers (2021-10-02T12:36:58Z)
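The core idea in the entry above, treating objectives as functionals of signed-distance fields, can be illustrated with an analytic SDF in place of a learned one: the field gives a differentiable distance-to-surface that costs can be built on. Everything below (a sphere SDF and a clearance penalty) is an illustrative toy, not the paper's learned models.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: negative inside, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

def clearance_cost(points, center, radius, margin=0.05):
    """A simple functional of the SDF: penalize path points that come
    within `margin` of the object surface (or penetrate it)."""
    d = sphere_sdf(points, center, radius)
    return np.sum(np.maximum(margin - d, 0.0) ** 2)

# Evaluate a candidate end-effector path passing near a 10 cm sphere.
path = np.linspace([-0.3, 0.0, 0.2], [0.3, 0.0, 0.2], 20)
print(clearance_cost(path, center=np.array([0.0, 0.0, 0.2]), radius=0.10))
```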
- Learning Symbolic Operators for Task and Motion Planning [29.639902380586253]
Integrated task and motion planners (TAMP) handle the complex interaction between motion-level decisions and task-level plan feasibility.
TAMP approaches rely on domain-specific symbolic operators to guide the task-level search, making planning efficient.
We propose a bottom-up relational learning method for operator learning and show how the learned operators can be used for planning in a TAMP system.
arXiv Detail & Related papers (2021-02-28T19:08:56Z)
- Conditional Generative Modeling via Learning the Latent Space [54.620761775441046]
We propose a novel framework for conditional generation in multimodal spaces.
It uses latent variables to model generalizable learning patterns.
At inference, the latent variables are optimized to find optimal solutions corresponding to multiple output modes.
arXiv Detail & Related papers (2020-10-07T03:11:34Z)
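The distinctive step in the framework above is at inference time: instead of sampling the latent once, the latent variables are optimized so that the decoded output fits the conditioning signal, and different initializations land in different output modes. A gradient-based sketch with a frozen toy decoder; all names and the toy objective are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Frozen stand-in for a trained decoder mapping (latent, condition) -> output.
decoder = nn.Sequential(nn.Linear(2 + 3, 32), nn.Tanh(), nn.Linear(32, 3))
for p in decoder.parameters():
    p.requires_grad_(False)

def infer(condition, n_modes=4, steps=100):
    """Optimize several latent initializations; each converges to a
    (possibly different) output mode consistent with the condition."""
    z = torch.randn(n_modes, 2, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.05)
    cond = condition.expand(n_modes, -1)
    for _ in range(steps):
        out = decoder(torch.cat([z, cond], dim=-1))
        loss = nn.functional.mse_loss(out, cond)  # toy "match the condition"
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(torch.cat([z, cond], dim=-1)).detach()

print(infer(torch.tensor([[0.5, -0.2, 0.1]])))
```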