Constrained Motion Planning Networks X
- URL: http://arxiv.org/abs/2010.08707v2
- Date: Sat, 3 Jul 2021 22:32:12 GMT
- Title: Constrained Motion Planning Networks X
- Authors: Ahmed H. Qureshi, Jiangeng Dong, Asfiya Baig and Michael C. Yip
- Abstract summary: We present Constrained Motion Planning Networks X (CoMPNetX)
It is a neural planning approach, comprising a conditional deep neural generator and discriminator with a neural gradient-based fast projection operator.
We show that our method finds path solutions with high success rates and lower computation times than state-of-the-art traditional path-finding tools.
- Score: 15.047777217748889
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constrained motion planning is a challenging field of research, aiming for
computationally efficient methods that can find a collision-free path on the
constraint manifolds between a given start and goal configuration. These
planning problems come up surprisingly frequently, such as in robot
manipulation for performing daily life assistive tasks. However, few solutions
to constrained motion planning are available, and those that exist struggle
with high computational time complexity in finding a path solution on the
manifolds. To address this challenge, we present Constrained Motion Planning
Networks X (CoMPNetX). It is a neural planning approach, comprising a
conditional deep neural generator and discriminator with a neural gradient-based
fast projection operator. We also introduce neural task and scene
representations conditioned on which the CoMPNetX generates implicit manifold
configurations to turbo-charge any underlying classical planner such as
Sampling-based Motion Planning methods for quickly solving complex constrained
planning tasks. We show that our method finds path solutions with high success
rates and lower computation times than state-of-the-art traditional
path-finding tools on various challenging scenarios.
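To make the pipeline in the abstract concrete, below is a minimal, hypothetical sketch of its two main ingredients: a conditional neural generator proposing configurations, and a gradient-based projection that pulls each proposal onto the constraint manifold before it is used to seed an underlying sampling-based planner. All names and interfaces (generator, constraint_fn, task_code, scene_code) are illustrative assumptions, not the authors' released code.
```python
# Minimal, hypothetical sketch (not the authors' released code), assuming a
# conditional generator network and a differentiable constraint function
# constraint_fn(q) = F(q), whose zero set defines the constraint manifold.
import torch

def project_to_manifold(q, constraint_fn, step=0.1, tol=1e-4, max_iters=50):
    """Gradient-based projection: iteratively reduce ||F(q)||^2 (illustrative)."""
    q = q.clone().detach().requires_grad_(True)
    for _ in range(max_iters):
        residual = constraint_fn(q).pow(2).sum()
        if residual.item() < tol:
            break
        (grad,) = torch.autograd.grad(residual, q)
        with torch.no_grad():
            q -= step * grad          # move toward the manifold F(q) = 0
    return q.detach()

def propose_seeds(generator, task_code, scene_code, q_start, q_goal,
                  constraint_fn, num_samples=32):
    """Sample configurations from the conditional generator, then project them
    onto the manifold so they can seed an underlying sampling-based planner."""
    seeds = []
    for _ in range(num_samples):
        q_next = generator(task_code, scene_code, q_start, q_goal)  # one forward pass
        seeds.append(project_to_manifold(q_next, constraint_fn))
    return seeds
```
Note that in CoMPNetX the projection step is driven by learned neural gradients; the plain autograd gradient of the constraint residual is used here only as a stand-in for that component.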
Related papers
- Trajectory Manifold Optimization for Fast and Adaptive Kinodynamic Motion Planning [5.982922468400902]
Fast kinodynamic motion planning is crucial for systems to adapt to dynamically changing environments.
We propose a novel neural network model, Differentiable Motion Manifold Primitives (DMMP), along with a practical training strategy.
Experiments with a 7-DoF robot arm tasked with dynamic throwing to arbitrary target positions demonstrate that our method surpasses existing approaches in planning speed, task success, and constraint satisfaction.
arXiv Detail & Related papers (2024-10-16T03:29:33Z) - A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task And Motion Planning (TAMP) is the problem of finding a solution to an automated planning problem that couples symbolic task decisions with continuous robot motions.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z) - Learning Logic Specifications for Policy Guidance in POMDPs: an
Inductive Logic Programming Approach [57.788675205519986]
We learn high-quality traces from POMDP executions generated by any solver.
We exploit data- and time-efficient Inductive Logic Programming (ILP) to generate interpretable belief-based policy specifications.
We show that the learned specifications, expressed in Answer Set Programming (ASP), yield performance superior to neural networks and similar to optimal handcrafted task-specific heuristics within lower computational time.
arXiv Detail & Related papers (2024-02-29T15:36:01Z) - Unified Task and Motion Planning using Object-centric Abstractions of
Motion Constraints [56.283944756315066]
We propose an alternative TAMP approach that unifies task and motion planning into a single search.
Our approach is based on an object-centric abstraction of motion constraints that permits leveraging the computational efficiency of off-the-shelf AI search to yield physically feasible plans.
arXiv Detail & Related papers (2023-12-29T14:00:20Z) - Planning as In-Painting: A Diffusion-Based Embodied Task Planning
Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'
The proposed framework achieves promising performances in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z) - Fast Kinodynamic Planning on the Constraint Manifold with Deep Neural
Networks [29.239926645660823]
This paper introduces a novel learning-to-plan framework that exploits the concept of constraint manifold.
Our approach generates plans satisfying an arbitrary set of constraints and computes them in a short constant time, namely the inference time of a neural network.
We validate our approach on two simulated tasks and in a demanding real-world scenario, where we use a Kuka LBR Iiwa 14 robotic arm to perform the hitting movement in robotic Air Hockey.
arXiv Detail & Related papers (2023-01-11T06:54:11Z) - Simultaneous Contact-Rich Grasping and Locomotion via Distributed
Optimization Enabling Free-Climbing for Multi-Limbed Robots [60.06216976204385]
We present an efficient motion planning framework for simultaneously solving locomotion, grasping, and contact problems.
We demonstrate our proposed framework in hardware experiments, showing that the multi-limbed robot is able to realize various motions, including free-climbing at a slope angle of 45 degrees, with a much shorter planning time.
arXiv Detail & Related papers (2022-07-04T13:52:10Z) - Anytime Stochastic Task and Motion Policies [12.72186877599064]
We present a new approach for integrated task and motion planning in stochastic settings.
Our algorithm is probabilistically complete and can compute feasible solution policies in an anytime fashion.
arXiv Detail & Related papers (2021-08-28T00:23:39Z) - MPC-MPNet: Model-Predictive Motion Planning Networks for Fast,
Near-Optimal Planning under Kinodynamic Constraints [15.608546987158613]
Kinodynamic Motion Planning (KMP) is the problem of computing a robot motion subject to concurrent kinematic and dynamics constraints.
We present a scalable, imitation learning-based, Model-Predictive Motion Planning Networks framework that finds near-optimal path solutions.
We evaluate our algorithms on a range of cluttered, kinodynamically constrained, and underactuated planning problems, with results indicating significant improvements in computation times, path qualities, and success rates over existing methods.
arXiv Detail & Related papers (2021-01-17T23:07:04Z) - Neural Manipulation Planning on Constraint Manifolds [13.774614900994342]
We present Constrained Motion Planning Networks (CoMPNet), the first neural planner for multimodal kinematic constraints.
We show that CoMPNet solves practical motion planning tasks involving both unconstrained and constrained problems.
It generalizes to new object locations not seen during training in the given environments, with high success rates.
arXiv Detail & Related papers (2020-08-09T18:58:10Z) - Jump Operator Planning: Goal-Conditioned Policy Ensembles and Zero-Shot
Transfer [71.44215606325005]
We propose a novel framework called Jump-Operator Dynamic Programming for quickly computing solutions within a super-exponential space of sequential sub-goal tasks.
This approach involves controlling over an ensemble of reusable goal-conditioned policies functioning as temporally extended actions (see the sketch after this list).
We then identify classes of objective functions on this subspace whose solutions are invariant to the grounding, resulting in optimal zero-shot transfer.
arXiv Detail & Related papers (2020-07-06T05:13:20Z)
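As a rough illustration of the "temporally extended actions" in the last entry above, the sketch below rolls out a sequence of sub-goal tasks by running one reusable goal-conditioned policy per sub-goal until that sub-goal is reached. The env, policy, and reached interfaces are assumptions (gym-style) made for illustration, not the paper's API.
```python
# Hypothetical sketch: high-level "actions" are goal-conditioned policies run
# until their sub-goal is reached. env/policy/reached interfaces are assumed
# (gym-style), not taken from the paper.
from typing import Callable, Sequence

def execute_subgoal_plan(env, policies: Sequence[Callable], subgoals: Sequence,
                         reached: Callable, max_steps_per_goal: int = 200):
    """Roll out an ordered sequence of sub-goal tasks."""
    state = env.reset()
    for policy, goal in zip(policies, subgoals):
        for _ in range(max_steps_per_goal):
            if reached(state, goal):          # sub-goal achieved: switch policy
                break
            action = policy(state, goal)      # goal-conditioned low-level action
            state, _, done, _ = env.step(action)
            if done:
                return state
    return state
```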