Neural Manipulation Planning on Constraint Manifolds
- URL: http://arxiv.org/abs/2008.03787v1
- Date: Sun, 9 Aug 2020 18:58:10 GMT
- Title: Neural Manipulation Planning on Constraint Manifolds
- Authors: Ahmed H. Qureshi, Jiangeng Dong, Austin Choe, and Michael C. Yip
- Abstract summary: We present Constrained Motion Planning Networks (CoMPNet), the first neural planner for multimodal kinematic constraints.
We show that CoMPNet solves practical motion planning tasks involving both unconstrained and constrained problems.
It generalizes with high success rates to object locations not seen during training in the given environments.
- Score: 13.774614900994342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The presence of task constraints imposes a significant challenge to motion
planning. Despite all recent advancements, existing algorithms are still
computationally expensive for most planning problems. In this paper, we present
Constrained Motion Planning Networks (CoMPNet), the first neural planner for
multimodal kinematic constraints. Our approach comprises the following
components: i) constraint and environment perception encoders; ii) a neural
robot configuration generator that outputs configurations on/near the
constraint manifold(s); and iii) a bidirectional planning algorithm that takes the
generated configurations to create a feasible robot motion trajectory. We show
that CoMPNet solves practical motion planning tasks involving both
unconstrained and constrained problems. Furthermore, it generalizes with high
success rates to object locations not seen during training in the given
environments. Compared to state-of-the-art constrained motion planning
algorithms, CoMPNet is an order of magnitude faster in computational speed with
significantly lower variance.
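For intuition, the sketch below imitates the pipeline the abstract describes: a neural generator proposes configurations near the constraint manifold (in the real system, conditioned on the constraint and environment encodings), and a bidirectional sampler extends two trees toward each other using those proposals. Every name here (sample_fn, project_to_manifold, collision_free) is an illustrative assumption rather than the authors' released interface, and the projection is a generic finite-difference Newton step, not CoMPNet's actual machinery.

```python
# Hedged sketch of a CoMPNet-style constrained bidirectional planner.
# The "neural" part is a stand-in: sample_fn can be any callable that maps
# (current config, other-tree config) to a proposed configuration.
import numpy as np

def project_to_manifold(q, constraint_fn, step=1.0, iters=50, tol=1e-3, h=1e-6):
    """Push q toward the manifold c(q) = 0 with a Newton-like step on a scalar constraint."""
    q = np.asarray(q, dtype=float).copy()
    for _ in range(iters):
        c = constraint_fn(q)
        if abs(c) < tol:
            break
        # finite-difference gradient of the scalar constraint
        grad = np.array([(constraint_fn(q + h * e) - c) / h for e in np.eye(len(q))])
        denom = grad @ grad
        if denom < 1e-12:
            break
        q -= step * c * grad / denom
    return q

def bidirectional_plan(q_start, q_goal, sample_fn, constraint_fn,
                       collision_free, step_size=0.2, max_iters=500):
    """Grow two node lists (crude stand-ins for trees) toward each other."""
    tree_a, tree_b = [np.asarray(q_start, float)], [np.asarray(q_goal, float)]
    for _ in range(max_iters):
        # 1) the generator proposes a target, 2) project it onto the manifold
        q_target = project_to_manifold(sample_fn(tree_a[-1], tree_b[-1]), constraint_fn)
        q_near = min(tree_a, key=lambda q: np.linalg.norm(q - q_target))
        direction = q_target - q_near
        q_new = q_near + step_size * direction / (np.linalg.norm(direction) + 1e-9)
        q_new = project_to_manifold(q_new, constraint_fn)
        if collision_free(q_near, q_new):
            tree_a.append(q_new)
            if np.linalg.norm(q_new - tree_b[-1]) < step_size:  # crude connect test
                return tree_a + tree_b[::-1]
        tree_a, tree_b = tree_b, tree_a  # alternate which tree is extended
    return None
```

With a trivially satisfied constraint (e.g., constraint_fn = lambda q: 0.0) the projection is a no-op, so the same loop also covers unconstrained problems, consistent with the abstract's claim that CoMPNet handles both settings.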
Related papers
- A Meta-Engine Framework for Interleaved Task and Motion Planning using Topological Refinements [51.54559117314768]
Task And Motion Planning (TAMP) is the problem of finding a solution to an automated planning problem that combines discrete task decisions with continuous motion constraints.
We propose a general and open-source framework for modeling and benchmarking TAMP problems.
We introduce an innovative meta-technique to solve TAMP problems involving moving agents and multiple task-state-dependent obstacles.
arXiv Detail & Related papers (2024-08-11T14:57:57Z) - Potential Based Diffusion Motion Planning [73.593988351275]
We propose a new approach towards learning potential based motion planning.
We train a neural network to capture and learn easily optimizable potentials over motion planning trajectories.
We demonstrate its inherent composability, enabling us to generalize to a multitude of different motion constraints.
arXiv Detail & Related papers (2024-07-08T17:48:39Z) - Unified Task and Motion Planning using Object-centric Abstractions of
Motion Constraints [56.283944756315066]
We propose an alternative TAMP approach that unifies task and motion planning into a single search.
Our approach is based on an object-centric abstraction of motion constraints that permits leveraging the computational efficiency of off-the-shelf AI search to yield physically feasible plans.
arXiv Detail & Related papers (2023-12-29T14:00:20Z) - Planning as In-Painting: A Diffusion-Based Embodied Task Planning
Framework for Environments under Uncertainty [56.30846158280031]
Task planning for embodied AI has been one of the most challenging problems.
We propose a task-agnostic method named 'planning as in-painting'.
The proposed framework achieves promising performance in various embodied AI tasks.
arXiv Detail & Related papers (2023-12-02T10:07:17Z) - Fast Kinodynamic Planning on the Constraint Manifold with Deep Neural
Networks [29.239926645660823]
This paper introduces a novel learning-to-plan framework that exploits the concept of constraint manifold.
Our approach generates plans satisfying an arbitrary set of constraints and computes them in a short constant time, namely the inference time of a neural network.
We validate our approach on two simulated tasks and in a demanding real-world scenario, where we use a Kuka LBR Iiwa 14 robotic arm to perform the hitting movement in robotic Air Hockey.
arXiv Detail & Related papers (2023-01-11T06:54:11Z) - Learning-based Motion Planning in Dynamic Environments Using GNNs and
Temporal Encoding [15.58317292680615]
We propose a GNN-based approach that uses temporal encoding and imitation learning with data aggregation for learning both the embeddings and the edge prioritization policies.
Experiments show that the proposed methods can significantly accelerate online planning over state-of-the-art complete dynamic planning algorithms.
arXiv Detail & Related papers (2022-10-16T01:27:16Z) - Simultaneous Contact-Rich Grasping and Locomotion via Distributed
Optimization Enabling Free-Climbing for Multi-Limbed Robots [60.06216976204385]
We present an efficient motion planning framework for simultaneously solving locomotion, grasping, and contact problems.
We demonstrate the proposed framework in hardware experiments, showing that the multi-limbed robot realizes various motions, including free-climbing on a 45-degree slope, with a much shorter planning time.
arXiv Detail & Related papers (2022-07-04T13:52:10Z) - MPC-MPNet: Model-Predictive Motion Planning Networks for Fast,
Near-Optimal Planning under Kinodynamic Constraints [15.608546987158613]
Kinodynamic Motion Planning (KMP) is the problem of computing a robot motion subject to simultaneous kinematic and dynamic constraints.
We present a scalable, imitation learning-based, Model-Predictive Motion Planning Networks framework that finds near-optimal path solutions.
We evaluate our algorithms on a range of cluttered, kinodynamically constrained, and underactuated planning problems, with results indicating significant improvements in planning times, path quality, and success rates over existing methods.
arXiv Detail & Related papers (2021-01-17T23:07:04Z) - Constrained Motion Planning Networks X [15.047777217748889]
We present Constrained Motion Planning Networks X (CoMPNetX).
It is a neural planning approach comprising a conditional deep neural generator and discriminator with a neural-gradient-based fast projection operator.
We show that our method finds path solutions with high success rates and in lower computation times than state-of-the-art traditional path-finding tools.
arXiv Detail & Related papers (2020-10-17T03:34:38Z) - Learning Equality Constraints for Motion Planning on Manifolds [10.65436139155865]
We consider the problem of learning representations of constraints from demonstrations with a deep neural network.
The key idea is to learn a level-set function of the constraint suitable for integration into a constrained sampling-based motion planner.
We combine both learned and analytically described constraints in the planner and use a projection-based strategy to find valid points (see the projection sketch after this list).
arXiv Detail & Related papers (2020-09-24T17:54:28Z) - Jump Operator Planning: Goal-Conditioned Policy Ensembles and Zero-Shot
Transfer [71.44215606325005]
We propose a novel framework called Jump-Operator Dynamic Programming for quickly computing solutions within a super-exponential space of sequential sub-goal tasks.
This approach involves controlling an ensemble of reusable goal-conditioned policies functioning as temporally extended actions.
We then identify classes of objective functions on this subspace whose solutions are invariant to the grounding, resulting in optimal zero-shot transfer.
arXiv Detail & Related papers (2020-07-06T05:13:20Z)
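Picking up the projection-based strategy mentioned in the "Learning Equality Constraints for Motion Planning on Manifolds" entry above, here is a minimal, hedged Python sketch: a differentiable level-set function c(q), represented by a small placeholder PyTorch module standing in for a learned network, is driven toward zero by gradient steps so that sampled configurations land on the constraint manifold. The module, its architecture, and the helper names are assumptions for illustration, not the cited paper's model.

```python
# Hedged sketch: projecting samples onto a learned level-set constraint c(q) = 0.
# LearnedConstraint is a placeholder network, not the cited paper's architecture.
import torch

class LearnedConstraint(torch.nn.Module):
    """Toy stand-in for a network trained so that c(q) ~ 0 on the manifold."""
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

    def forward(self, q):
        return self.net(q)

def project(constraint, q, lr=0.1, iters=100, tol=1e-3):
    """Gradient-descend |c(q)| to push a sampled configuration onto the manifold."""
    q = q.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([q], lr=lr)
    for _ in range(iters):
        loss = constraint(q).abs().sum()
        if loss.item() < tol:
            break
        opt.zero_grad()
        loss.backward()
        opt.step()
    return q.detach()

# Usage: draw a random configuration and project it.
c = LearnedConstraint(dim=7)           # e.g., a 7-DoF arm configuration
q_valid = project(c, torch.randn(7))   # approximately satisfies c(q) = 0
```

Because only differentiability of c is required, an analytically defined constraint function can be substituted for the network, which is how learned and analytic constraints can coexist in the same planner.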
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.