Fast Kinodynamic Planning on the Constraint Manifold with Deep Neural
Networks
- URL: http://arxiv.org/abs/2301.04330v2
- Date: Thu, 12 Jan 2023 10:36:31 GMT
- Title: Fast Kinodynamic Planning on the Constraint Manifold with Deep Neural
Networks
- Authors: Piotr Kicki, Puze Liu, Davide Tateo, Haitham Bou-Ammar, Krzysztof
Walas, Piotr Skrzypczyński, Jan Peters
- Abstract summary: This paper introduces a novel learning-to-plan framework that exploits the concept of constraint manifold.
Our approach generates plans satisfying an arbitrary set of constraints and computes them in a short constant time, namely the inference time of a neural network.
We validate our approach on two simulated tasks and in a demanding real-world scenario, where we use a Kuka LBR Iiwa 14 robotic arm to perform the hitting movement in robotic Air Hockey.
- Score: 29.239926645660823
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motion planning is a mature area of research in robotics with many
well-established methods based on optimization or sampling the state space,
suitable for solving kinematic motion planning. However, when dynamic motions
under constraints are needed and computation time is limited, fast kinodynamic
planning on the constraint manifold is indispensable. In recent years,
learning-based solutions have become alternatives to classical approaches, but
they still lack comprehensive handling of complex constraints, such as planning
on a lower-dimensional manifold of the task space while considering the robot's
dynamics. This paper introduces a novel learning-to-plan framework that
exploits the concept of constraint manifold, including dynamics, and neural
planning methods. Our approach generates plans satisfying an arbitrary set of
constraints and computes them in a short constant time, namely the inference
time of a neural network. This allows the robot to plan and replan reactively,
making our approach suitable for dynamic environments. We validate our approach
on two simulated tasks and in a demanding real-world scenario, where we use a
Kuka LBR Iiwa 14 robotic arm to perform the hitting movement in robotic Air
Hockey.
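The key property claimed in the abstract is that planning takes the constant inference time of a neural network: the planner maps a boundary condition directly to trajectory parameters through a fixed number of layers. The sketch below is purely illustrative and is not the authors' actual architecture; the network size, the use of B-spline control points, and the hitting-point input are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (not the paper's architecture): a small MLP maps a
# boundary condition (start joint configuration + desired hitting point)
# to spline control points of a joint-space trajectory. Because the network
# has a fixed depth, "planning" costs a constant number of matrix products.

rng = np.random.default_rng(0)

N_JOINTS = 7           # the Kuka LBR Iiwa 14 arm has 7 joints
N_CTRL_PTS = 15        # number of spline control points (assumed)
IN_DIM = N_JOINTS + 3  # start configuration + 3-D hitting point
OUT_DIM = N_CTRL_PTS * N_JOINTS

# Randomly initialized weights stand in for a trained planner.
W1 = rng.standard_normal((IN_DIM, 128)) * 0.1
b1 = np.zeros(128)
W2 = rng.standard_normal((128, OUT_DIM)) * 0.1
b2 = np.zeros(OUT_DIM)

def plan(q_start, hit_point):
    """Return trajectory control points in constant time (one forward pass)."""
    x = np.concatenate([q_start, hit_point])
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2).reshape(N_CTRL_PTS, N_JOINTS)

q0 = np.zeros(N_JOINTS)
target = np.array([0.9, 0.2, 0.1])  # hypothetical hitting point on the table
trajectory = plan(q0, target)
print(trajectory.shape)  # (15, 7)
```

Since every call to `plan` runs the same fixed computation, replanning is as cheap as planning, which is what makes the reactive behavior in dynamic environments described above possible.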
Related papers
- Potential Based Diffusion Motion Planning [73.593988351275]
We propose a new approach to learning potential-based motion planning.
We train a neural network to learn easily optimizable potentials over motion planning trajectories.
We demonstrate its inherent composability, enabling us to generalize to a multitude of different motion constraints.
arXiv Detail & Related papers (2024-07-08T17:48:39Z)
- Unified Task and Motion Planning using Object-centric Abstractions of Motion Constraints [56.283944756315066]
We propose an alternative TAMP approach that unifies task and motion planning into a single search.
Our approach is based on an object-centric abstraction of motion constraints that permits leveraging the computational efficiency of off-the-shelf AI search to yield physically feasible plans.
arXiv Detail & Related papers (2023-12-29T14:00:20Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- Leveraging Scene Embeddings for Gradient-Based Motion Planning in Latent Space [24.95320093765214]
AMP-LS is able to plan in novel, complex scenes while outperforming traditional planning baselines in terms of speed by an order of magnitude.
We show that the resulting system is fast enough to enable closed-loop planning in real-world dynamic scenes.
arXiv Detail & Related papers (2023-03-06T18:49:39Z)
- Learning-based Motion Planning in Dynamic Environments Using GNNs and Temporal Encoding [15.58317292680615]
We propose a GNN-based approach that uses temporal encoding and imitation learning with data aggregation for learning both the embeddings and the edge prioritization policies.
Experiments show that the proposed methods can significantly accelerate online planning over state-of-the-art complete dynamic planning algorithms.
arXiv Detail & Related papers (2022-10-16T01:27:16Z)
- Simultaneous Contact-Rich Grasping and Locomotion via Distributed Optimization Enabling Free-Climbing for Multi-Limbed Robots [60.06216976204385]
We present an efficient motion planning framework for simultaneously solving locomotion, grasping, and contact problems.
We demonstrate our proposed framework in hardware experiments, showing that the multi-limbed robot can realize various motions, including free-climbing on a 45-degree slope, with a much shorter planning time.
arXiv Detail & Related papers (2022-07-04T13:52:10Z)
- MPC-MPNet: Model-Predictive Motion Planning Networks for Fast, Near-Optimal Planning under Kinodynamic Constraints [15.608546987158613]
Kinodynamic Motion Planning (KMP) is the problem of computing a robot motion subject to concurrent kinematic and dynamic constraints.
We present a scalable, imitation learning-based, Model-Predictive Motion Planning Networks framework that finds near-optimal path solutions.
We evaluate our algorithms on a range of cluttered, kinodynamically constrained, and underactuated planning problems with results indicating significant improvements in times, path qualities, and success rates over existing methods.
arXiv Detail & Related papers (2021-01-17T23:07:04Z)
- Neural Dynamic Policies for End-to-End Sensorimotor Learning [51.24542903398335]
The current dominant paradigm in sensorimotor control, whether imitation or reinforcement learning, is to train policies directly in raw action spaces.
We propose Neural Dynamic Policies (NDPs) that make predictions in trajectory distribution space.
NDPs outperform the prior state-of-the-art in terms of either efficiency or performance across several robotic control tasks.
arXiv Detail & Related papers (2020-12-04T18:59:32Z)
- Constrained Motion Planning Networks X [15.047777217748889]
We present Constrained Motion Planning Networks X (CoMPNetX).
It is a neural planning approach comprising a conditional deep neural generator and discriminator with a fast, neural-gradient-based projection operator.
We show that our method finds path solutions with high success rates and lower computation times than state-of-the-art traditional path-finding tools.
arXiv Detail & Related papers (2020-10-17T03:34:38Z)
- Neural Manipulation Planning on Constraint Manifolds [13.774614900994342]
We present Constrained Motion Planning Networks (CoMPNet), the first neural planner for multimodal kinematic constraints.
We show that CoMPNet solves practical motion planning tasks involving both unconstrained and constrained problems.
It generalizes with high success rates to new object locations not seen during training in the given environments.
arXiv Detail & Related papers (2020-08-09T18:58:10Z)
- Thinking While Moving: Deep Reinforcement Learning with Concurrent Control [122.49572467292293]
We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system.
Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed.
arXiv Detail & Related papers (2020-04-13T17:49:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.