Physics-informed Neural Motion Planning on Constraint Manifolds
- URL: http://arxiv.org/abs/2403.05765v1
- Date: Sat, 9 Mar 2024 02:24:02 GMT
- Title: Physics-informed Neural Motion Planning on Constraint Manifolds
- Authors: Ruiqi Ni and Ahmed H. Qureshi
- Abstract summary: Constrained Motion Planning (CMP) aims to find a collision-free path between the given start and goal configurations on the kinematic constraint manifold.
We propose the first physics-informed CMP framework that solves the Eikonal equation on the constraint manifold and trains a neural function for CMP without expert data.
Our results show that the proposed approach efficiently solves various CMP problems in both simulation and the real world, including object manipulation under orientation constraints and door opening with a high-dimensional 6-DOF robot manipulator.
- Score: 6.439800184169697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constrained Motion Planning (CMP) aims to find a collision-free path between
the given start and goal configurations on the kinematic constraint manifolds.
These problems appear in various scenarios ranging from object manipulation to
legged-robot locomotion. However, the zero-volume nature of manifolds makes the
CMP problem challenging, and the state-of-the-art methods still take several
seconds to find a path and require a computationally expensive path dataset for
imitation learning. Recently, physics-informed motion planning methods have
emerged that directly solve the Eikonal equation through neural networks for
motion planning and do not require expert demonstrations for learning. Inspired
by these approaches, we propose the first physics-informed CMP framework that
solves the Eikonal equation on the constraint manifolds and trains a neural
function for CMP without expert data. Our results show that the proposed
approach efficiently solves various CMP problems in both simulation and the
real world, including object manipulation under orientation constraints and
door opening with a high-dimensional 6-DOF robot manipulator. In these complex
settings, our method exhibits high success rates and finds paths in
sub-second times, many times faster than state-of-the-art CMP methods.
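The core idea is to replace imitation data with a physics-informed loss on the Eikonal equation, |grad T(q)| = 1/S(q), where T is the predicted arrival time and S a speed field that drops near obstacles. The sketch below is a minimal, hypothetical illustration of such a loss restricted to a constraint manifold's tangent space; `time_field`, `speed`, and `tangent_basis` are placeholder functions, and this is not the authors' implementation.

```python
# Minimal sketch of a physics-informed Eikonal loss on a constraint manifold.
# Assumptions (all hypothetical placeholders, not from the paper's code):
#   time_field(q_start, q_goal) -> predicted travel time, shape (batch,)
#   speed(q)                    -> desired speed field, shape (batch,)
#   tangent_basis(q)            -> orthonormal tangent-space basis, (batch, dof, k)
import torch

def eikonal_loss(time_field, speed, tangent_basis, q_start, q_goal):
    q_goal = q_goal.clone().requires_grad_(True)
    T = time_field(q_start, q_goal)                      # predicted arrival time
    grad_T = torch.autograd.grad(T.sum(), q_goal, create_graph=True)[0]
    # Project the gradient onto the tangent space so the PDE is enforced
    # on the constraint manifold rather than in the ambient joint space.
    B = tangent_basis(q_goal)                            # (batch, dof, k)
    grad_tangent = torch.einsum('bdk,bd->bk', B, grad_T)
    # Eikonal equation: ||grad T|| = 1 / S(q); penalize the squared residual.
    residual = grad_tangent.norm(dim=-1) - 1.0 / speed(q_goal)
    return (residual ** 2).mean()
```

Minimizing this residual over sampled start/goal pairs trains the network without any expert paths; a path can then be recovered by following the (projected) gradient of T from start to goal.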
Related papers
- Trajectory Manifold Optimization for Fast and Adaptive Kinodynamic Motion Planning [5.982922468400902]
Fast kinodynamic motion planning is crucial for systems to adapt to dynamically changing environments.
We propose a novel neural network model, Differentiable Motion Manifold Primitives (DMMP), along with a practical training strategy.
Experiments with a 7-DoF robot arm tasked with dynamic throwing to arbitrary target positions demonstrate that our method surpasses existing approaches in planning speed, task success, and constraint satisfaction.
arXiv Detail & Related papers (2024-10-16T03:29:33Z) - DeNoising-MOT: Towards Multiple Object Tracking with Severe Occlusions [52.63323657077447]
We propose DNMOT, an end-to-end trainable DeNoising Transformer for multiple object tracking.
Specifically, we augment the trajectory with noises during training and make our model learn the denoising process in an encoder-decoder architecture.
We conduct extensive experiments on the MOT17, MOT20, and DanceTrack datasets, and the experimental results show that our method outperforms previous state-of-the-art methods by a clear margin.
arXiv Detail & Related papers (2023-09-09T04:40:01Z) - Progressive Learning for Physics-informed Neural Motion Planning [1.9798034349981157]
Motion planning is one of the core robotics problems requiring fast methods for finding a collision-free robot motion path.
Recent advancements have led to a physics-informed NMP approach that directly solves the Eikonal equation for motion planning.
This paper presents a novel and tractable Eikonal equation formulation and introduces a new progressive learning strategy to train neural networks without expert data.
arXiv Detail & Related papers (2023-06-01T12:41:05Z) - Fast Kinodynamic Planning on the Constraint Manifold with Deep Neural
Networks [29.239926645660823]
This paper introduces a novel learning-to-plan framework that exploits the concept of constraint manifold.
Our approach generates plans satisfying an arbitrary set of constraints and computes them in a short constant time, namely the inference time of a neural network.
We validate our approach on two simulated tasks and in a demanding real-world scenario, where we use a Kuka LBR Iiwa 14 robotic arm to perform the hitting movement in robotic Air Hockey.
arXiv Detail & Related papers (2023-01-11T06:54:11Z) - Guaranteed Conservation of Momentum for Learning Particle-based Fluid
Dynamics [96.9177297872723]
We present a novel method for guaranteeing linear momentum conservation in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
arXiv Detail & Related papers (2022-10-12T09:12:59Z) - NTFields: Neural Time Fields for Physics-Informed Robot Motion Planning [1.9798034349981157]
We propose Neural Time Fields (NTFields) for robot motion planning in cluttered scenarios.
Our framework represents a wave propagation model generating continuous arrival times to find path solutions, informed by a nonlinear first-order PDE called the Eikonal equation.
We evaluate our method in various cluttered 3D environments, including the Gibson dataset, and demonstrate its ability to solve motion planning problems for 4-DOF and 6-DOF robot manipulators.
arXiv Detail & Related papers (2022-09-30T22:34:54Z) - Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy gradient algorithm for TMDPs as a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z) - Simultaneous Contact-Rich Grasping and Locomotion via Distributed
Optimization Enabling Free-Climbing for Multi-Limbed Robots [60.06216976204385]
We present an efficient motion planning framework for simultaneously solving locomotion, grasping, and contact problems.
We demonstrate our proposed framework in hardware experiments, showing that the multi-limbed robot is able to realize various motions, including free-climbing on a 45-degree slope, with a much shorter planning time.
arXiv Detail & Related papers (2022-07-04T13:52:10Z) - Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value
Functions [65.84090965167535]
We present Neural Motion Fields, a novel object representation which encodes both object point clouds and the relative task trajectories as an implicit value function parameterized by a neural network.
This object-centric representation models a continuous distribution over the SE(3) space and allows us to perform grasping reactively by leveraging sampling-based MPC to optimize this value function.
arXiv Detail & Related papers (2022-06-29T18:47:05Z) - MPC-MPNet: Model-Predictive Motion Planning Networks for Fast,
Near-Optimal Planning under Kinodynamic Constraints [15.608546987158613]
Kinodynamic Motion Planning (KMP) computes a robot motion subject to concurrent kinematic and dynamic constraints.
We present a scalable, imitation learning-based, Model-Predictive Motion Planning Networks framework that finds near-optimal path solutions.
We evaluate our algorithms on a range of cluttered, kinodynamically constrained, and underactuated planning problems, with results indicating significant improvements in computation times, path quality, and success rates over existing methods.
arXiv Detail & Related papers (2021-01-17T23:07:04Z) - End-to-end Learning for Inter-Vehicle Distance and Relative Velocity
Estimation in ADAS with a Monocular Camera [81.66569124029313]
We propose a camera-based inter-vehicle distance and relative velocity estimation method based on end-to-end training of a deep neural network.
The key novelty of our method is the integration of multiple visual clues provided by any two time-consecutive monocular frames.
We also propose a vehicle-centric sampling mechanism to alleviate the effect of perspective distortion in the motion field.
arXiv Detail & Related papers (2020-06-07T08:18:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.