Learning Equality Constraints for Motion Planning on Manifolds
- URL: http://arxiv.org/abs/2009.11852v1
- Date: Thu, 24 Sep 2020 17:54:28 GMT
- Title: Learning Equality Constraints for Motion Planning on Manifolds
- Authors: Giovanni Sutanto, Isabel M. Rayas Fernández, Peter Englert, Ragesh
K. Ramachandran, Gaurav S. Sukhatme
- Abstract summary: We consider the problem of learning representations of constraints from demonstrations with a deep neural network.
The key idea is to learn a level-set function of the constraint suitable for integration into a constrained sampling-based motion planner.
We combine both learned constraints and analytically described constraints into the planner and use a projection-based strategy to find valid points.
- Score: 10.65436139155865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constrained robot motion planning is a widely used technique to solve complex
robot tasks. We consider the problem of learning representations of constraints
from demonstrations with a deep neural network, which we call Equality
Constraint Manifold Neural Network (ECoMaNN). The key idea is to learn a
level-set function of the constraint suitable for integration into a
constrained sampling-based motion planner. Learning proceeds by aligning
subspaces in the network with subspaces of the data. We combine both learned
constraints and analytically described constraints into the planner and use a
projection-based strategy to find valid points. We evaluate ECoMaNN on its
representation capabilities of constraint manifolds, the impact of its
individual loss terms, and the motions produced when incorporated into a
planner.
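The projection-based strategy described in the abstract can be sketched in a few lines. Below is a minimal, illustrative Gauss-Newton projection of a sampled configuration onto a level-set constraint h(q) = 0; the unit-sphere constraint and the function names are stand-ins (in ECoMaNN, h would be the learned network and its Jacobian would come from automatic differentiation), not the paper's actual implementation.

```python
import numpy as np

def h(q):
    # Stand-in level-set constraint: unit sphere ||q||^2 - 1 = 0.
    # In ECoMaNN this would be the learned network's output.
    return np.array([q @ q - 1.0])

def jacobian(q):
    # Analytic Jacobian of h; a learned constraint would use autodiff.
    return 2.0 * q.reshape(1, -1)

def project_to_manifold(q, tol=1e-8, max_iter=50):
    # Gauss-Newton projection: repeatedly move q against the constraint
    # violation until h(q) ~ 0, as in projection-based constrained sampling.
    q = q.astype(float).copy()
    for _ in range(max_iter):
        err = h(q)
        if np.linalg.norm(err) < tol:
            break
        J = jacobian(q)
        q -= np.linalg.pinv(J) @ err
    return q

sample = np.array([0.7, -1.3, 0.2])  # a sample drawn in the ambient space
valid = project_to_manifold(sample)  # now (approximately) on the manifold
```

A sampling-based planner would apply such a projection to every candidate configuration before attempting to connect it to the tree.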
Related papers
- Deep Neural Network for Constraint Acquisition through Tailored Loss
Function [0.0]
The significance of learning constraints from data is underscored by its potential applications in real-world problem-solving.
This work introduces a novel approach grounded in a Deep Neural Network (DNN) based on Symbolic Regression.
arXiv Detail & Related papers (2024-03-04T13:47:33Z)
- Unified Task and Motion Planning using Object-centric Abstractions of
Motion Constraints [56.283944756315066]
We propose an alternative TAMP approach that unifies task and motion planning into a single search.
Our approach is based on an object-centric abstraction of motion constraints that permits leveraging the computational efficiency of off-the-shelf AI search to yield physically feasible plans.
arXiv Detail & Related papers (2023-12-29T14:00:20Z)
- Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
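For intuition, a hard linear constraint A u = b on a model output u can be enforced exactly by orthogonal projection: u' = u - Aᵀ(AAᵀ)⁻¹(Au - b). The sketch below uses a toy sum-to-one constraint as an assumed example; the paper's operators may additionally involve derivatives of the field, which this sketch does not cover.

```python
import numpy as np

def enforce_linear_constraint(u, A, b):
    # Orthogonal projection of u onto the affine set {u : A u = b}.
    residual = A @ u - b
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return u - correction

A = np.array([[1.0, 1.0, 1.0]])  # constraint: components sum to b
b = np.array([1.0])
u = np.array([0.5, 0.9, 0.2])    # unconstrained network output
u_hard = enforce_linear_constraint(u, A, b)  # satisfies A @ u_hard == b
```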
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
- Fast Kinodynamic Planning on the Constraint Manifold with Deep Neural
Networks [29.239926645660823]
This paper introduces a novel learning-to-plan framework that exploits the concept of constraint manifold.
Our approach generates plans satisfying an arbitrary set of constraints and computes them in a short constant time, namely the inference time of a neural network.
We validate our approach on two simulated tasks and in a demanding real-world scenario, where we use a Kuka LBR Iiwa 14 robotic arm to perform the hitting movement in robotic Air Hockey.
arXiv Detail & Related papers (2023-01-11T06:54:11Z)
- Deep Learning Techniques for Visual Counting [0.13537117504260618]
We investigated and enhanced Deep Learning (DL) techniques for counting objects in still images or video frames.
In particular, we tackled the challenge related to the lack of data needed for training current DL-based solutions.
arXiv Detail & Related papers (2022-06-07T06:20:40Z)
- Semantic Probabilistic Layers for Neuro-Symbolic Learning [83.25785999205932]
We design a predictive layer for structured-output prediction (SOP).
It can be plugged into any neural network, guaranteeing that its predictions are consistent with a set of predefined symbolic constraints.
Our Semantic Probabilistic Layer (SPL) can model intricate correlations and hard constraints over a structured output space.
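The core guarantee here, that invalid structures receive exactly zero probability, can be illustrated by brute force on a tiny output space. The constraint and structures below are assumed toy examples, not SPL's actual (circuit-based, tractable) machinery.

```python
import math

def constrained_softmax(logits, structures, is_valid):
    # Mask structures violating the symbolic constraint, then renormalize,
    # so the resulting distribution is consistent with the constraint.
    exps = [math.exp(l) if is_valid(s) else 0.0
            for l, s in zip(logits, structures)]
    z = sum(exps)
    return [e / z for e in exps]

# Structures: binary pairs; constraint: at least one bit must be set.
structures = [(0, 0), (0, 1), (1, 0), (1, 1)]
probs = constrained_softmax([0.2, 1.0, 0.5, 0.1], structures,
                            is_valid=lambda s: sum(s) >= 1)
# The invalid structure (0, 0) receives exactly zero probability.
```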
arXiv Detail & Related papers (2022-06-01T12:02:38Z)
- Neural Network-based Control for Multi-Agent Systems from
Spatio-Temporal Specifications [0.757024681220677]
We use Spatio-Temporal Reach and Escape Logic (STREL) as a specification language.
We map control synthesis problems with STREL specifications to optimization problems and propose a combination of gradient-free and gradient-based methods to solve them.
We develop a machine learning technique that uses the results of the offline optimizations to train a neural network that outputs control inputs for the current states.
arXiv Detail & Related papers (2021-04-06T18:08:09Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining Neural Networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- Neural Manipulation Planning on Constraint Manifolds [13.774614900994342]
We present Constrained Motion Planning Networks (CoMPNet), the first neural planner for multimodal kinematic constraints.
We show that CoMPNet solves practical motion planning tasks involving both unconstrained and constrained problems.
It generalizes with high success rates to object locations not seen during training in the given environments.
arXiv Detail & Related papers (2020-08-09T18:58:10Z)
- An Integer Linear Programming Framework for Mining Constraints from Data [81.60135973848125]
We present a general framework for mining constraints from data.
In particular, we consider the inference in structured output prediction as an integer linear programming (ILP) problem.
We show that our approach can learn to solve 9x9 Sudoku puzzles and minimal spanning tree problems from examples without providing the underlying rules.
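The idea of inference as constrained optimization over discrete outputs can be shown on a toy scale. The sketch below does brute-force enumeration instead of a real ILP solver, and the "mined" cardinality constraint is an assumed example of the kind of linear constraint such a framework might recover from data.

```python
from itertools import product

def constrained_argmax(scores, k):
    # Structured inference: maximize the linear score over binary
    # assignments subject to a mined linear constraint sum(y) == k.
    best, best_val = None, float("-inf")
    for assign in product([0, 1], repeat=len(scores)):
        if sum(assign) != k:  # mined constraint: exactly k variables active
            continue
        val = sum(s * a for s, a in zip(scores, assign))
        if val > best_val:
            best, best_val = assign, val
    return best

print(constrained_argmax([3.0, -1.0, 2.0, 0.5], k=2))  # -> (1, 0, 1, 0)
```

A real ILP formulation would hand the same objective and constraints to an exact solver, which scales far beyond what enumeration allows.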
arXiv Detail & Related papers (2020-06-18T20:09:53Z)
- Local Propagation in Constraint-based Neural Network [77.37829055999238]
We study a constraint-based representation of neural network architectures.
We investigate a simple optimization procedure that is well suited to fulfil the so-called architectural constraints.
arXiv Detail & Related papers (2020-02-18T16:47:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.