Constant-time Motion Planning with Anytime Refinement for Manipulation
- URL: http://arxiv.org/abs/2311.00837v1
- Date: Wed, 1 Nov 2023 20:40:10 GMT
- Title: Constant-time Motion Planning with Anytime Refinement for Manipulation
- Authors: Itamar Mishani, Hayden Feddock, Maxim Likhachev
- Abstract summary: We propose an anytime refinement approach that works in combination with constant-time motion planning (CTMP) algorithms.
Operating as a constant-time algorithm, our proposed framework rapidly generates an initial solution within a user-defined time threshold;
functioning as an anytime algorithm, it then iteratively refines the solution's quality within the allocated time budget.
- Score: 19.717413012382714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robotic manipulators are essential for future autonomous systems, yet limited
trust in their autonomy has confined them to rigid, task-specific systems. The
intricate configuration space of manipulators, coupled with the challenges of
obstacle avoidance and constraint satisfaction, often makes motion planning the
bottleneck for achieving reliable and adaptable autonomy. Recently, a class of
constant-time motion planners (CTMP) was introduced. These planners employ a
preprocessing phase to compute data structures that allow the online planning
phase to provably guarantee generation of a motion plan, possibly
sub-optimal, within a user-defined time bound. This framework has been
demonstrated to be effective in a number of time-critical tasks. However,
robotic systems often have more time allotted for planning than the online
portion of CTMP requires, time that can be used to improve the solution. To
this end, we propose an anytime refinement approach that works in combination
with CTMP algorithms. Operating as a constant-time algorithm, our proposed
framework rapidly generates an initial solution within a user-defined time
threshold. Furthermore, functioning as an anytime algorithm, it iteratively
refines the solution's quality within the allocated time budget. This enables
our approach to strike a balance between guaranteed fast plan generation and
the pursuit of optimization over time. We support our approach by elucidating
its analytical properties, showing the convergence of the anytime component
towards optimal solutions. Additionally, we provide empirical validation
through simulation and real-world demonstrations on a 6 degree-of-freedom robot
manipulator, applied to an assembly domain.
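The abstract describes a two-phase scheme: a constant-time online query that retrieves a precomputed (possibly sub-optimal) plan within a fixed time bound, followed by anytime refinement that monotonically improves the plan until the budget expires. The following is a minimal illustrative sketch of that pattern, not the paper's implementation; `ctmp_query`, the 1-D waypoint paths, the cost function, and the `shortcut` refinement step are all hypothetical stand-ins.

```python
import time

def ctmp_query(start, goal, lookup):
    # Constant-time online phase (hypothetical): retrieve a precomputed,
    # possibly sub-optimal path from the preprocessing-phase lookup table.
    return lookup.get((start, goal), [start, goal])

def path_cost(path):
    # Illustrative cost: total distance along a 1-D waypoint path.
    return sum(abs(b - a) for a, b in zip(path, path[1:]))

def anytime_refine(path, budget_s, refine_step):
    # Anytime phase: keep the best solution found so far and improve it
    # until the time budget is exhausted; cost never increases.
    best = path
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        candidate = refine_step(best)
        if path_cost(candidate) < path_cost(best):
            best = candidate
    return best

def shortcut(path):
    # Crude refinement step for the sketch: drop one interior waypoint.
    return [path[0]] + path[2:] if len(path) > 2 else path

# Toy example: the precomputed path detours through waypoint 5.
lookup = {(0, 2): [0, 5, 2]}
initial = ctmp_query(0, 2, lookup)                  # returned within the bound
refined = anytime_refine(initial, 0.01, shortcut)   # spends the leftover budget
```

The key property the paper highlights is preserved by construction here: the refined solution is never worse than the initial constant-time one, so the planner can be interrupted at any point after the initial bound and still return a valid plan.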
Related papers
- TTT: A Temporal Refinement Heuristic for Tenuously Tractable Discrete Time Reachability Problems [8.696305200911455]
Reachable set computation is an important tool for analyzing control systems.
We introduce an automatic framework for performing temporal refinement.
We show that our algorithm is able to generate approximate reachable sets with a similar amount of error to the baseline approach in 20-70% less time.
arXiv Detail & Related papers (2024-07-19T15:16:25Z) - Dynamic Scheduling for Federated Edge Learning with Streaming Data [56.91063444859008]
We consider a Federated Edge Learning (FEEL) system where training data are randomly generated over time at a set of distributed edge devices with long-term energy constraints.
Due to limited communication resources and latency requirements, only a subset of devices is scheduled for participating in the local training process in every iteration.
arXiv Detail & Related papers (2023-05-02T07:41:16Z) - Distributed Allocation and Scheduling of Tasks with Cross-Schedule
Dependencies for Heterogeneous Multi-Robot Teams [2.294915015129229]
We present a distributed task allocation and scheduling algorithm for missions where the tasks of different robots are tightly coupled with temporal and precedence constraints.
An application of the planning procedure to a practical use case of a greenhouse maintained by a multi-robot system is given.
arXiv Detail & Related papers (2021-09-07T13:44:28Z) - Anytime Stochastic Task and Motion Policies [12.72186877599064]
We present a new approach for integrated task and motion planning in stochastic settings.
Our algorithm is probabilistically complete and can compute feasible solution policies in an anytime fashion.
arXiv Detail & Related papers (2021-08-28T00:23:39Z) - SABER: Data-Driven Motion Planner for Autonomously Navigating
Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z) - Efficient Temporal Piecewise-Linear Numeric Planning with Lazy
Consistency Checking [4.834203844100679]
We propose a set of techniques that allow the planner to compute LP consistency checks lazily where possible.
We also propose an algorithm to perform duration-dependent goal checking more selectively.
The resulting planner is not only more efficient but also outperforms most state-of-the-art temporal-numeric and hybrid planners.
arXiv Detail & Related papers (2021-05-21T07:36:54Z) - Better than the Best: Gradient-based Improper Reinforcement Learning for
Network Scheduling [60.48359567964899]
We consider the problem of scheduling in constrained queueing networks with a view to minimizing packet delay.
We use a policy gradient based reinforcement learning algorithm that produces a scheduler that performs better than the available atomic policies.
arXiv Detail & Related papers (2021-05-01T10:18:34Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Jump Operator Planning: Goal-Conditioned Policy Ensembles and Zero-Shot
Transfer [71.44215606325005]
We propose a novel framework called Jump-Operator Dynamic Programming for quickly computing solutions within a super-exponential space of sequential sub-goal tasks.
This approach involves control over an ensemble of reusable goal-conditioned policies functioning as temporally extended actions.
We then identify classes of objective functions on this subspace whose solutions are invariant to the grounding, resulting in optimal zero-shot transfer.
arXiv Detail & Related papers (2020-07-06T05:13:20Z) - Online Reinforcement Learning Control by Direct Heuristic Dynamic
Programming: from Time-Driven to Event-Driven [80.94390916562179]
Time-driven learning refers to the machine learning method that updates parameters in a prediction model continuously as new data arrives.
It is desirable to prevent the time-driven dHDP from updating due to insignificant system events such as noise.
We show how the event-driven dHDP algorithm works in comparison to the original time-driven dHDP.
arXiv Detail & Related papers (2020-06-16T05:51:25Z) - Trajectory Optimization for Nonlinear Multi-Agent Systems using
Decentralized Learning Model Predictive Control [5.2647625557619815]
We present a decentralized minimum-time trajectory optimization scheme based on learning model predictive control for multi-agent systems with nonlinear decoupled dynamics and coupled state constraints.
Our framework results in a decentralized controller, which requires no communication between agents over each iteration of task execution, and guarantees persistent feasibility, finite-time closed-loop convergence, and non-decreasing performance of the global system over task iterations.
arXiv Detail & Related papers (2020-04-02T23:04:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.