CoverLib: Classifiers-equipped Experience Library by Iterative Problem Distribution Coverage Maximization for Domain-tuned Motion Planning
- URL: http://arxiv.org/abs/2405.02968v2
- Date: Tue, 7 May 2024 09:36:54 GMT
- Title: CoverLib: Classifiers-equipped Experience Library by Iterative Problem Distribution Coverage Maximization for Domain-tuned Motion Planning
- Authors: Hirokazu Ishida, Naoki Hiraoka, Kei Okada, Masayuki Inaba
- Abstract summary: CoverLib iteratively adds an experience-classifier-pair to the library.
It selects the next experience based on its ability to effectively cover the uncovered region.
It achieves both fast planning and high success rates over the problem domain.
- Score: 14.580628884001593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Library-based methods are known to be very effective for fast motion planning by adapting an experience retrieved from a precomputed library. This article presents CoverLib, a principled approach for constructing and utilizing such a library. CoverLib iteratively adds an experience-classifier-pair to the library, where each classifier corresponds to an adaptable region of the experience within the problem space. This iterative process is an active procedure, as it selects the next experience based on its ability to effectively cover the uncovered region. During the query phase, these classifiers are utilized to select an experience that is expected to be adaptable for a given problem. Experimental results demonstrate that CoverLib effectively mitigates the trade-off between plannability and speed observed in global (e.g. sampling-based) and local (e.g. optimization-based) methods. As a result, it achieves both fast planning and high success rates over the problem domain. Moreover, due to its adaptation-algorithm-agnostic nature, CoverLib seamlessly integrates with various adaptation methods, including nonlinear programming-based and sampling-based algorithms.
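The iterative construction described in the abstract resembles a greedy set-cover procedure: repeatedly pick the candidate experience whose adaptable region covers the most still-uncovered problems. The sketch below is a toy illustration of that idea only, not CoverLib's actual implementation; the names (`build_library`, `is_adaptable`) and the set-based "classifier" stand-in are assumptions.

```python
# Toy sketch of iterative problem-distribution coverage maximization:
# greedily add (experience, covered-set) pairs until the budget is spent
# or no candidate covers any remaining problem. In the real method each
# pair would carry a learned classifier; here the covered index set
# stands in for it.
def build_library(problems, candidates, is_adaptable, n_pairs):
    """Greedily select experiences that maximize coverage of `problems`."""
    uncovered = set(range(len(problems)))
    library = []
    for _ in range(n_pairs):
        best, best_cover = None, set()
        for exp in candidates:
            # region of the problem space this experience can adapt to
            cover = {i for i in uncovered if is_adaptable(exp, problems[i])}
            if len(cover) > len(best_cover):
                best, best_cover = exp, cover
        if best is None or not best_cover:
            break  # nothing left to gain from any candidate
        library.append((best, best_cover))
        uncovered -= best_cover  # active step: shrink the uncovered region
    return library, uncovered
```

At query time, the analogue of the classifier lookup is simply checking which stored pair covers the given problem. With experiences modeled as integer intervals, the loop first picks the widest interval and then fills the remaining gap.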
Related papers
- Adaptive Experimentation When You Can't Experiment [55.86593195947978]
This paper introduces the confounded pure exploration transductive linear bandit (CPET-LB) problem.
Online services can employ a properly randomized encouragement that incentivizes users toward a specific treatment.
arXiv Detail & Related papers (2024-06-15T20:54:48Z)
- Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars [66.823588073584]
Large language models (LLMs) have shown impressive capabilities in real-world applications.
The quality of these exemplars in the prompt greatly impacts performance.
Existing methods fail to adequately account for the impact of exemplar ordering on performance.
arXiv Detail & Related papers (2024-05-25T08:23:05Z)
- Experiment Planning with Function Approximation [49.50254688629728]
We study the problem of experiment planning with function approximation in contextual bandit problems.
We propose two experiment planning strategies compatible with function approximation.
We show that a uniform sampler achieves competitive optimality rates in the setting where the number of actions is small.
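The uniform-sampler result above can be illustrated with a small hedged sketch (toy code, not the paper's algorithm; the action set and `reward_fn` are made up): with few actions, sampling uniformly during the planning phase gathers enough data per action to pick the best one offline.

```python
import random

# Toy illustration of uniform sampling for experiment planning:
# explore every action uniformly at random, then select the action
# with the highest empirical mean reward.
def uniform_plan(actions, reward_fn, n_rounds, seed=0):
    rng = random.Random(seed)
    totals = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(n_rounds):
        a = rng.choice(actions)            # uniform exploration
        totals[a] += reward_fn(a)
        counts[a] += 1
    # offline selection after data collection
    return max(actions, key=lambda a: totals[a] / max(counts[a], 1))
```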
arXiv Detail & Related papers (2024-01-10T14:40:23Z)
- LLM Interactive Optimization of Open Source Python Libraries -- Case Studies and Generalization [0.0]
This paper presents methodologically stringent case studies on the well-known open-source Python libraries Pillow and NumPy.
We find that contemporary LLM ChatGPT-4 is surprisingly adept at optimizing energy and compute efficiency.
We conclude that LLMs are a promising tool for code optimization in open source libraries, but that the human expert in the loop is essential for success.
arXiv Detail & Related papers (2023-12-08T13:52:57Z)
- Realistic Unsupervised CLIP Fine-tuning with Universal Entropy Optimization [101.08992036691673]
This paper explores a realistic unsupervised fine-tuning scenario, considering the presence of out-of-distribution samples from unknown classes.
In particular, we focus on simultaneously enhancing out-of-distribution detection and the recognition of instances associated with known classes.
We present a simple, efficient, and effective approach called Universal Entropy Optimization (UEO).
arXiv Detail & Related papers (2023-08-24T16:47:17Z)
- Provably Efficient Learning in Partially Observable Contextual Bandit [4.910658441596583]
We show how causal bounds can be applied to improving classical bandit algorithms.
This research has the potential to enhance the performance of contextual bandit agents in real-world applications.
arXiv Detail & Related papers (2023-08-07T13:24:50Z)
- LibAUC: A Deep Learning Library for X-Risk Optimization [43.32145407575245]
This paper introduces the award-winning deep learning (DL) library called LibAUC.
LibAUC implements state-of-the-art algorithms towards optimizing a family of risk functions named X-risks.
arXiv Detail & Related papers (2023-06-05T17:43:46Z)
- Multi-Task Option Learning and Discovery for Stochastic Path Planning [27.384742641275228]
This paper addresses the problem of reliably and efficiently solving broad classes of long-horizon path planning problems.
Our approach computes useful options with policies as well as high-level paths that compose the discovered options.
We show that this approach yields strong guarantees of executability and solvability.
arXiv Detail & Related papers (2022-09-30T19:57:52Z)
- Adaptive Learning for the Resource-Constrained Classification Problem [14.19197444541245]
Resource-constrained classification tasks are common in real-world applications such as allocating tests for disease diagnosis.
We design an adaptive learning approach that considers resource constraints and learning jointly by iteratively fine-tuning misclassification costs.
We envision the adaptive learning approach as an important addition to the repertoire of techniques for handling resource-constrained classification problems.
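A minimal, purely illustrative sketch of the cost-tuning idea above (the threshold rule, numbers, and function name are assumptions, not the paper's method): shrink the misclassification cost of the positive class until the number of flagged cases fits the resource budget.

```python
# Illustrative only: iteratively fine-tune the cost of missing a positive.
# A Bayes-style rule flags a case when p * cost > (1 - p), i.e. when
# p exceeds 1 / (1 + cost); lowering the cost raises the threshold and
# reduces resource usage until it fits the budget.
def fit_costs_to_budget(scores, budget, cost=10.0, step=0.05, max_iter=500):
    """scores: predicted probabilities that each case needs the costly test."""
    for _ in range(max_iter):
        threshold = 1.0 / (1.0 + cost)
        n_flagged = sum(p >= threshold for p in scores)
        if n_flagged <= budget or cost <= step:
            break
        cost -= step  # fine-tune the misclassification cost downward
    return threshold, n_flagged
```

For example, with scores [0.9, 0.8, 0.2, 0.05] and a budget of two tests, the loop lowers the cost until the decision threshold rises just above 0.2, flagging only the two high-risk cases.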
arXiv Detail & Related papers (2022-07-19T11:00:33Z)
- Semi-supervised Batch Active Learning via Bilevel Optimization [89.37476066973336]
We formulate our approach as a data summarization problem via bilevel optimization.
We show that our method is highly effective in keyword detection tasks in the regime where only a few labeled samples are available.
arXiv Detail & Related papers (2020-10-19T16:53:24Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We do a systematic study on the effects of the most common issues of real-world datasets on the deep active learning process.
We derive two techniques, partial uncertainty sampling and a larger query size, that can speed up the active learning loop.
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.