CARE: Confidence-rich Autonomous Robot Exploration using Bayesian Kernel
Inference and Optimization
- URL: http://arxiv.org/abs/2309.05200v1
- Date: Mon, 11 Sep 2023 02:30:06 GMT
- Title: CARE: Confidence-rich Autonomous Robot Exploration using Bayesian Kernel
Inference and Optimization
- Authors: Yang Xu, Ronghao Zheng, Senlin Zhang, Meiqin Liu, Shoudong Huang
- Abstract summary: We consider improving the efficiency of information-based autonomous robot exploration in unknown and complex environments.
We propose a novel lightweight information gain inference method based on Bayesian kernel inference and optimization (BKIO).
We show the desired efficiency of our proposed methods without losing exploration performance in different unstructured, cluttered environments.
- Score: 12.32946442160165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider improving the efficiency of information-based
autonomous robot exploration in unknown and complex environments. We first
utilize Gaussian process (GP) regression to learn a surrogate model to infer
the confidence-rich mutual information (CRMI) of querying control actions, then
adopt an objective function consisting of predicted CRMI values and prediction
uncertainties to conduct Bayesian optimization (BO), i.e., GP-based BO (GPBO).
The trade-off between the best action with the highest CRMI value
(exploitation) and the action with high prediction variance (exploration) can
be realized. To further improve the efficiency of GPBO, we propose a novel
lightweight information gain inference method based on Bayesian kernel
inference and optimization (BKIO), achieving an approximate logarithmic
complexity without the need for training. BKIO can also infer the CRMI and
generate the best action using BO with bounded cumulative regret, which ensures
its comparable accuracy to GPBO with much higher efficiency. Extensive
numerical and real-world experiments show the desired efficiency of our
proposed methods without losing exploration performance in different
unstructured, cluttered environments. We also provide our open-source
implementation code at https://github.com/Shepherd-Gregory/BKIO-Exploration.
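The GPBO loop described in the abstract, a GP surrogate predicting CRMI for candidate control actions, combined with an objective that trades off predicted value (exploitation) against predictive uncertainty (exploration), can be sketched as a minimal upper-confidence-bound acquisition. The RBF kernel, the toy stand-in for CRMI, and the parameter `beta` below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5, variance=1.0):
    """Squared-exponential kernel between rows (actions) of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xq, noise=1e-6):
    """GP posterior mean and variance at query actions Xq,
    conditioned on evaluated actions X with observed values y."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xq)
    Kss = rbf_kernel(Xq, Xq)
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha
    v = np.linalg.solve(K, Ks)
    var = np.clip(np.diag(Kss - Ks.T @ v), 0.0, None)
    return mu, var

def ucb_select(X, y, Xq, beta=2.0):
    """Pick the candidate maximizing mean + beta * std: high predicted
    CRMI (exploitation) vs. high prediction variance (exploration)."""
    mu, var = gp_posterior(X, y, Xq)
    return Xq[np.argmax(mu + beta * np.sqrt(var))]

# Toy stand-in for evaluating the CRMI of a 2-D control action.
crmi = lambda a: np.exp(-np.sum((a - 0.3) ** 2) / 0.05)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 2))      # actions already evaluated
y = np.array([crmi(a) for a in X])      # their observed CRMI values
Xq = rng.uniform(0, 1, size=(200, 2))   # candidate next actions
best = ucb_select(X, y, Xq)             # action chosen by the BO step
```

In a full exploration loop, the chosen action would be executed, its true CRMI observed, and the pair appended to `(X, y)` before the next BO step; BKIO replaces the cubic-cost GP posterior here with a kernel-inference update of approximately logarithmic complexity.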
Related papers
- Bayesian Optimization for Hyperparameters Tuning in Neural Networks [0.0]
Bayesian Optimization is a derivative-free global optimization method suitable for black-box functions with continuous inputs and limited evaluation budgets.
This study investigates the application of BO to hyperparameter tuning of neural networks, specifically targeting the enhancement of convolutional neural networks (CNNs).
Experimental outcomes reveal that BO effectively balances exploration and exploitation, converging rapidly towards optimal settings for CNN architectures.
This approach underlines the potential of BO in automating neural network tuning, contributing to improved accuracy and computational efficiency in machine learning pipelines.
arXiv Detail & Related papers (2024-10-29T09:23:24Z)
- Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation [55.75188191403343]
We introduce a utility function, predefined by each user, that describes the trade-off between the cost and performance of BO.
We validate our algorithm on various LC datasets and find that it outperforms all the previous multi-fidelity BO and transfer-BO baselines we consider.
arXiv Detail & Related papers (2024-05-28T07:38:39Z)
- Bigger, Regularized, Optimistic: scaling for compute and sample-efficient continuous control [1.1404490220482764]
BRO achieves state-of-the-art results, significantly outperforming the leading model-based and model-free algorithms.
BRO is the first model-free algorithm to achieve near-optimal policies in the notoriously challenging Dog and Humanoid tasks.
arXiv Detail & Related papers (2024-05-25T09:53:25Z)
- Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method to reinforce-learn a BBO algorithm from offline data in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is to augment the optimization histories with regret-to-go tokens, which are designed to represent the performance of an algorithm based on cumulative regret over the future part of the histories.
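The regret-to-go idea can be illustrated with a small numpy sketch. Here I assume the instantaneous regret at step t is the gap between the global optimum f* and the best value found so far, and the regret-to-go token is the sum of these regrets over the remaining steps; this is a plausible reading of the abstract, not necessarily RIBBO's exact definition:

```python
import numpy as np

def regret_to_go(values, f_star):
    """Given the history of function values an optimizer obtained,
    compute the per-step regret-to-go: the cumulative regret
    accumulated over the *future* part of the history."""
    best_so_far = np.maximum.accumulate(values)   # maximization setting
    inst_regret = f_star - best_so_far            # >= 0 at every step
    # Reverse cumulative sum: regret still to be incurred from step t on.
    return inst_regret[::-1].cumsum()[::-1]

history = np.array([0.2, 0.5, 0.4, 0.9, 0.9])
tokens = regret_to_go(history, f_star=1.0)
# tokens decrease along the history: a stronger remaining trajectory
# leaves less regret to go, which is what the sequence model conditions on.
```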
arXiv Detail & Related papers (2024-02-27T11:32:14Z)
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- PG-LBO: Enhancing High-Dimensional Bayesian Optimization with Pseudo-Label and Gaussian Process Guidance [31.585328335396607]
Current mainstream methods overlook the potential of utilizing a pool of unlabeled data to construct the latent space.
We propose a novel method to effectively utilize unlabeled data with the guidance of labeled data.
Our proposed method outperforms existing VAE-BO algorithms in various optimization scenarios.
arXiv Detail & Related papers (2023-12-28T11:57:58Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAEs) are a powerful and widely used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.