Model-Based Parameter Optimization for Ground Texture Based Localization Methods
- URL: http://arxiv.org/abs/2109.01559v1
- Date: Fri, 3 Sep 2021 14:29:36 GMT
- Title: Model-Based Parameter Optimization for Ground Texture Based Localization Methods
- Authors: Jan Fabian Schmid, Stephan F. Simon, Rudolf Mester
- Abstract summary: A promising approach to accurate positioning of robots is ground texture based localization.
We derive a prediction model for localization performance, which requires only a small collection of sample images of an application area.
- Score: 16.242924916178286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A promising approach to accurate positioning of robots is ground texture
based localization. It is based on the observation that visual features of
ground images enable fingerprint-like place recognition. We tackle the issue of
efficient parametrization of such methods, deriving a prediction model for
localization performance, which requires only a small collection of sample
images of an application area. In a first step, we examine whether the model
can predict the effects of changing one of the most important parameters of
feature-based localization methods: the number of extracted features. We
examine two localization methods, and in both cases our evaluation shows that
the predictions are sufficiently accurate. Since this model can be used to find
suitable values for any parameter, we then present a holistic parameter
optimization framework, which finds suitable texture-specific parameter
configurations, using only the model to evaluate the considered parameter
configurations.
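
The abstract only outlines the framework, so the following is a minimal Python sketch of the idea under stated assumptions: candidate parameter configurations (for example, the number of extracted features) are scored purely by a performance prediction model evaluated on a small sample of ground images, and the best-scoring configuration is kept. The function name `predict_success_rate` and the grid entries are hypothetical placeholders, not the authors' implementation.

```python
from itertools import product

def predict_success_rate(sample_images, config):
    # Hypothetical stand-in for the paper's prediction model: estimate the
    # localization success rate on the application area from a small sample
    # of ground images, without running full localization experiments.
    raise NotImplementedError("plug in the actual prediction model here")

def optimize_parameters(sample_images, grid):
    # Score every parameter configuration with the model alone and keep the
    # best one, mirroring the model-only evaluation described in the abstract.
    best_config, best_score = None, float("-inf")
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        config = dict(zip(keys, values))
        score = predict_success_rate(sample_images, config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

# Illustrative grid: the number of extracted features is the parameter the
# paper studies first; the second entry is purely hypothetical.
grid = {"num_features": [250, 500, 1000, 2000], "detector_threshold": [10, 20, 40]}
```
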
Related papers
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of three optimizers and four parameterizations.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z) - Estimating Material Properties of Interacting Objects Using Sum-GP-UCB [17.813871065276636]
We present a Bayesian optimization approach to identifying the material property parameters of objects based on a set of observations.
We show that our method can effectively perform incremental learning without resetting the rewards of the gathered observations.
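
The summary names a GP-UCB-style Bayesian optimization approach; for reference, below is a generic, hedged GP-UCB loop in its textbook form, not the paper's Sum-GP-UCB decomposition or its incremental-learning scheme. The objective `simulate_and_score` is a hypothetical placeholder for a simulator compared against the observations.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate_and_score(theta):
    # Hypothetical placeholder objective: negative squared distance to a fake
    # "true" parameter vector. Replace with a simulator that compares
    # predicted and observed object interactions.
    return -np.sum((theta - 0.3) ** 2)

def gp_ucb(bounds, n_iters=30, beta=2.0, n_candidates=1000, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    X = rng.uniform(lo, hi, size=(3, len(bounds)))      # small initial design
    y = np.array([simulate_and_score(x) for x in X])
    gp = GaussianProcessRegressor(normalize_y=True)
    for _ in range(n_iters):
        gp.fit(X, y)
        cand = rng.uniform(lo, hi, size=(n_candidates, len(bounds)))
        mu, sigma = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(mu + beta * sigma)]     # UCB acquisition
        X = np.vstack([X, x_next])
        y = np.append(y, simulate_and_score(x_next))
    return X[np.argmax(y)], y.max()

best_theta, best_score = gp_ucb(bounds=[(0.0, 1.0), (0.0, 1.0)])
```
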
arXiv Detail & Related papers (2023-10-18T07:16:06Z) - On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded in our theory, how to choose the tunable parameters remains an open problem.
arXiv Detail & Related papers (2022-11-28T17:41:48Z) - Exploring validation metrics for offline model-based optimisation with diffusion models [50.404829846182764]
In model-based optimisation (MBO) we are interested in using machine learning to design candidates that maximise some measure of reward with respect to a black box function called the (ground truth) oracle.
While an approximation to the ground truth oracle can be trained and used in its place during model validation to measure the mean reward over generated candidates, the evaluation is approximate and vulnerable to adversarial examples.
This is encapsulated in our proposed evaluation framework, which is also designed to measure extrapolation.
arXiv Detail & Related papers (2022-11-19T16:57:37Z) - RbX: Region-based explanations of prediction models [69.3939291118954]
Region-based explanations (RbX) is a model-agnostic method to generate local explanations of scalar outputs from a black-box prediction model.
RbX is guaranteed to satisfy a "sparsity axiom," which requires that features which do not enter into the prediction model are assigned zero importance.
arXiv Detail & Related papers (2022-10-17T03:38:06Z) - Parameter-efficient Model Adaptation for Vision Transformers [45.3460867776953]
We study parameter-efficient model adaptation strategies for vision transformers on the image classification task.
We propose a parameter-efficient model adaptation framework, which first selects submodules by measuring local intrinsic dimensions.
Our method performs the best in terms of the tradeoff between accuracy and parameter efficiency across 20 image classification datasets.
arXiv Detail & Related papers (2022-03-29T05:30:09Z) - A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
arXiv Detail & Related papers (2022-02-01T21:21:56Z) - Parameter Tuning Strategies for Metaheuristic Methods Applied to Discrete Optimization of Structural Design [0.0]
This paper presents several strategies to tune the parameters of metaheuristic methods for (discrete) design optimization of reinforced concrete (RC) structures.
A novel utility metric is proposed, based on the area under the average performance curve.
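
The utility metric is only described verbally; a plausible reading, sketched below under that assumption, is to average the best-so-far objective value across repeated runs and integrate the resulting average performance curve. The exact normalization and sign convention in the paper may differ.

```python
import numpy as np

def utility(histories):
    """histories: (n_runs, n_evaluations) array of objective values recorded
    after each evaluation of each run (assumed minimization)."""
    best_so_far = np.minimum.accumulate(histories, axis=1)
    avg_curve = best_so_far.mean(axis=0)               # average performance curve
    return np.trapz(avg_curve) / (len(avg_curve) - 1)  # normalized area under it

# Toy usage: two runs of five evaluations each; a smaller value indicates that
# good designs were reached earlier on average (assumed interpretation).
print(utility(np.array([[9.0, 7.0, 7.0, 5.0, 5.0],
                        [8.0, 8.0, 6.0, 6.0, 4.0]])))
```
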
arXiv Detail & Related papers (2021-10-12T17:34:39Z) - Optimizing model-agnostic Random Subspace ensembles [5.680512932725364]
We present a model-agnostic ensemble approach for supervised learning.
The proposed approach alternates between learning an ensemble of models using a parametric version of the Random Subspace approach and optimizing the feature-sampling distribution that parametrizes it.
We show the good performance of the proposed approach, both in terms of prediction and feature ranking, on simulated and real-world datasets.
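
For context, a plain (non-parametric) Random Subspace ensemble can be sketched as follows; the paper's actual contribution, optimizing the feature-sampling distribution itself, is deliberately omitted from this sketch.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeRegressor

def fit_random_subspace(X, y, base=None, n_models=50, subspace_size=5, seed=0):
    # Train each base model on a randomly drawn subset of the features.
    rng = np.random.default_rng(seed)
    base = base if base is not None else DecisionTreeRegressor()
    models = []
    for _ in range(n_models):
        feats = rng.choice(X.shape[1], size=subspace_size, replace=False)
        models.append((feats, clone(base).fit(X[:, feats], y)))
    return models

def predict_ensemble(models, X):
    # Average the predictions of all subspace models.
    return np.mean([m.predict(X[:, feats]) for feats, m in models], axis=0)
```
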
arXiv Detail & Related papers (2021-09-07T13:58:23Z) - Condensing Two-stage Detection with Automatic Object Key Part Discovery [87.1034745775229]
Two-stage object detectors generally require excessively large models for their detection heads to achieve high accuracy.
We propose that the model parameters of two-stage detection heads can be condensed and reduced by concentrating on object key parts.
Our proposed technique consistently maintains the original performance while discarding around 50% of the model parameters of common two-stage detection heads.
arXiv Detail & Related papers (2020-06-10T01:20:47Z) - Black-Box Saliency Map Generation Using Bayesian Optimisation [5.414308305392763]
Saliency maps are often used in computer vision to provide intuitive interpretations of what input regions a model has used to produce a specific prediction.
This work proposes an approach for saliency map generation for black-box models, where no access to model parameters is available.
arXiv Detail & Related papers (2020-01-30T14:39:12Z)