A Minimal Subset Approach for Efficient and Scalable Loop Closure
- URL: http://arxiv.org/abs/2501.01791v1
- Date: Fri, 03 Jan 2025 12:48:01 GMT
- Title: A Minimal Subset Approach for Efficient and Scalable Loop Closure
- Authors: Nikolaos Stathoulopoulos, Christoforos Kanellakis, George Nikolakopoulos
- Abstract summary: Loop closure detection in large-scale and long-term missions can be computationally demanding.
Keyframe sampling mitigates this by reducing the number of frames stored and processed in the back-end system.
In this article, we address the gap in optimized sampling for the combined problem of pose graph optimization and loop closure detection.
- Score: 7.066382982173528
- License:
- Abstract: Loop closure detection in large-scale and long-term missions can be computationally demanding due to the need to identify, verify, and process numerous candidate pairs to establish edge connections for the pose graph optimization. Keyframe sampling mitigates this by reducing the number of frames stored and processed in the back-end system. In this article, we address the gap in optimized keyframe sampling for the combined problem of pose graph optimization and loop closure detection. Our Minimal Subset Approach (MSA) employs an optimization strategy with two key factors, redundancy minimization and information preservation, within a sliding window framework to efficiently reduce redundant keyframes, while preserving essential information. This method delivers comparable performance to baseline approaches, while enhancing scalability and reducing computational overhead. Finally, we evaluate MSA on relevant publicly available datasets, showcasing that it consistently performs across a wide range of environments, without requiring any manual parameter tuning.
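The abstract does not spell out the sampling rule itself, but the sliding-window idea can be illustrated with a small sketch. The snippet below greedily drops keyframes whose descriptors are nearly redundant with already-kept frames inside each window; the cosine-similarity redundancy test, the window size, and the threshold are illustrative assumptions, not the actual MSA formulation.

```python
import numpy as np

def select_minimal_subset(descriptors, window=10, sim_thresh=0.95):
    """Greedy sliding-window keyframe selection (illustrative sketch only).

    descriptors : (N, D) array with one global descriptor per keyframe.
    Within each window, a keyframe is dropped when its descriptor is
    nearly identical (cosine similarity > sim_thresh) to one already
    kept in that window, i.e. it adds little new information.
    Returns the indices of the retained keyframes.
    """
    n = descriptors.shape[0]
    unit = descriptors / np.maximum(
        np.linalg.norm(descriptors, axis=1, keepdims=True), 1e-12)

    keep = np.ones(n, dtype=bool)
    for start in range(0, n, window):
        kept = []
        for i in range(start, min(start + window, n)):
            if kept and float(np.max(unit[i] @ unit[kept].T)) > sim_thresh:
                keep[i] = False          # redundant w.r.t. the retained set
            else:
                kept.append(i)           # preserves new information
    return np.flatnonzero(keep)

# Example: 100 keyframe descriptors containing many near-duplicates.
rng = np.random.default_rng(0)
base = rng.normal(size=(20, 64))
frames = np.repeat(base, 5, axis=0) + 0.01 * rng.normal(size=(100, 64))
print(len(select_minimal_subset(frames)))   # far fewer than 100 keyframes kept
```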
Related papers
- Gumbel Reranking: Differentiable End-to-End Reranker Optimization [61.16471123356738]
RAG systems rely on rerankers to identify relevant documents.
Fine-tuning these models remains challenging due to the scarcity of annotated query-document pairs.
We propose Gumbel Reranking, an end-to-end training framework for rerankers aimed at minimizing the training-inference gap.
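As a rough sketch of the end-to-end idea (not the paper's model or objective), the snippet below uses a Gumbel-softmax relaxation so that document selection stays differentiable and a downstream loss can back-propagate into the reranker; the linear scorer and the per-document losses are placeholders.

```python
import torch
import torch.nn.functional as F

def gumbel_weights(scores, tau=0.5):
    """Soft, differentiable selection over candidate documents.

    Adding Gumbel noise to the reranker logits and applying a
    temperature-controlled softmax gives stochastic selection weights
    that approach a hard argmax as tau -> 0 while staying differentiable
    (torch.nn.functional.gumbel_softmax implements the same relaxation).
    """
    gumbel = -torch.log(-torch.log(torch.rand_like(scores) + 1e-9) + 1e-9)
    return F.softmax((scores + gumbel) / tau, dim=-1)

# Toy end-to-end step: a linear scorer over query-document features.
torch.manual_seed(0)
reranker = torch.nn.Linear(16, 1)
optimizer = torch.optim.Adam(reranker.parameters(), lr=1e-2)

doc_features = torch.randn(8, 16)      # 8 candidate documents for one query
downstream_loss = torch.rand(8)        # toy per-document downstream loss

scores = reranker(doc_features).squeeze(-1)   # (8,) reranker logits
weights = gumbel_weights(scores)              # soft document selection
loss = (weights * downstream_loss).sum()      # expected downstream loss
loss.backward()                               # gradients reach the reranker
optimizer.step()
```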
arXiv Detail & Related papers (2025-02-16T13:23:39Z) - Why Sample Space Matters: Keyframe Sampling Optimization for LiDAR-based Place Recognition [6.468510459310326]
We introduce the concept of sample space and propose a novel sampling approach for LiDAR-based place recognition.
Our approach demonstrates robust performance across diverse datasets, with the ability to adapt seamlessly from indoor to outdoor scenarios.
arXiv Detail & Related papers (2024-10-03T16:29:47Z) - Quantization Avoids Saddle Points in Distributed Optimization [1.579622195923387]
Distributed nonconvex optimization underpins key functionalities of numerous distributed systems.
The aim of this paper is to prove that quantization can effectively help iterates escape saddle points and ensure convergence to a second-order stationary point.
With easily adjustable quantization granularity, the approach gives the user control to aggressively reduce communication overhead.
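A minimal sketch of the quantized-communication idea, assuming an unbiased stochastic quantizer and a toy distributed least-squares problem (neither is the paper's setting): the grid spacing is the user-adjustable knob that trades communication bits against quantization noise, and the averaged iterate still converges.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(v, step):
    """Unbiased stochastic rounding of v to a grid of spacing `step`.
    A coarser grid means fewer bits per exchanged value (less
    communication) at the price of more quantization noise."""
    lower = np.floor(v / step) * step
    prob_up = (v - lower) / step
    return lower + step * (rng.random(v.shape) < prob_up)

# Toy distributed least-squares: each of 4 agents holds private data and
# only quantized local gradients are "communicated" for averaging.
theta_true = np.array([1.0, -2.0, 0.5])
agents_A = [rng.normal(size=(20, 3)) for _ in range(4)]
agents_b = [A @ theta_true + 0.01 * rng.normal(size=20) for A in agents_A]

x = np.zeros(3)
step_size, quant_step = 0.1, 0.1      # quant_step is the user-adjustable knob
for _ in range(500):
    local_grads = [A.T @ (A @ x - b) / len(b) for A, b in zip(agents_A, agents_b)]
    g = np.mean([stochastic_quantize(gi, quant_step) for gi in local_grads], axis=0)
    x -= step_size * g

print(x)   # close to theta_true despite the coarse quantization
```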
arXiv Detail & Related papers (2024-03-15T15:58:20Z) - Large-scale Fully-Unsupervised Re-Identification [78.47108158030213]
We propose two strategies to learn from large-scale unlabeled data.
The first strategy performs a local neighborhood sampling to reduce the dataset size in each iteration without violating neighborhood relationships.
A second strategy leverages a novel Re-Ranking technique, which has a lower time upper bound and reduces the memory complexity from O(n^2) to O(kn) with k << n.
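The snippet below is only a generic illustration of neighbour-list re-ranking with O(kn) memory, storing k neighbours per sample instead of a full n x n distance matrix; it is not the specific Re-Ranking technique proposed in the paper.

```python
import numpy as np

def topk_neighbors(feats, k):
    """Keep only each sample's k nearest neighbours: O(k*n) memory,
    instead of retaining the full O(n^2) distance matrix for re-ranking."""
    n = feats.shape[0]
    nbrs = np.empty((n, k), dtype=np.int64)
    for i in range(n):                       # one row of distances at a time
        d = np.linalg.norm(feats - feats[i], axis=1)
        d[i] = np.inf                        # exclude the sample itself
        nbrs[i] = np.argpartition(d, k)[:k]
    return nbrs

def rerank(query_idx, nbrs):
    """Re-rank a query's candidates by how much their neighbour lists
    overlap with the query's (a Jaccard criterion on k-NN sets)."""
    q_set = set(nbrs[query_idx])
    scored = []
    for c in nbrs[query_idx]:
        c_set = set(nbrs[c])
        scored.append((len(q_set & c_set) / len(q_set | c_set), int(c)))
    return [c for _, c in sorted(scored, reverse=True)]

feats = np.random.default_rng(0).normal(size=(200, 32))
nbrs = topk_neighbors(feats, k=10)
print(rerank(0, nbrs)[:5])    # query 0's candidates after re-ranking
```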
arXiv Detail & Related papers (2023-07-26T16:19:19Z) - A Meta-Learning Based Precoder Optimization Framework for Rate-Splitting
Multiple Access [53.191806757701215]
We propose the use of a meta-learning based precoder optimization framework to directly optimize the Rate-Splitting Multiple Access (RSMA) precoders with partial Channel State Information at the Transmitter (CSIT).
By exploiting the overfitting of the compact neural network to maximize the explicit Average Sum-Rate (ASR) expression, we effectively bypass the need for any other training data while minimizing the total running time.
Numerical results reveal that the meta-learning based solution achieves similar ASR performance to conventional precoder optimization in medium-scale scenarios, and significantly outperforms sub-optimal low-complexity precoder algorithms in the large-scale regime.
arXiv Detail & Related papers (2023-07-17T20:31:41Z) - DESTRESS: Computation-Optimal and Communication-Efficient Decentralized
Nonconvex Finite-Sum Optimization [43.31016937305845]
Internet-of-things, networked sensing, autonomous systems and federated learning call for decentralized algorithms for finite-sum optimizations.
We develop DEcentralized STochastic REcurSive gradient methodS (DESTRESS) for nonconvex finite-sum optimization.
Detailed theoretical and numerical comparisons show that DESTRESS improves upon prior decentralized algorithms.
arXiv Detail & Related papers (2021-10-04T03:17:41Z) - Outlier-Robust Sparse Estimation via Non-Convex Optimization [73.18654719887205]
We explore the connection between high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
arXiv Detail & Related papers (2021-09-23T17:38:24Z) - JUMBO: Scalable Multi-task Bayesian Optimization using Offline Data [86.8949732640035]
We propose JUMBO, an MBO algorithm that sidesteps limitations by querying additional data.
We show that it achieves no-regret under conditions analogous to GP-UCB.
Empirically, we demonstrate significant performance improvements over existing approaches on two real-world optimization problems.
arXiv Detail & Related papers (2021-06-02T05:03:38Z) - Escaping Poor Local Minima in Large Scale Robust Estimation [41.304283715031204]
We introduce two novel approaches for robust parameter estimation.
The first algorithm uses an adaptive kernel scaling strategy that enjoys a strong ability to escape poor minima.
The second algorithm combines a generalized Majorization Minimization framework with the half-quadratic lifting formulation to obtain a simple yet efficient solver.
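The adaptive kernel scaling idea can be illustrated with a generic annealed-scale IRLS sketch: a large kernel scale makes the robust loss nearly quadratic and hard to trap in poor minima, and shrinking the scale then sharpens outlier rejection. The Geman-McClure kernel and the fixed annealing schedule below are assumptions for illustration, not the adaptive strategy of the paper.

```python
import numpy as np

def robust_line_fit(x, y, scales=(10.0, 3.0, 1.0, 0.3, 0.1), iters=20):
    """Robust line fit via IRLS with an annealed Geman-McClure kernel scale."""
    A = np.stack([x, np.ones_like(x)], axis=1)
    theta = np.linalg.lstsq(A, y, rcond=None)[0]      # ordinary LS warm start
    for s in scales:                                  # large -> small scale
        for _ in range(iters):
            r = A @ theta - y
            w = (s**2 / (s**2 + r**2)) ** 2           # Geman-McClure IRLS weights
            sw = np.sqrt(w)
            theta = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]
    return theta

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, 200)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=200)
y[:60] += rng.uniform(5, 20, 60)                      # 30% gross outliers
print(robust_line_fit(x, y))                          # close to [2.0, 1.0]
```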
arXiv Detail & Related papers (2021-02-22T11:58:29Z) - Pose Correction Algorithm for Relative Frames between Keyframes in SLAM [20.579218922577244]
Relative frame poses between keyframes have typically been sacrificed for a faster algorithm to achieve online applications.
This paper proposes a novel algorithm to correct the relative frames between keyframes after the keyframes and landmarks have been updated.
The proposed algorithm is designed to be easily integrable into existing keyframe-based SLAM systems.
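A minimal sketch of the general re-anchoring idea (not the paper's correction algorithm): ordinary frames are stored relative to their reference keyframe, so once pose-graph optimization moves the keyframe, each frame is corrected with a single composition rather than re-estimated.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous SE(2) pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

# World poses before optimization: one keyframe and two ordinary frames.
T_kf = se2(1.0, 0.0, 0.10)
frames = [se2(1.2, 0.05, 0.12), se2(1.4, 0.10, 0.15)]

# Express each ordinary frame relative to its reference keyframe.
rel = [np.linalg.inv(T_kf) @ T for T in frames]

# Pose-graph optimization (e.g. after a loop closure) moves the keyframe...
T_kf_opt = se2(0.9, -0.08, 0.07)

# ...and each ordinary frame is corrected with a single composition.
frames_corrected = [T_kf_opt @ R for R in rel]
print(frames_corrected[0][:2, 2])    # corrected position of the first frame
```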
arXiv Detail & Related papers (2020-09-18T09:59:10Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of Automatic Primary Response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)