Fast geometric trim fitting using partial incremental sorting and
accumulation
- URL: http://arxiv.org/abs/2209.02034v1
- Date: Mon, 5 Sep 2022 16:14:22 GMT
- Title: Fast geometric trim fitting using partial incremental sorting and
accumulation
- Authors: Min Li and Laurent Kneip
- Abstract summary: We present an algorithmic contribution to improve the efficiency of robust trim-fitting in outlier-affected geometric regression problems.
The method relies on the quick sort algorithm, and we present two important insights.
Besides linear fitting problems, we demonstrate how the technique can be additionally applied to closed-form, non-linear energy minimization problems.
- Score: 29.115335340381765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an algorithmic contribution to improve the efficiency of robust
trim-fitting in outlier-affected geometric regression problems. The method
heavily relies on the quick sort algorithm, and we present two important
insights. First, partial sorting is sufficient for the incremental calculation
of the x-th percentile value. Second, the normal equations in linear fitting
problems may be updated incrementally by logging swap operations across the
x-th percentile boundary during sorting. Besides linear fitting problems, we
demonstrate how the technique can be additionally applied to closed-form,
non-linear energy minimization problems, thus enabling efficient trim fitting
under geometrically optimal objectives. We apply our method to two distinct
camera resectioning algorithms, and demonstrate highly efficient and reliable,
geometric trim fitting.
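The percentile-trimming idea can be illustrated with a minimal sketch (this is not the authors' incremental swap-logging implementation, and the function name and synthetic data are illustrative): NumPy's `np.partition` performs exactly the kind of partial sort the abstract describes, locating the x-th percentile residual without fully ordering the rest.

```python
import numpy as np

def trimmed_least_squares(A, b, trim_ratio=0.8, iters=10):
    """Iteratively re-trimmed least squares: keep the trim_ratio fraction
    of points with the smallest squared residuals and refit.
    np.partition does a partial sort (introselect), which is all that is
    needed to locate the percentile boundary among the residuals."""
    n = A.shape[0]
    k = int(trim_ratio * n)                       # number of inliers kept
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # initial full fit
    for _ in range(iters):
        r = (A @ x - b) ** 2
        thresh = np.partition(r, k - 1)[k - 1]    # k-th smallest residual
        mask = r <= thresh
        x = np.linalg.lstsq(A[mask], b[mask], rcond=None)[0]
    return x, mask

# Synthetic line fit y = 2t + 1 with 20% gross outliers
rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, 100)
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0 + 0.01 * rng.normal(size=100)
b[:20] += rng.uniform(5.0, 10.0, 20)
x, mask = trimmed_least_squares(A, b)
```

This naive version refits from scratch each iteration; the paper's contribution is to avoid exactly that, updating the normal equations incrementally from the swap operations logged across the percentile boundary.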
Related papers
- Fast sparse optimization via adaptive shrinkage [0.6226609932118122]
We develop a proximal method, based on logarithmic regularization, which turns out to be an iterative shrinkage-thresholding algorithm.
This adaptivity substantially enhances the trajectory of the algorithm, in a way that yields faster convergence.
We validate its fast convergence via numerical experiments and we discuss the performance with respect to state-of-the-art algorithms.
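As a point of reference for the shrinkage-thresholding family this paper builds on, a standard ISTA for the l1-regularized least-squares (lasso) problem can be sketched as follows; the adaptive logarithmic-regularization variant described above would replace the fixed soft-threshold (the function name and problem setup are illustrative, not from the paper):

```python
import numpy as np

def ista(A, b, lam=0.1, iters=500):
    """Plain ISTA for the lasso: min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Each step is a gradient step followed by soft-thresholding; an
    adaptive-shrinkage variant would adjust the threshold across
    iterations instead of keeping it fixed."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
    return x

# Sparse recovery: 3 active coefficients out of 20
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.5, -2.0, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=50)
x_hat = ista(A, b)
```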
arXiv Detail & Related papers (2025-01-21T15:58:21Z)
- Closing the Computational-Query Depth Gap in Parallel Stochastic Convex Optimization [26.36906884097317]
We develop a new parallel algorithm for minimizing Lipschitz, convex functions with a subgradient oracle.
Our result closes a gap between the best-known query depth and the best-known computational depth of parallel algorithms.
arXiv Detail & Related papers (2024-06-11T15:41:48Z)
- Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth components.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
- Accelerated nonlinear primal-dual hybrid gradient algorithms with applications to machine learning [0.0]
The primal-dual hybrid gradient (PDHG) algorithm is a first-order method that splits convex optimization problems with saddle-point structure into smaller subproblems.
PDHG requires stepsize parameters fine-tuned for the problem at hand.
We introduce accelerated nonlinear variants of the PDHG algorithm that achieve faster convergence for a broad class of optimization problems relevant to machine learning.
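For context, the baseline fixed-stepsize PDHG iteration can be sketched on a small saddle-point problem, 1-D total-variation denoising; this is the textbook Chambolle-Pock scheme, not the paper's accelerated nonlinear variant, and the function name and signal are illustrative:

```python
import numpy as np

def pdhg_tv1d(b, lam=1.0, iters=500):
    """Vanilla PDHG for 1-D total-variation denoising:
        min_x 0.5*||x - b||^2 + lam*||D x||_1,
    with D the forward-difference operator. The dual update projects
    onto the l-infinity ball of radius lam; the primal update is the
    proximal step of the quadratic data term."""
    n = len(b)
    x = b.copy()
    x_bar = x.copy()
    y = np.zeros(n - 1)                 # dual variable for D x
    tau = sigma = 0.25                  # tau*sigma*||D||^2 <= 1 (||D||^2 <= 4)
    for _ in range(iters):
        # dual ascent + projection
        y = np.clip(y + sigma * np.diff(x_bar), -lam, lam)
        # D^T y (negative divergence)
        div = np.zeros(n)
        div[:-1] -= y
        div[1:] += y
        # primal descent: prox of 0.5*||x - b||^2 with step tau
        x_new = (x - tau * div + tau * b) / (1.0 + tau)
        x_bar = 2.0 * x_new - x         # extrapolation step
        x = x_new
    return x

# Noisy piecewise-constant signal
rng = np.random.default_rng(2)
clean = np.concatenate([np.zeros(30), 3.0 * np.ones(30)])
b = clean + 0.3 * rng.normal(size=60)
x = pdhg_tv1d(b, lam=1.0)
```

The stepsizes tau and sigma are exactly the parameters the abstract notes must be fine-tuned per problem; the paper's nonlinear variants adapt them.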
arXiv Detail & Related papers (2021-09-24T22:37:10Z)
- Hybrid Trilinear and Bilinear Programming for Aligning Partially Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms that can align partially overlapping point sets and are invariant to the corresponding transformation, as in the robust point matching (RPM) algorithm.
We first show that the objective is a cubic polynomial function. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z)
- Aligning Partially Overlapping Point Sets: an Inner Approximation Algorithm [80.15123031136564]
We propose a robust method to align partially overlapping point sets.
Our algorithm does not need regularization on the transformation, and thus can handle situations where there is no prior information about its value.
Experimental results demonstrate the better robustness of the proposed method over state-of-the-art algorithms.
arXiv Detail & Related papers (2020-07-05T15:23:33Z)
- Effective Dimension Adaptive Sketching Methods for Faster Regularized Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
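A minimal sketch-and-solve illustration with a Gaussian embedding follows (the SRHT variant would replace the dense Gaussian matrix; the function name and problem sizes are illustrative, not from the paper):

```python
import numpy as np

def sketched_ridge(A, b, lam, m, rng):
    """Sketch-and-solve for min_x ||Ax - b||^2 + lam*||x||^2 using a
    Gaussian embedding S with m rows: solve the ridge normal equations
    on the compressed data (S A, S b) instead of (A, b)."""
    n, d = A.shape
    S = rng.normal(size=(m, n)) / np.sqrt(m)      # Gaussian embedding
    SA, Sb = S @ A, S @ b
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

rng = np.random.default_rng(3)
A = rng.normal(size=(2000, 10))
x_true = rng.normal(size=10)
b = A @ x_true + 0.1 * rng.normal(size=2000)
x_exact = np.linalg.solve(A.T @ A + np.eye(10), A.T @ b)  # full solve
x_sketch = sketched_ridge(A, b, lam=1.0, m=400, rng=rng)
```

The point of the sketch is that the expensive products involve only the m-row sketched matrix; choosing m relative to the effective dimension, rather than the ambient dimension, is the adaptivity the title refers to.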
arXiv Detail & Related papers (2020-06-10T15:00:09Z)
- SONIA: A Symmetric Blockwise Truncated Optimization Algorithm [2.9923891863939938]
This work presents a new algorithm for empirical risk minimization.
The algorithm bridges the gap between first- and second-order search methods by computing a second-order search-type update in one subspace, coupled with a scaled steepest descent step in the orthogonal complement.
arXiv Detail & Related papers (2020-06-06T19:28:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.