Escaping Poor Local Minima in Large Scale Robust Estimation
- URL: http://arxiv.org/abs/2102.10928v1
- Date: Mon, 22 Feb 2021 11:58:29 GMT
- Title: Escaping Poor Local Minima in Large Scale Robust Estimation
- Authors: Huu Le and Christopher Zach
- Abstract summary: We introduce two novel approaches for robust parameter estimation.
The first algorithm uses an adaptive kernel scaling strategy that enjoys a strong ability to escape poor minima.
The second algorithm combines a generalized Majorization Minimization framework with the half-quadratic lifting formulation to obtain a simple yet efficient solver.
- Score: 41.304283715031204
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Robust parameter estimation is a crucial task in several 3D computer vision
pipelines such as Structure from Motion (SfM). State-of-the-art algorithms for
robust estimation, however, still suffer from difficulties in converging to
satisfactory solutions due to the presence of many poor local minima or flat
regions in the optimization landscapes. In this paper, we introduce two novel
approaches for robust parameter estimation. The first algorithm utilizes the
Filter Method (FM), which is a framework for constrained optimization allowing
great flexibility in algorithmic choices, to derive an adaptive kernel scaling
strategy that enjoys a strong ability to escape poor minima and achieves fast
convergence rates. Our second algorithm combines a generalized Majorization
Minimization (GeMM) framework with the half-quadratic lifting formulation to
obtain a simple yet efficient solver for robust estimation. We empirically show
that both proposed approaches exhibit an encouraging ability to avoid poor
local minima and achieve competitive results compared to existing
state-of-the-art robust fitting algorithms.
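To make the two ideas above concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of how half-quadratic lifting and a graduated kernel scaling loop typically combine in an IRLS-style robust solver. The Welsch kernel, the fixed geometric scale schedule, and all names and parameters here are illustrative assumptions; in particular, the paper's Filter Method adapts the kernel scale, whereas this sketch simply anneals it.

```python
import numpy as np

def welsch_weight(r, sigma):
    # Half-quadratic (multiplicative-form) weight for the Welsch kernel
    # rho_sigma(r) = (sigma^2 / 2) * (1 - exp(-r^2 / sigma^2)):
    # the lifted subproblem min_w (0.5 * w * r^2 + psi(w)) is solved in
    # closed form by w = exp(-r^2 / sigma^2).
    return np.exp(-(r / sigma) ** 2)

def irls_robust_fit(A, b, sigma, x0, iters=50):
    # IRLS for min_x sum_i rho_sigma(a_i^T x - b_i): alternate between the
    # closed-form weight update and a weighted least-squares solve in x.
    x = x0.copy()
    for _ in range(iters):
        r = A @ x - b
        w = welsch_weight(r, sigma)              # half-quadratic weight update
        AW = A * w[:, None]                      # row-scaled design matrix W A
        x = np.linalg.solve(A.T @ AW, AW.T @ b)  # (A^T W A) x = A^T W b
    return x

def graduated_robust_fit(A, b, x0, sigma0=10.0, sigma_min=1.0, decay=0.5):
    # Graduated kernel scaling: begin with a large scale (a smoothed,
    # nearly quadratic landscape with few spurious minima), solve, then
    # shrink the scale and warm-start from the previous solution. A fixed
    # geometric decay stands in for the paper's adaptive, filter-driven
    # schedule.
    x, sigma = x0.copy(), sigma0
    while sigma >= sigma_min:
        x = irls_robust_fit(A, b, sigma, x)
        sigma *= decay
    return x

# Toy usage: robust line fitting with ~30% gross outliers.
rng = np.random.default_rng(0)
A = np.column_stack([rng.uniform(-5.0, 5.0, 200), np.ones(200)])
x_true = np.array([2.0, -1.0])
b = A @ x_true + 0.1 * rng.standard_normal(200)
outliers = rng.random(200) < 0.3
b[outliers] += rng.uniform(-20.0, 20.0, outliers.sum())
x_est = graduated_robust_fit(A, b, x0=np.zeros(2))  # close to x_true
```

The key mechanism is the half-quadratic step: a robust kernel $\rho$ admits a lifted form $\rho(r) = \min_w \left( \frac{1}{2} w r^2 + \psi(w) \right)$ with a closed-form minimizing weight, so alternating between the weight update and a weighted least-squares solve in $x$ monotonically decreases the robust cost.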
Related papers
- Alternating Minimization Schemes for Computing Rate-Distortion-Perception Functions with $f$-Divergence Perception Constraints [10.564071872770146]
We study the computation of the rate-distortion-perception function (RDPF) for discrete memoryless sources.
We characterize the optimal parametric solutions.
We provide sufficient conditions on the distortion and the perception constraints.
arXiv Detail & Related papers (2024-08-27T12:50:12Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Data-Driven Minimax Optimization with Expectation Constraints [9.373649142701803]
We propose a class of efficient primal-dual algorithms to tackle the minimax expectation-constrained problem.
We show that our algorithms converge at the optimal rate of $\mathcal{O}(\frac{1}{\sqrt{N}})$.
arXiv Detail & Related papers (2022-02-16T05:23:27Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
- Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization [64.26238893241322]
Simple algorithms have been shown to lead to good empirical results in many contexts.
Several works have pursued rigorous analytical justification for this behavior in nonconvex optimization problems.
A key insight in these analyses is that random perturbations play a critical role in allowing local descent algorithms to escape saddle points (see the sketch after this list).
arXiv Detail & Related papers (2020-03-31T16:54:22Z)
- A Graduated Filter Method for Large Scale Robust Estimation [32.08441889054456]
We introduce a novel solver for robust estimation that possesses a strong ability to escape poor local minima.
Our algorithm is built upon graduated optimization techniques, which are among the state-of-the-art methods for solving problems that have many poor local minima.
arXiv Detail & Related papers (2020-03-20T02:51:31Z)
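As a companion to the Second-Order Guarantees entry above, here is a minimal sketch of perturbed gradient descent, the standard mechanism by which random perturbations let a local descent method escape strict saddle points. The test function, step size, perturbation radius, and trigger condition are illustrative assumptions, not that paper's algorithm.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, lr=0.05, eps=1e-3,
                               radius=0.1, steps=500, rng=None):
    # Plain gradient descent, except that whenever the gradient is nearly
    # zero (a candidate stationary point) the iterate is kicked by a small
    # random perturbation. At a strict saddle the kick lands on a descent
    # direction with high probability; at a local minimum, descent simply
    # pulls the iterate back.
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    for _ in range(steps):
        g = grad(x)
        if np.linalg.norm(g) < eps:
            x = x + radius * rng.standard_normal(x.shape)
        else:
            x = x - lr * g
    return x

# Toy landscape: f(x, y) = x^2 - y^2 + y^4 / 2 has a strict saddle at the
# origin and two minima at (0, +/-1). Started exactly at the saddle, plain
# gradient descent would stay put forever; the perturbation escapes it.
grad = lambda p: np.array([2.0 * p[0], -2.0 * p[1] + 2.0 * p[1] ** 3])
x = perturbed_gradient_descent(grad, x0=np.array([0.0, 0.0]))
# x ends up close to one of the two minima at (0, +/-1).
```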
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.