Robust Multi-object Matching via Iterative Reweighting of the Graph
Connection Laplacian
- URL: http://arxiv.org/abs/2006.06658v2
- Date: Sat, 24 Oct 2020 20:54:12 GMT
- Title: Robust Multi-object Matching via Iterative Reweighting of the Graph
Connection Laplacian
- Authors: Yunpeng Shi, Shaohan Li and Gilad Lerman
- Abstract summary: We first clarify serious limitations of current methods as well as the inappropriateness of the standard iteratively reweighted least squares procedure.
In view of these limitations, we suggest a novel and more reliable iterative reweighting strategy that incorporates information from higher-order neighborhoods.
We demonstrate the superior performance of our procedure over state-of-the-art methods using both synthetic and real datasets.
- Score: 15.813217907813778
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an efficient and robust iterative solution to the multi-object
matching problem. We first clarify serious limitations of current methods as
well as the inappropriateness of the standard iteratively reweighted least
squares procedure. In view of these limitations, we suggest a novel and more
reliable iterative reweighting strategy that incorporates information from
higher-order neighborhoods by exploiting the graph connection Laplacian. We
demonstrate the superior performance of our procedure over state-of-the-art
methods using both synthetic and real datasets.
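The abstract names its key object, the graph connection Laplacian, without spelling it out. Below is a minimal illustrative sketch under stated assumptions, not the authors' algorithm: it scores each pairwise permutation against weighted length-2 paths (a crude proxy for higher-order neighborhood information), reweights edges accordingly, and assembles the weighted connection Laplacian, whose near-zero eigenspace encodes a consistent matching. The problem sizes, corruption model, and exp(-5r) reweighting rule are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_perm(d):
    """Uniformly random d x d permutation matrix."""
    return np.eye(d)[rng.permutation(d)]

# Toy instance: n objects with d keypoints each, ground-truth absolute
# permutations g_i, observed relative matches P_ij = g_i g_j^T, a fraction
# of which are replaced by random (corrupted) permutations.
n, d, p_corrupt = 12, 4, 0.2
g = [random_perm(d) for _ in range(n)]
P = {}
for i in range(n):
    for j in range(i + 1, n):
        clean = rng.random() > p_corrupt
        P[i, j] = g[i] @ g[j].T if clean else random_perm(d)
        P[j, i] = P[i, j].T

w = {e: 1.0 for e in P}  # edge weights, refined across iterations

for _ in range(10):
    new_w = {}
    for (i, j) in list(P):
        if i > j:
            continue
        # Residual of P_ij against length-2 paths i -> k -> j, averaged
        # under the current path weights: a crude stand-in for the
        # higher-order information the paper extracts from the Laplacian.
        num = den = 1e-12
        for k in range(n):
            if k in (i, j):
                continue
            r = np.abs(P[i, j] - P[i, k] @ P[k, j]).sum() / (2 * d)
            c = w[i, k] * w[k, j]
            num += c * r
            den += c
        new_w[i, j] = new_w[j, i] = np.exp(-5.0 * num / den)
    w = new_w

# Weighted graph connection Laplacian L = D - W: the (i, j) block of W is
# w_ij * P_ij, and D carries the scalar degrees on the block diagonal.
L = np.zeros((n * d, n * d))
for (i, j), Pij in P.items():
    L[i*d:(i+1)*d, j*d:(j+1)*d] = -w[i, j] * Pij
    L[i*d:(i+1)*d, i*d:(i+1)*d] += w[i, j] * np.eye(d)

# If the corrupted edges have been down-weighted, L has d near-zero
# eigenvalues whose eigenvectors encode a consistent absolute matching.
print("smallest eigenvalues:", np.round(np.linalg.eigvalsh(L)[:d + 1], 3))
```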
Related papers
- Optimizing Solution-Samplers for Combinatorial Problems: The Landscape
of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of DeepMatching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems including Max- and Min-Cut, Max-$k$-Bipartite-Matching, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis, we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Iterative Reweighted Least Squares Networks With Convergence Guarantees
for Solving Inverse Imaging Problems [12.487990897680422]
We present a novel optimization strategy for image reconstruction tasks under analysis-based image regularization.
We parameterize such regularizers using potential functions that correspond to weighted extensions of the $\ell_p^p$-vector and $\mathcal{S}_p^p$ Schatten-matrix quasi-norms.
We show that thanks to the convergence guarantees of our proposed minimization strategy, such optimization can be successfully performed with a memory-efficient implicit back-propagation scheme.
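For context, the classical IRLS scheme that this line of work extends can be sketched in a few lines. The example below is a plain, non-learned IRLS solver for an $\ell_p^p$-regularized analysis model, not the paper's network; the finite-difference operator, the regularization weight, and the $\varepsilon$-smoothing are assumptions for illustration.

```python
import numpy as np

def irls_analysis(A, y, D, lam=0.1, p=1.0, eps=1e-6, iters=50):
    """Minimize ||Ax - y||_2^2 + lam * ||Dx||_p^p by iteratively
    reweighted least squares: each iteration solves a weighted
    quadratic surrogate of the l_p^p analysis regularizer."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        z = D @ x
        # Majorize |z_i|^p by a quadratic with weights (z_i^2 + eps)^{(p-2)/2}.
        w = (z**2 + eps) ** ((p - 2) / 2)
        H = A.T @ A + lam * (p / 2) * D.T @ (w[:, None] * D)
        x = np.linalg.solve(H, A.T @ y)
    return x

# Toy demo: recover a piecewise-constant signal with a first-difference
# analysis operator (an assumed setup, not the paper's experiments).
rng = np.random.default_rng(1)
n = 60
A = rng.normal(size=(80, n)) / np.sqrt(80)
x_true = np.zeros(n); x_true[20:40] = 1.0
y = A @ x_true + 0.01 * rng.normal(size=80)
D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]  # first differences x[i+1] - x[i]
x_hat = irls_analysis(A, y, D, lam=0.05, p=1.0)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```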
arXiv Detail & Related papers (2023-08-10T17:59:46Z) - Towards Practical Robustness Auditing for Linear Regression [8.9598796481325]
We investigate algorithms that find, or disprove the existence of, small subsets of a dataset whose removal reverses the conclusions of a linear regression.
We make some headway on this challenge via a spectral algorithm using ideas drawn from recent innovations in algorithmic robust statistics.
We show that these methods largely outperform the state of the art and provide a useful robustness check for regression problems in a few dimensions.
arXiv Detail & Related papers (2023-07-30T20:47:36Z) - Re-basin via implicit Sinkhorn differentiation [9.827002225566073]
We propose a Sinkhorn re-basin network with the ability to obtain the transportation plan that better suits a given objective.
Unlike the current state of the art, our method is differentiable and therefore easy to adapt to any task within the deep learning domain.
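A hedged sketch of the underlying primitive may help: Sinkhorn-Knopp normalization turns a cost matrix into an approximately doubly stochastic transportation plan, which re-basin methods use to align neurons across networks. The toy weight-matrix alignment task and the regularization value below are assumptions, not the paper's setup.

```python
import numpy as np

def sinkhorn(cost, reg=0.1, iters=300):
    """Entropy-regularized transportation plan via Sinkhorn-Knopp:
    alternately rescale rows and columns of K = exp(-cost / reg)
    until the plan is approximately doubly stochastic."""
    K = np.exp(-cost / reg)
    u = np.ones(cost.shape[0])
    v = np.ones(cost.shape[1])
    for _ in range(iters):
        u = 1.0 / (K @ v)
        v = 1.0 / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy neuron-alignment demo: recover the permutation relating two copies
# of a weight matrix, as in model re-basin.
rng = np.random.default_rng(2)
W1 = rng.normal(size=(5, 8))
perm = rng.permutation(5)
W2 = W1[perm] + 0.01 * rng.normal(size=(5, 8))  # permuted, noisy copy
cost = -W1 @ W2.T                                # high similarity -> low cost
plan = sinkhorn(cost - cost.min())
print("recovered:", plan.argmax(axis=0), "true:", perm)
```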
arXiv Detail & Related papers (2022-12-22T21:25:06Z) - Vector-Valued Least-Squares Regression under Output Regularity
Assumptions [73.99064151691597]
We propose and analyse a reduced-rank method for solving least-squares regression problems with infinite dimensional output.
We derive learning bounds for our method and study the settings in which its statistical performance improves over the full-rank method.
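The paper treats infinite-dimensional outputs, which this sketch does not attempt; the finite-dimensional classical reduced-rank regression below only illustrates the rank-reduction idea: fit ordinary least squares, then project the fitted values onto their top principal output directions. Dimensions and noise level are assumptions.

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Rank-constrained least squares: fit OLS, then project the fitted
    values onto their top principal directions (classical RRR solution)."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    Y_hat = X @ B_ols
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    V = Vt[:rank].T            # top output directions
    return B_ols @ V @ V.T     # rank-constrained coefficient matrix

rng = np.random.default_rng(6)
n, p, q, r = 300, 10, 8, 2
X = rng.normal(size=(n, p))
B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, q))  # truly low rank
Y = X @ B_true + 0.1 * rng.normal(size=(n, q))
B_hat = reduced_rank_regression(X, Y, rank=r)
print("rank:", np.linalg.matrix_rank(B_hat),
      "error:", np.linalg.norm(B_hat - B_true))
```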
arXiv Detail & Related papers (2022-11-16T15:07:00Z) - Shuffled linear regression through graduated convex relaxation [12.614901374282868]
The shuffled linear regression problem aims to recover linear relationships in datasets where the correspondence between input and output is unknown.
This problem arises in a wide range of applications including survey data.
We propose a novel optimization algorithm for shuffled linear regression based on a posterior-maximizing objective function.
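The paper's graduated convex relaxation is not reproduced here, but a simple alternating heuristic illustrates the problem structure: fix the regression weights, restore the correspondence by matching the sorted orders of predictions and responses (optimal for a scalar response by the rearrangement inequality), then refit. This sketch can get stuck in bad local optima and may need random restarts; all problem parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=n)
y_shuffled = y[rng.permutation(n)]  # correspondence is lost

# Alternating heuristic (not the paper's graduated convex relaxation).
w = rng.normal(size=d)
for _ in range(50):
    order_pred = np.argsort(X @ w)
    order_obs = np.argsort(y_shuffled)
    y_aligned = np.empty(n)
    # Assign the i-th smallest response to the i-th smallest prediction.
    y_aligned[order_pred] = y_shuffled[order_obs]
    w = np.linalg.lstsq(X, y_aligned, rcond=None)[0]

print("estimated w:", np.round(w, 3), "true w:", w_true)
```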
arXiv Detail & Related papers (2022-09-30T17:33:48Z) - Solving weakly supervised regression problem using low-rank manifold
regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z) - Progressive Batching for Efficient Non-linear Least Squares [31.082253632197023]
Most improvements of the basic Gauss-Newton method target convergence guarantees or leverage the sparsity of the underlying problem structure for computational speedup.
Our work borrows ideas from both machine learning and statistics, and we present an approach for non-linear least squares that guarantees convergence while significantly reducing the required amount of computation.
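As a rough illustration of the progressive-batching idea (not the paper's exact scheme or step-size control), the sketch below runs damped Gauss-Newton on growing random batches of a non-linear curve-fitting problem: cheap, noisy steps early and accurate steps near convergence. The doubling schedule and damping constant are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Data: noisy exponential decay y = a * exp(b * t).
t = np.linspace(0.0, 4.0, 2000)
a_true, b_true = 2.0, -0.7
y = a_true * np.exp(b_true * t) + 0.02 * rng.normal(size=t.size)

def residual_and_jacobian(theta, idx):
    a, b = theta
    e = np.exp(b * t[idx])
    r = a * e - y[idx]                        # residuals on the batch
    J = np.column_stack([e, a * t[idx] * e])  # d r / d (a, b)
    return r, J

theta = np.array([1.0, -0.1])
batch = 50
while batch <= t.size:
    for _ in range(3):  # a few damped Gauss-Newton steps per batch level
        idx = rng.choice(t.size, size=batch, replace=False)
        r, J = residual_and_jacobian(theta, idx)
        theta = theta - np.linalg.solve(J.T @ J + 1e-8 * np.eye(2), J.T @ r)
    batch *= 2          # grow the batch as the iterates approach a solution

print("estimated (a, b):", np.round(theta, 3), "true:", (a_true, b_true))
```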
arXiv Detail & Related papers (2020-10-21T13:00:04Z) - Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm for semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
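For reference, the vanilla conditional-gradient (Frank-Wolfe) step that such frameworks build on, minus the variance reduction, smoothing, and constraint sampling the paper adds, looks as follows; the quadratic objective over the probability simplex is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
A = rng.normal(size=(n, n))
Q = A.T @ A / n               # convex quadratic f(x) = 0.5 x^T Q x - b^T x
b = rng.normal(size=n)

x = np.ones(n) / n            # start inside the probability simplex
for k in range(500):
    grad = Q @ x - b
    # Linear minimization oracle over the simplex: a vertex suffices.
    s = np.zeros(n)
    s[np.argmin(grad)] = 1.0
    gamma = 2.0 / (k + 2)     # classic step-size schedule
    x = (1 - gamma) * x + gamma * s  # convex combination stays feasible

print("objective:", 0.5 * x @ Q @ x - b @ x, " sum(x) =", x.sum())
```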
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z)