A Revisit of the Normalized Eight-Point Algorithm and A Self-Supervised
Deep Solution
- URL: http://arxiv.org/abs/2304.10771v3
- Date: Tue, 16 Jan 2024 01:58:56 GMT
- Title: A Revisit of the Normalized Eight-Point Algorithm and A Self-Supervised
Deep Solution
- Authors: Bin Fan, Yuchao Dai, Yongduek Seo, Mingyi He
- Abstract summary: We revisit the normalized eight-point algorithm and show the existence of different and better normalization algorithms.
We introduce a deep convolutional neural network with a self-supervised learning strategy for normalization.
Our learning-based normalization module can be integrated with both traditional (e.g., RANSAC) and deep learning frameworks.
- Score: 45.10109739084541
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The normalized eight-point algorithm has been widely viewed as the
cornerstone in two-view geometry computation, where the seminal Hartley's
normalization has greatly improved the performance of the direct linear
transformation algorithm. A natural question is whether other normalization
methods exist that may further improve performance for each input sample, and
how to find them. In this paper, we provide a novel perspective and
propose two contributions to this fundamental problem: 1) we revisit the
normalized eight-point algorithm and make a theoretical contribution by
showing the existence of different and better normalization algorithms; 2)
we introduce a deep convolutional neural network with a self-supervised
learning strategy for normalization. Given eight pairs of correspondences, our
network directly predicts the normalization matrices, thus learning to
normalize each input sample. Our learning-based normalization module can be
integrated with both traditional (e.g., RANSAC) and deep learning frameworks
(affording good interpretability) with minimal effort. Extensive experiments on
both synthetic and real images demonstrate the effectiveness of our proposed
approach.
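For context, the classical pipeline the paper revisits is the direct linear transformation (DLT) applied to Hartley-normalized correspondences. Below is a minimal NumPy sketch of that standard baseline, not of the paper's method; conceptually, the proposed network would replace the fixed hartley_normalize step with normalization matrices predicted per input sample.
```python
import numpy as np

def hartley_normalize(pts):
    """Translate the centroid to the origin and scale so the mean distance
    from the origin is sqrt(2). Returns the 3x3 matrix T and the normalized
    points in homogeneous coordinates."""
    centroid = pts.mean(axis=0)
    scale = np.sqrt(2) / np.linalg.norm(pts - centroid, axis=1).mean()
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return T, (T @ pts_h.T).T

def normalized_eight_point(pts1, pts2):
    """Estimate the fundamental matrix F (x2^T F x1 = 0) from >= 8
    correspondences with the classical normalized eight-point algorithm."""
    T1, p1 = hartley_normalize(pts1)
    T2, p2 = hartley_normalize(pts2)
    # One DLT row per correspondence: kron(x2, x1) . vec(F) = 0.
    A = np.stack([np.kron(q2, q1) for q1, q2 in zip(p1, p2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce the rank-2 constraint on F.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization so F applies to the original pixel coordinates.
    return T2.T @ F @ T1
```
In practice such an estimator is wrapped in a RANSAC loop over eight-point minimal samples; the paper's learned normalization module is intended to plug into exactly this kind of traditional pipeline as well as into deep two-view frameworks.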
Related papers
- Bayesian Design Principles for Frequentist Sequential Learning [11.421942894219901]
We develop a theory to optimize the frequentist regret for sequential learning problems.
We propose a novel optimization approach to generate "algorithmic beliefs" at each round.
We present a novel algorithm for multi-armed bandits that achieves the "best-of-all-worlds" empirical performance.
arXiv Detail & Related papers (2023-10-01T22:17:37Z) - A Compound Gaussian Least Squares Algorithm and Unrolled Network for
Linear Inverse Problems [1.283555556182245]
This paper develops two new approaches to solving linear inverse problems.
The first is an iterative algorithm that minimizes a regularized least squares objective function.
The second is a deep neural network that corresponds to an "unrolling" or "unfolding" of the iterative algorithm; a generic sketch of this unrolling construction appears after this list.
arXiv Detail & Related papers (2023-05-18T17:05:09Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable parts separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Learning Non-Vacuous Generalization Bounds from Optimization [8.294831479902658]
We present a simple yet non-vacuous generalization bound from the optimization perspective.
We achieve this goal by leveraging that the hypothesis set accessed by gradient algorithms is essentially fractal-like.
Numerical studies demonstrate that our approach is able to yield plausible generalization guarantees for modern neural networks.
arXiv Detail & Related papers (2022-06-09T08:59:46Z) - Path Regularization: A Convexity and Sparsity Inducing Regularization
for Parallel ReLU Networks [75.33431791218302]
We study the training problem of deep neural networks and introduce an analytic approach to unveil hidden convexity in the optimization landscape.
We consider a deep parallel ReLU network architecture, which also includes standard deep networks and ResNets as its special cases.
arXiv Detail & Related papers (2021-10-18T18:00:36Z) - SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients [99.13839450032408]
It is desirable to design a universal framework for adaptive algorithms to solve general problems.
In particular, our novel framework provides convergence support for adaptive methods in the non-convex setting.
arXiv Detail & Related papers (2021-06-15T15:16:28Z) - Meta-Regularization: An Approach to Adaptive Choice of the Learning Rate
in Gradient Descent [20.47598828422897]
We propose Meta-Regularization, a novel approach for the adaptive choice of the learning rate in first-order descent methods.
Our approach modifies the objective function by adding a regularization term, and casts the updating of the parameters and the learning rate as a joint process.
arXiv Detail & Related papers (2021-04-12T13:13:34Z) - Normalization Techniques in Training DNNs: Methodology, Analysis and
Application [111.82265258916397]
Normalization techniques are essential for accelerating the training and improving the generalization of deep neural networks (DNNs).
This paper reviews and comments on the past, present and future of normalization methods in the context of training.
arXiv Detail & Related papers (2020-09-27T13:06:52Z) - Optimization Theory for ReLU Neural Networks Trained with Normalization
Layers [82.61117235807606]
The success of deep neural networks is in part due to the use of normalization layers.
Our analysis shows how the introduction of normalization changes the optimization landscape and can enable faster convergence.
arXiv Detail & Related papers (2020-06-11T23:55:54Z)
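As an aside on the "unrolling" idea mentioned in the Compound Gaussian entry above, the sketch below shows the generic construction in PyTorch: a fixed number of gradient steps on a regularized least-squares objective is rewritten as network layers whose step sizes and regularization weights become learnable. The class name and default values are illustrative assumptions, not details taken from that paper.
```python
import torch
import torch.nn as nn

class UnrolledLeastSquares(nn.Module):
    """K gradient steps on 0.5*||A x - y||^2 + 0.5*lam*||x||^2, unrolled into
    layers with a learnable step size and regularization weight per layer.
    Generic illustration only; names and defaults are assumptions."""
    def __init__(self, num_layers=5):
        super().__init__()
        self.step = nn.Parameter(torch.full((num_layers,), 0.1))
        self.lam = nn.Parameter(torch.full((num_layers,), 0.01))

    def forward(self, A, y):
        x = torch.zeros(A.shape[1], device=A.device, dtype=A.dtype)
        for k in range(self.step.shape[0]):
            grad = A.t() @ (A @ x - y) + self.lam[k] * x  # gradient of the objective
            x = x - self.step[k] * grad                   # one unrolled iteration
        return x
```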