Explicit homography estimation improves contrastive self-supervised
learning
- URL: http://arxiv.org/abs/2101.04713v1
- Date: Tue, 12 Jan 2021 19:33:37 GMT
- Title: Explicit homography estimation improves contrastive self-supervised
learning
- Authors: David Torpey and Richard Klein
- Abstract summary: We propose a module that serves as an additional objective in the self-supervised contrastive learning paradigm.
We show that including this module to regress the parameters of an affine transformation or homography improves both performance and learning speed.
- Score: 0.30458514384586394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The typical contrastive self-supervised algorithm uses a similarity measure
in latent space as the supervision signal by contrasting positive and negative
images directly or indirectly. Although the utility of self-supervised
algorithms has improved recently, there are still bottlenecks hindering their
widespread use, such as the compute needed. In this paper, we propose a module
that serves as an additional objective in the self-supervised contrastive
learning paradigm. We show how the inclusion of this module to regress the
parameters of an affine transformation or homography, in addition to the
original contrastive objective, improves both performance and learning speed.
Importantly, we ensure that this module does not enforce invariance to the
various components of the affine transform, as this is not always ideal. We
demonstrate the effectiveness of the additional objective on two recent,
popular self-supervised algorithms. We perform an extensive experimental
analysis of the proposed method and show an improvement in performance for all
considered datasets. Further, we find that although both the general homography
and affine transformation are sufficient to improve performance and
convergence, the affine transformation performs better in all cases.
Related papers
- Affine transformation estimation improves visual self-supervised
learning [4.40560654491339]
We show that adding a module to constrain the representations to be predictive of an affine transformation improves the performance and efficiency of the learning process.
We perform experiments in various modern self-supervised models and see a performance improvement in all cases.
arXiv Detail & Related papers (2024-02-14T10:32:58Z)
- Deep Hashing via Householder Quantization [3.106177436374861]
Hashing is at the heart of large-scale image similarity search.
A common solution is to employ loss functions that combine a similarity learning term and a quantization penalty term.
We propose an alternative quantization strategy that decomposes the learning problem in two stages.
arXiv Detail & Related papers (2023-11-07T18:47:28Z)
- Object Representations as Fixed Points: Training Iterative Refinement
Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Towards a Unified Approach to Homography Estimation Using Image Features
and Pixel Intensities [0.0]
The homography matrix is a key component in various vision-based robotic tasks.
Traditionally, homography estimation algorithms are classified into feature- or intensity-based.
This paper proposes a new hybrid method that unifies both classes into a single nonlinear optimization procedure.
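For background, a homography is a 3x3 matrix that maps image points expressed in homogeneous coordinates. This minimal sketch of how one is applied is illustrative only, not the cited paper's hybrid estimation method:

```python
import numpy as np

def apply_homography(H, pts):
    """Warp 2-D points with a 3x3 homography via homogeneous coordinates.

    H: (3, 3) homography matrix; pts: (N, 2) array of points.
    Returns the warped (N, 2) points.
    """
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]             # back to Euclidean

# A pure translation by (2, 3), the simplest special case of a homography:
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
```

An affine transformation is the special case in which the last row of H is (0, 0, 1), which is why the two are treated as alternatives in the main paper above.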
arXiv Detail & Related papers (2022-02-20T02:47:05Z)
- Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework by a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z)
- Harnessing Heterogeneity: Learning from Decomposed Feedback in Bayesian
Modeling [68.69431580852535]
We introduce a novel GP regression to incorporate the subgroup feedback.
Our modified regression has provably lower variance -- and thus a more accurate posterior -- compared to previous approaches.
We execute our algorithm on two disparate social problems.
arXiv Detail & Related papers (2021-07-07T03:57:22Z)
- From Canonical Correlation Analysis to Self-supervised Graph Neural
Networks [99.44881722969046]
We introduce a conceptually simple yet effective model for self-supervised representation learning with graph data.
We optimize an innovative feature-level objective inspired by classical Canonical Correlation Analysis.
Our method performs competitively on seven public graph datasets.
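A feature-level objective inspired by classical Canonical Correlation Analysis typically aligns paired features while decorrelating distinct feature dimensions. The sketch below illustrates that general idea in numpy; it is an assumption-laden stand-in, not the paper's exact loss.

```python
import numpy as np

def cca_style_loss(z1, z2, lam=0.01):
    """CCA-flavoured objective: align paired features, decorrelate dimensions.

    z1, z2: (N, D) feature matrices for two views of the same N nodes.
    """
    n = z1.shape[0]
    z1 = (z1 - z1.mean(axis=0)) / z1.std(axis=0)   # standardise each dimension
    z2 = (z2 - z2.mean(axis=0)) / z2.std(axis=0)
    c = z1.T @ z2 / n                              # (D, D) cross-correlation
    invariance = np.sum((np.diag(c) - 1.0) ** 2)   # matched dims correlate fully
    decorrelation = np.sum((c - np.diag(np.diag(c))) ** 2)  # others do not
    return invariance + lam * decorrelation
```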
arXiv Detail & Related papers (2021-06-23T15:55:47Z)
- An Adaptive Framework for Learning Unsupervised Depth Completion [59.17364202590475]
We present a method to infer a dense depth map from a color image and associated sparse depth measurements.
We show that regularization and co-visibility are related via the fitness of the model to data and can be unified into a single framework.
arXiv Detail & Related papers (2021-06-06T02:27:55Z)
- Self-supervised Augmentation Consistency for Adapting Semantic
Segmentation [56.91850268635183]
We propose an approach to domain adaptation for semantic segmentation that is both practical and highly accurate.
We employ standard data augmentation techniques (photometric noise, flipping and scaling) and ensure consistency of the semantic predictions.
We achieve significant improvements of the state-of-the-art segmentation accuracy after adaptation, consistent both across different choices of the backbone architecture and adaptation scenarios.
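The consistency idea in this entry can be illustrated for the flipping augmentation: predictions on a flipped image, flipped back, should agree with predictions on the original. A minimal sketch of that check, illustrative rather than the paper's implementation:

```python
import numpy as np

def flip_consistency_loss(predict, image):
    """Consistency of per-pixel predictions under horizontal flipping.

    predict: callable mapping an (H, W) image to per-pixel class
    scores of shape (H, W, C). The prediction on the flipped image
    is flipped back before being compared with the original.
    """
    p = predict(image)
    p_back = predict(image[:, ::-1])[:, ::-1]  # undo the flip on the output
    return float(np.mean((p - p_back) ** 2))
```

A prediction function that is equivariant to flipping drives this loss to zero, which is the property the consistency term encourages during adaptation.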
arXiv Detail & Related papers (2021-04-30T21:32:40Z)
- Meta-Regularization: An Approach to Adaptive Choice of the Learning Rate
in Gradient Descent [20.47598828422897]
We propose Meta-Regularization, a novel approach for the adaptive choice of the learning rate in first-order gradient descent methods.
Our approach modifies the objective function by adding a regularization term, and casts the joint updating of the parameters and the learning rate as a single optimization process.
arXiv Detail & Related papers (2021-04-12T13:13:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.