ICON: Learning Regular Maps Through Inverse Consistency
- URL: http://arxiv.org/abs/2105.04459v1
- Date: Mon, 10 May 2021 15:52:12 GMT
- Title: ICON: Learning Regular Maps Through Inverse Consistency
- Authors: Hastings Greer, Roland Kwitt, Francois-Xavier Vialard, Marc Niethammer
- Abstract summary: We explore what induces regularity for spatial transformations, e.g., when computing image registrations.
We find that deep networks combined with an inverse consistency loss and randomized off-grid interpolation yield well-behaved, approximately diffeomorphic, spatial transformations.
Despite the simplicity of this approach, our experiments present compelling evidence, on both synthetic and real data, that regular maps can be obtained without carefully tuned explicit regularizers, while achieving competitive registration performance.
- Score: 19.27928605302463
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning maps between data samples is fundamental. Applications range from
representation learning, image translation and generative modeling, to the
estimation of spatial deformations. Such maps relate feature vectors, or map
between feature spaces. Well-behaved maps should be regular, which can be
imposed explicitly or may emanate from the data itself. We explore what induces
regularity for spatial transformations, e.g., when computing image
registrations. Classical optimization-based models compute maps between pairs
of samples and rely on an appropriate regularizer for well-posedness. Recent
deep learning approaches have attempted to avoid using such regularizers
altogether by relying on the sample population instead. We explore if it is
possible to obtain spatial regularity using an inverse consistency loss only
and elucidate what explains map regularity in such a context. We find that deep
networks combined with an inverse consistency loss and randomized off-grid
interpolation yield well behaved, approximately diffeomorphic, spatial
transformations. Despite the simplicity of this approach, our experiments
present compelling evidence, on both synthetic and real data, that regular maps
can be obtained without carefully tuned explicit regularizers, while achieving
competitive registration performance.
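The inverse consistency idea above can be sketched as a Monte-Carlo penalty evaluated at randomized off-grid points: compose the predicted forward and backward maps and penalize their deviation from the identity. The callables `phi_ab` and `phi_ba` below are hypothetical stand-ins for a network's predicted transformations, not the authors' implementation.

```python
import numpy as np

def inverse_consistency_loss(phi_ab, phi_ba, n_samples=1024, dim=2, rng=None):
    """Monte-Carlo estimate of || phi_AB(phi_BA(x)) - x ||^2 at
    randomized off-grid sample locations in [0, 1]^dim.

    phi_ab, phi_ba: callables mapping an (n, dim) array of coordinates
    to warped coordinates (placeholders for predicted maps).
    """
    rng = np.random.default_rng(rng)
    # Randomized off-grid sample points (not tied to any voxel grid).
    x = rng.uniform(0.0, 1.0, size=(n_samples, dim))
    # Composing a map with its true inverse should return x exactly.
    residual = phi_ab(phi_ba(x)) - x
    return float(np.mean(np.sum(residual ** 2, axis=1)))

# Usage: an exact inverse pair yields a (numerically) zero loss.
shift = np.array([0.1, -0.05])
loss = inverse_consistency_loss(lambda x: x + shift, lambda x: x - shift)
```

For two independently predicted maps the loss is generally nonzero, and minimizing it pushes each map toward being the inverse of the other.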
Related papers
- Disentangled Representation Learning with the Gromov-Monge Gap [65.73194652234848]
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning.
We introduce a novel approach to disentangled representation learning based on quadratic optimal transport.
We demonstrate the effectiveness of our approach for quantifying disentanglement across four standard benchmarks.
arXiv Detail & Related papers (2024-07-10T16:51:32Z)
- Stochastic interpolants with data-dependent couplings [31.854717378556334]
We use the framework of interpolants to formalize how to couple the base and the target densities.
We show that these transport maps can be learned by solving a simple square loss regression problem analogous to the standard independent setting.
arXiv Detail & Related papers (2023-10-05T17:46:31Z)
- Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z)
- Spatially and Spectrally Consistent Deep Functional Maps [26.203493922746546]
Cycle consistency has long been exploited as a powerful prior for jointly optimizing maps within a collection of shapes.
In this paper, we investigate its utility in the approaches of Deep Functional Maps, which are considered state-of-the-art in non-rigid shape matching.
We present a novel design of unsupervised Deep Functional Maps, which effectively enforces the harmony of learned maps under the spectral and the point-wise representation.
arXiv Detail & Related papers (2023-08-17T09:04:44Z)
- Asynchronously Trained Distributed Topographic Maps [0.0]
We present an algorithm that uses $N$ autonomous units to generate a feature map by distributed training.
Unit autonomy is achieved by sparse interaction in time & space through the combination of a distributed search, and a cascade-driven weight updating scheme.
arXiv Detail & Related papers (2023-01-20T01:15:56Z)
- $\texttt{GradICON}$: Approximate Diffeomorphisms via Gradient Inverse Consistency [16.72466200341455]
We use a neural network to predict a map between a source and a target image as well as the map when swapping the source and target images.
We achieve state-of-the-art registration performance on a variety of real-world medical image datasets.
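The gradient inverse consistency idea behind GradICON can be sketched as penalizing the Jacobian of the composed forward and backward maps for deviating from the identity, rather than penalizing the displacement itself. The finite-difference estimator and the map callables below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gradient_inverse_consistency_loss(phi_ab, phi_ba, n_samples=256, dim=2,
                                      eps=1e-4, rng=None):
    """Estimate || D(phi_AB o phi_BA)(x) - I ||_F^2 at random points,
    using central finite differences to approximate the Jacobian of
    the composition. phi_ab, phi_ba are placeholder callables on
    (n, dim) coordinate arrays.
    """
    rng = np.random.default_rng(rng)
    x = rng.uniform(0.0, 1.0, size=(n_samples, dim))
    comp = lambda p: phi_ab(phi_ba(p))
    loss = 0.0
    for j in range(dim):
        e = np.zeros(dim)
        e[j] = eps
        # Column j of the Jacobian of the composition, by central differences.
        col = (comp(x + e) - comp(x - e)) / (2.0 * eps)
        target = np.zeros(dim)
        target[j] = 1.0  # j-th column of the identity matrix
        loss += np.mean(np.sum((col - target) ** 2, axis=1))
    return float(loss)

# An exact inverse pair composes to the identity, whose Jacobian is I,
# so the penalty is (numerically) zero.
shift = np.array([0.2, -0.1])
penalty = gradient_inverse_consistency_loss(lambda x: x + shift,
                                            lambda x: x - shift)
```

Penalizing the Jacobian rather than the displacement tolerates maps that are only approximately inverse pointwise, while still discouraging folding.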
arXiv Detail & Related papers (2022-06-13T04:03:49Z)
- Entangled Residual Mappings [59.02488598557491]
We introduce entangled residual mappings to generalize the structure of the residual connections.
An entangled residual mapping replaces the identity skip connections with specialized entangled mappings.
We show that while entangled mappings can preserve the iterative refinement of features across various deep models, they influence the representation learning process in convolutional networks.
arXiv Detail & Related papers (2022-06-02T19:36:03Z)
- Autoencoder Image Interpolation by Shaping the Latent Space [12.482988592988868]
Autoencoders represent an effective approach for computing the underlying factors characterizing datasets of different types.
We propose a regularization technique that shapes the latent representation to follow a manifold consistent with the training images.
arXiv Detail & Related papers (2020-08-04T12:32:54Z)
- Making Affine Correspondences Work in Camera Geometry Computation [62.7633180470428]
Local features provide region-to-region rather than point-to-point correspondences.
We propose guidelines for effective use of region-to-region matches in the course of a full model estimation pipeline.
Experiments show that affine solvers can achieve accuracy comparable to point-based solvers at faster run-times.
arXiv Detail & Related papers (2020-07-20T12:07:48Z)
- Attentive Normalization for Conditional Image Generation [126.08247355367043]
We characterize long-range dependence with attentive normalization (AN), which is an extension to traditional instance normalization.
Compared with self-attention GAN, our attentive normalization does not need to measure the correlation of all locations.
Experiments on class-conditional image generation and semantic inpainting verify the efficacy of our proposed module.
arXiv Detail & Related papers (2020-04-08T06:12:25Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.