Semisupervised Manifold Alignment of Multimodal Remote Sensing Images
- URL: http://arxiv.org/abs/2104.07803v1
- Date: Thu, 15 Apr 2021 22:20:31 GMT
- Title: Semisupervised Manifold Alignment of Multimodal Remote Sensing Images
- Authors: Devis Tuia, Michele Volpi, Maxime Trolliet, Gustau Camps-Valls
- Abstract summary: We introduce a method for manifold alignment of different modalities (or domains) of remote sensing images.
The proposed semisupervised manifold alignment (SS-MA) method aligns the images working directly on their manifolds.
We study the performance of SS-MA in toy examples and in real multiangular, multitemporal, and multisource image classification problems.
- Score: 12.833370786407668
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We introduce a method for manifold alignment of different modalities (or
domains) of remote sensing images. The problem is recurrent when a set of
multitemporal, multisource, multisensor and multiangular images is available.
In these situations, images should ideally be spatially coregistred, corrected
and compensated for differences in the image domains. Such procedures require
the interaction of the user, involve tuning of many parameters and heuristics,
and are usually applied separately. Changes of sensors and acquisition
conditions translate into shifts, twists, warps and foldings of the image
distributions (or manifolds). The proposed semisupervised manifold alignment
(SS-MA) method aligns the images working directly on their manifolds, and is
thus not restricted to images of the same resolutions, either spectral or
spatial. SS-MA pulls close together samples of the same class while pushing
those of different classes apart. At the same time, it preserves the geometry
of each manifold along the transformation. The method builds a linear
invertible transformation to a latent space where all images are alike, and
reduces to solving a generalized eigenproblem of moderate size. We study the
performance of SS-MA in toy examples and in real multiangular, multitemporal,
and multisource image classification problems. The method performs well for
strong deformations and leads to accurate classification for all domains.
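The reduction to a generalized eigenproblem described in the abstract can be sketched in a few lines. The following is an illustrative toy reconstruction, not the paper's exact formulation: the k-NN geometry graph, the binary class graphs, and the `mu` trade-off weight are all assumptions made for the sketch.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Two toy 2-class domains; domain 2 is a rotated, scaled copy of domain 1.
n = 40
X1 = np.vstack([rng.normal(0.0, 0.3, (n, 2)), rng.normal(2.0, 0.3, (n, 2))])
y = np.r_[np.zeros(n, dtype=int), np.ones(n, dtype=int)]
t = np.pi / 4
R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
X2 = 1.5 * X1 @ R.T

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

def knn_graph(X, k=5):
    D = cdist(X, X)
    W = np.zeros_like(D)
    for i, js in enumerate(np.argsort(D, axis=1)[:, 1:k + 1]):
        W[i, js] = 1.0
    return np.maximum(W, W.T)  # symmetrize

# Block-diagonal data matrix: one block per domain, so domains may have
# different dimensionalities or resolutions.
m = X1.shape[0]
Z = np.block([[X1.T, np.zeros((2, m))],
              [np.zeros((2, m)), X2.T]])
yy = np.r_[y, y]

# Same-class (pull) and different-class (push) Laplacians over all labeled
# samples, plus a block-diagonal geometry Laplacian (k-NN graph per domain).
Ws = (yy[:, None] == yy[None, :]).astype(float)
Ls, Ld = laplacian(Ws), laplacian(1.0 - Ws)
Lg = np.block([[laplacian(knn_graph(X1)), np.zeros((m, m))],
               [np.zeros((m, m)), laplacian(knn_graph(X2))]])

# Generalized eigenproblem: minimize geometry distortion plus same-class
# spread, relative to different-class spread. mu (assumed) balances terms.
mu = 1.0
A = Z @ (Lg + mu * Ls) @ Z.T
B = Z @ Ld @ Z.T + 1e-6 * np.eye(Z.shape[0])  # small ridge keeps B definite
vals, vecs = eigh(A, B)                       # eigenvalues in ascending order

# Rows of the eigenvector matrix give one linear projection per domain,
# mapping each image into the shared latent space.
F1, F2 = vecs[:2, :], vecs[2:, :]
P1, P2 = X1 @ F1, X2 @ F2
print(P1.shape, P2.shape)  # (80, 4) (80, 4)
```

The eigenproblem is only as large as the summed input dimensionalities (here 4x4), which matches the abstract's claim of a "generalized eigenproblem of moderate size" regardless of how many samples are used.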
Related papers
- A Global Depth-Range-Free Multi-View Stereo Transformer Network with Pose Embedding [76.44979557843367]
We propose a novel multi-view stereo (MVS) framework that gets rid of the depth range prior.
We introduce a Multi-view Disparity Attention (MDA) module to aggregate long-range context information.
We explicitly estimate the quality of the current pixel corresponding to sampled points on the epipolar line of the source image.
arXiv Detail & Related papers (2024-11-04T08:50:16Z)
- Cross-Domain Separable Translation Network for Multimodal Image Change Detection [11.25422609271201]
Multimodal change detection (MCD) is particularly critical in the remote sensing community.
This paper focuses on addressing the challenges of MCD, especially the difficulty in comparing images from different sensors.
A novel unsupervised cross-domain separable translation network (CSTN) is proposed to overcome these limitations.
arXiv Detail & Related papers (2024-07-23T03:56:02Z)
- RecDiffusion: Rectangling for Image Stitching with Diffusion Models [53.824503710254206]
We introduce a novel diffusion-based learning framework, RecDiffusion, for image stitching rectangling.
This framework combines Motion Diffusion Models (MDM) to generate motion fields, effectively transitioning from the stitched image's irregular borders to a geometrically corrected intermediary.
arXiv Detail & Related papers (2024-03-28T06:22:45Z)
- Smooth image-to-image translations with latent space interpolations [64.8170758294427]
Multi-domain image-to-image (I2I) translations can transform a source image according to the style of a target domain.
We show that our regularization techniques can improve the state-of-the-art I2I translations by a large margin.
arXiv Detail & Related papers (2022-10-03T11:57:30Z)
- Polymorphic-GAN: Generating Aligned Samples across Multiple Domains with Learned Morph Maps [94.10535575563092]
We introduce a generative adversarial network that can simultaneously generate aligned image samples from multiple related domains.
We propose Polymorphic-GAN which learns shared features across all domains and a per-domain morph layer to morph shared features according to each domain.
arXiv Detail & Related papers (2022-06-06T21:03:02Z)
- Automatic Registration of Images with Inconsistent Content Through Line-Support Region Segmentation and Geometrical Outlier Removal [17.90609572352273]
This paper proposes an automatic image registration approach through line-support region segmentation and geometrical outlier removal (ALRS-GOR).
It is designed to address the problems associated with the registration of images with affine deformations and inconsistent content.
Various image sets have been considered for the evaluation of the proposed approach, including aerial images with simulated affine deformations.
arXiv Detail & Related papers (2022-04-02T10:47:16Z)
- The Geometry of Deep Generative Image Models and its Applications [0.0]
Generative adversarial networks (GANs) have emerged as a powerful unsupervised method to model the statistical patterns of real-world data sets.
These networks are trained to map random inputs in their latent space to new samples representative of the learned data.
The structure of the latent space is hard to intuit due to its high dimensionality and the non-linearity of the generator.
arXiv Detail & Related papers (2021-01-15T07:57:33Z)
- Multi-temporal and multi-source remote sensing image classification by nonlinear relative normalization [17.124438150480326]
We study a methodology that aligns data from different domains in a nonlinear way through kernelization.
We successfully test KEMA in multi-temporal and multi-source very high resolution classification tasks, as well as on the task of making a model invariant to shadowing for hyperspectral imaging.
arXiv Detail & Related papers (2020-12-07T08:46:11Z)
- SMILE: Semantically-guided Multi-attribute Image and Layout Editing [154.69452301122175]
Attribute image manipulation has been a very active topic since the introduction of Generative Adversarial Networks (GANs).
We present a multimodal representation that handles all attributes, be it guided by random noise or images, while only using the underlying domain information of the target domain.
Our method is capable of adding, removing or changing either fine-grained or coarse attributes by using an image as a reference or by exploring the style distribution space.
arXiv Detail & Related papers (2020-10-05T20:15:21Z)
- Unifying Specialist Image Embedding into Universal Image Embedding [84.0039266370785]
It is desirable to have a universal deep embedding model applicable to various domains of images.
We propose to distill the knowledge in multiple specialists into a universal embedding to solve this problem.
arXiv Detail & Related papers (2020-03-08T02:51:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.