Unsupervised Manifold Alignment with Joint Multidimensional Scaling
- URL: http://arxiv.org/abs/2207.02968v1
- Date: Wed, 6 Jul 2022 21:02:42 GMT
- Title: Unsupervised Manifold Alignment with Joint Multidimensional Scaling
- Authors: Dexiong Chen, Bowen Fan, Carlos Oliver, Karsten Borgwardt
- Abstract summary: We introduce Joint Multidimensional Scaling, which maps datasets from two different domains to a common low-dimensional Euclidean space.
Our approach integrates Multidimensional Scaling (MDS) and Wasserstein Procrustes analysis into a joint optimization problem.
We demonstrate the effectiveness of our approach in several applications, including joint visualization of two datasets, unsupervised heterogeneous domain adaptation, graph matching, and protein structure alignment.
- Score: 4.683612295430957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Joint Multidimensional Scaling, a novel approach for
unsupervised manifold alignment, which maps datasets from two different
domains, without any known correspondences between data instances across the
datasets, to a common low-dimensional Euclidean space. Our approach integrates
Multidimensional Scaling (MDS) and Wasserstein Procrustes analysis into a joint
optimization problem to simultaneously generate isometric embeddings of data
and learn correspondences between instances from two different datasets, while
only requiring intra-dataset pairwise dissimilarities as input. This unique
characteristic makes our approach applicable to settings where input features
are unavailable, such as the inexact graph matching problem. We propose an
alternating optimization scheme that fully benefits from existing optimization
techniques for MDS and Wasserstein Procrustes. We
demonstrate the effectiveness of our approach in several applications,
including joint visualization of two datasets, unsupervised heterogeneous
domain adaptation, graph matching, and protein structure alignment.
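To make the alternating scheme concrete, here is a minimal sketch in Python. It embeds each dataset once with SMACOF-style MDS and then alternates Sinkhorn soft correspondences with an orthogonal Procrustes update, a simplified variant of the joint optimization; the use of scikit-learn's MDS and POT's sinkhorn is an illustrative assumption, not the authors' implementation.

```python
# Simplified sketch of the alternation described in the abstract: MDS
# embeddings from intra-dataset dissimilarities, then repeated
# Sinkhorn + orthogonal Procrustes alignment. Not the authors' code.
import numpy as np
import ot  # POT: Python Optimal Transport
from scipy.linalg import orthogonal_procrustes
from sklearn.manifold import MDS

def jmds_sketch(D_x, D_y, dim=2, n_outer=10, eps=0.05):
    n, m = D_x.shape[0], D_y.shape[0]
    mds = MDS(n_components=dim, dissimilarity="precomputed")
    X = mds.fit_transform(D_x)  # isometric embedding of dataset 1
    Y = mds.fit_transform(D_y)  # isometric embedding of dataset 2
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    for _ in range(n_outer):
        # soft correspondences: entropic OT plan between current embeddings
        P = ot.sinkhorn(a, b, ot.dist(X, Y), reg=eps)
        # orthogonal map aligning X with the barycentric projection of Y
        Q, _ = orthogonal_procrustes(X, n * (P @ Y))
        X = X @ Q
    return X, Y, P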
Related papers
- Fast and Scalable Semi-Supervised Learning for Multi-View Subspace Clustering [13.638434337947302]
FSSMSC addresses the high computational complexity common to existing approaches.
The method generates a consensus anchor graph across all views, representing each data point as a sparse linear combination of chosen landmarks.
The effectiveness and efficiency of FSSMSC are validated through extensive experiments on multiple benchmark datasets of varying scales.
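As a rough sketch of the anchor-graph idea above, the snippet below builds a generic anchor graph: k-means landmarks and Gaussian weights over each point's nearest anchors. These are common default choices and are not claimed to match FSSMSC's exact construction.

```python
# Generic anchor-graph construction: each point becomes a sparse convex
# combination of its k nearest landmarks. k-means anchors and Gaussian
# weights are common defaults, not necessarily FSSMSC's choices.
import numpy as np
from sklearn.cluster import KMeans

def anchor_graph(X, n_anchors=50, k=5, sigma=1.0):
    anchors = KMeans(n_clusters=n_anchors, n_init=10).fit(X).cluster_centers_
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)  # squared dists
    Z = np.zeros((X.shape[0], n_anchors))
    for i in range(X.shape[0]):
        nn = np.argsort(d2[i])[:k]                 # k nearest anchors
        w = np.exp(-d2[i, nn] / (2 * sigma ** 2))  # Gaussian weights
        Z[i, nn] = w / w.sum()                     # sparse row, sums to 1
    return Z, anchors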
arXiv Detail & Related papers (2024-08-11T06:54:00Z)
- Entropic Optimal Transport Eigenmaps for Nonlinear Alignment and Joint Embedding of High-Dimensional Datasets [11.105392318582677]
We propose a principled approach for aligning and jointly embedding a pair of datasets with theoretical guarantees.
Our approach leverages the leading singular vectors of the EOT plan matrix between two datasets to extract their shared underlying structure.
We show that in a high-dimensional regime, the EOT plan recovers the shared manifold structure by approximating a kernel function evaluated at the locations of the latent variables.
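A minimal sketch of this pipeline, assuming POT's sinkhorn for the entropic plan and a plain SVD for the spectral step; skipping the near-trivial leading singular pair and scaling by singular values are illustrative choices, not the paper's exact construction.

```python
# Sketch of EOT eigenmaps: entropic OT plan between two datasets, then a
# joint embedding from the leading nontrivial singular vectors of the plan.
import numpy as np
import ot  # POT: Python Optimal Transport

def eot_embed(X, Y, dim=2, eps=0.1):
    n, m = len(X), len(Y)
    P = ot.sinkhorn(np.full(n, 1.0 / n), np.full(m, 1.0 / m),
                    ot.dist(X, Y), reg=eps)  # entropic OT plan matrix
    U, s, Vt = np.linalg.svd(P)
    # the leading pair is near-constant for a balanced plan, so skip it;
    # the next singular vectors carry the shared structure
    return U[:, 1:dim + 1] * s[1:dim + 1], Vt[1:dim + 1].T * s[1:dim + 1]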
arXiv Detail & Related papers (2024-07-01T18:48:55Z)
- AlignMiF: Geometry-Aligned Multimodal Implicit Field for LiDAR-Camera Joint Synthesis [98.3959800235485]
Several recent methods explore multiple modalities within a single field, aiming to share implicit features across modalities to enhance reconstruction performance.
In this work, we conduct comprehensive analyses on the multimodal implicit field of LiDAR-camera joint synthesis, revealing the underlying issue lies in the misalignment of different sensors.
We introduce AlignMiF, a geometrically aligned multimodal implicit field with two proposed modules: Geometry-Aware Alignment (GAA) and Shared Geometry Initialization (SGI).
arXiv Detail & Related papers (2024-02-27T13:08:47Z)
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit dimensionality reduction (DR) and clustering through the lens of optimal transport and exhibit their relationship with the Gromov-Wasserstein problem.
This unveils a general framework, called distributional reduction, that recovers DR and clustering as special cases and addresses them jointly within a single optimization problem.
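The coupling at the heart of this framework can be illustrated with a toy Gromov-Wasserstein problem between a dataset and a small low-dimensional support; the joint optimization over the reduced support, which the paper performs, is omitted in this sketch.

```python
# Toy Gromov-Wasserstein coupling between a dataset and a small set of
# "reduced" points; reading the coupling as cluster assignments hints at
# how DR and clustering meet in one transport problem. Uses POT.
import numpy as np
import ot
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))          # high-dimensional data
Z = rng.normal(size=(5, 2))             # fixed low-dimensional support
C1, C2 = cdist(X, X), cdist(Z, Z)       # intra-space geometries
p, q = np.full(100, 1 / 100), np.full(5, 1 / 5)
T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun="square_loss")
labels = T.argmax(axis=1)               # coupling read as cluster labels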
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- One for all: A novel Dual-space Co-training baseline for Large-scale Multi-View Clustering [42.92751228313385]
We propose a novel multi-view clustering model, named Dual-space Co-training Large-scale Multi-view Clustering (DSCMC).
The main objective of our approach is to enhance clustering performance by leveraging co-training in two distinct spaces.
Our algorithm has approximately linear computational complexity, enabling application to large-scale datasets.
arXiv Detail & Related papers (2024-01-28T16:30:13Z)
- SIGMA: Scale-Invariant Global Sparse Shape Matching [50.385414715675076]
We propose a novel mixed-integer programming (MIP) formulation for generating precise sparse correspondences for non-rigid shapes.
We show state-of-the-art results for sparse non-rigid matching on several challenging 3D datasets.
arXiv Detail & Related papers (2023-08-16T14:25:30Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- AVIDA: Alternating method for Visualizing and Integrating Data [1.6637373649145604]
AVIDA is a framework for simultaneously performing data alignment and dimension reduction.
We show that AVIDA correctly aligns high-dimensional datasets without common features.
In general applications, other methods can be used for the alignment and dimension reduction modules.
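Since the alignment and dimension-reduction modules are interchangeable, the overall loop reduces to a skeleton like the following; the function signatures are hypothetical placeholders, not AVIDA's API.

```python
# Hypothetical skeleton of an alternating align-and-reduce loop with
# pluggable modules; `reduce` maps a dataset to low-dim coordinates,
# `align` maps one embedding onto another (e.g., a Procrustes fit).
def avida_sketch(X, Y, reduce, align, n_iter=10):
    Zx, Zy = reduce(X), reduce(Y)
    for _ in range(n_iter):
        Zx = align(Zx, Zy)  # alignment step
        # a full implementation would also refit the reduction against
        # the aligned coordinates; omitted in this sketch
    return Zx, Zy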
arXiv Detail & Related papers (2022-05-31T22:36:10Z)
- Auto-weighted Multi-view Feature Selection with Graph Optimization [90.26124046530319]
We propose a novel unsupervised multi-view feature selection model based on graph learning.
A key contribution is that the consensus similarity graph shared by different views is learned during the feature selection procedure.
Experiments on various datasets demonstrate the superiority of the proposed method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T03:25:25Z)
- Manifold Partition Discriminant Analysis [42.11470531267327]
We propose a novel algorithm for supervised dimensionality reduction named Manifold Partition Discriminant Analysis (MPDA).
It aims to find a linear embedding space where the within-class similarity is achieved along the direction that is consistent with the local variation of the data manifold.
MPDA explicitly parameterizes the connections of tangent spaces and represents the data manifold in a piecewise manner.
arXiv Detail & Related papers (2020-11-23T16:33:23Z)
- Model Inversion Networks for Model-Based Optimization [110.24531801773392]
We propose model inversion networks (MINs), which learn an inverse mapping from scores to inputs.
MINs can scale to high-dimensional input spaces and leverage offline logged data for both contextual and non-contextual optimization problems.
We evaluate MINs on tasks from the Bayesian optimization literature, high-dimensional model-based optimization problems over images and protein designs, and contextual bandit optimization from logged data.
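A minimal sketch of the inverse-mapping idea on synthetic data, using a scikit-learn MLP regressor in place of the paper's architecture; querying slightly above the best observed score is an illustrative heuristic, not the paper's procedure.

```python
# Toy model inversion: fit a regressor from scores y back to inputs x,
# then query it at a high target score to propose a new design.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))                               # logged designs
y = (X[:, 0] - X[:, 1] ** 2) + 0.1 * rng.normal(size=1000)   # logged scores
inv = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
inv.fit(y.reshape(-1, 1), X)                  # inverse map: score -> input
x_new = inv.predict([[y.max() + 0.5]])        # query above the best score seen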
arXiv Detail & Related papers (2019-12-31T18:06:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.