Implicit Multidimensional Projection of Local Subspaces
- URL: http://arxiv.org/abs/2009.03259v2
- Date: Thu, 20 Jul 2023 12:11:56 GMT
- Title: Implicit Multidimensional Projection of Local Subspaces
- Authors: Rongzheng Bian, Yumeng Xue, Liang Zhou, Jian Zhang, Baoquan Chen,
Daniel Weiskopf, Yunhai Wang
- Abstract summary: We propose a visualization method to understand the effect of multidimensional projection on local subspaces.
Our method is able to analyze the shape and directional information of the local subspace to gain more insights into the global structure of the data.
- Score: 42.86321366724868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a visualization method to understand the effect of
multidimensional projection on local subspaces, using implicit function
differentiation. Here, we understand the local subspace as the multidimensional
local neighborhood of data points. Existing methods focus on the projection of
multidimensional data points and ignore the neighborhood information. Our
method is able to analyze the shape and directional information of the local
subspace to gain more insights into the global structure of the data through
the perception of local structures. Local subspaces are fitted by
multidimensional ellipses that are spanned by basis vectors. An accurate and
efficient vector transformation method is proposed based on analytical
differentiation of multidimensional projections formulated as implicit
functions. The results are visualized as glyphs and analyzed using a full set
of specifically-designed interactions supported in our efficient web-based
visualization tool. The usefulness of our method is demonstrated using various
multi- and high-dimensional benchmark datasets. Our implicit differentiation
vector transformation is evaluated through numerical comparisons; the overall
method is evaluated through exploration examples and use cases.
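To make the idea concrete, here is a minimal sketch, not the paper's implementation: fit the local subspace around a point as an ellipse via PCA over its k nearest neighbors, then transport the ellipse's basis vectors into the 2D view with the Jacobian of the projection. The toy data, the choice of PCA as the projection, the neighborhood size, and the helper functions are illustrative assumptions; the Jacobian is approximated by finite differences here, whereas the paper derives it analytically for implicitly defined projections.

```python
# Illustrative sketch only: local-ellipse fitting plus Jacobian-based transport
# of its basis vectors into projection space. The paper differentiates implicit
# projections analytically; finite differences are used here as a stand-in.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                 # toy high-dimensional data

proj = PCA(n_components=2).fit(X)              # stand-in for any projection
f = proj.transform                             # out-of-sample map R^10 -> R^2

def jacobian(f, x, eps=1e-5):
    """Central finite-difference Jacobian of f at x, shape (2, d)."""
    d = x.size
    m = f(x[None])[0].size
    J = np.empty((m, d))
    for j in range(d):
        e = np.zeros(d); e[j] = eps
        J[:, j] = (f((x + e)[None])[0] - f((x - e)[None])[0]) / (2 * eps)
    return J

def local_ellipse(X, i, k=15):
    """Axes (basis vectors) and radii of the local subspace around point i."""
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    idx = nn.kneighbors(X[i][None], return_distance=False)[0]
    local = PCA().fit(X[idx])
    return local.components_, np.sqrt(local.explained_variance_)

i = 0
axes, radii = local_ellipse(X, i)              # high-dimensional ellipse
J = jacobian(f, X[i])
glyph_axes = (J @ (axes * radii[:, None]).T).T # 2D axes for a glyph at f(X[i])
print(glyph_axes[:3])
```

In the paper's setting the projection is only implicitly defined (e.g., as the minimizer of a cost function), so the Jacobian would come from implicit differentiation rather than finite differences; the transformed, scaled axes would then be rendered as ellipse glyphs in the 2D view.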
Related papers
- IsUMap: Manifold Learning and Data Visualization leveraging Vietoris-Rips filtrations [0.08796261172196743]
We present a systematic and detailed construction of a metric representation for locally distorted metric spaces.
Our approach addresses limitations in existing methods by accommodating non-uniform data distributions and intricate local geometries.
arXiv Detail & Related papers (2024-07-25T07:46:30Z)
- Entropic Optimal Transport Eigenmaps for Nonlinear Alignment and Joint Embedding of High-Dimensional Datasets [11.105392318582677]
We propose a principled approach for aligning and jointly embedding a pair of datasets with theoretical guarantees.
Our approach leverages the leading singular vectors of the EOT plan matrix between two datasets to extract their shared underlying structure; a minimal sketch of this step appears after this list.
We show that in a high-dimensional regime, the EOT plan recovers the shared manifold structure by approximating a kernel function evaluated at the locations of the latent variables.
arXiv Detail & Related papers (2024-07-01T18:48:55Z)
- Boundary Detection Algorithm Inspired by Locally Linear Embedding [8.259071011958254]
We propose a method for detecting boundary points inspired by the widely used locally linear embedding algorithm.
We implement this method using two neighborhood search schemes: the $\epsilon$-radius ball scheme and the $K$-nearest neighbor scheme.
arXiv Detail & Related papers (2024-06-26T16:05:57Z)
- Learning Implicit Feature Alignment Function for Semantic Segmentation [51.36809814890326]
Implicit Feature Alignment function (IFA) is inspired by the rapidly expanding topic of implicit neural representations.
We show that IFA implicitly aligns the feature maps at different levels and is capable of producing segmentation maps in arbitrary resolutions.
Our method can be combined with various architectures and achieves a state-of-the-art accuracy trade-off on common benchmarks.
arXiv Detail & Related papers (2022-06-17T09:40:14Z)
- Incorporating Texture Information into Dimensionality Reduction for High-Dimensional Images [65.74185962364211]
We present a method for incorporating neighborhood information into distance-based dimensionality reduction methods.
Based on a classification of different methods for comparing image patches, we explore a number of different approaches.
arXiv Detail & Related papers (2022-02-18T13:17:43Z)
- UnProjection: Leveraging Inverse-Projections for Visual Analytics of High-Dimensional Data [63.74032987144699]
We present NNInv, a deep learning technique with the ability to approximate the inverse of any projection or mapping.
NNInv learns to reconstruct high-dimensional data from any arbitrary point on a 2D projection space, giving users the ability to interact with the learned high-dimensional representation in a visual analytics system; a minimal sketch of this inverse-projection idea appears after this list.
arXiv Detail & Related papers (2021-11-02T17:11:57Z)
- Unsupervised Sentence-embeddings by Manifold Approximation and Projection [3.04585143845864]
We propose a novel technique to generate sentence-embeddings in an unsupervised fashion by projecting the sentences onto a fixed-dimensional manifold.
We test our approach, which we term EMAP or Embeddings by Manifold Approximation and Projection, on six publicly available text-classification datasets of varying size and complexity.
arXiv Detail & Related papers (2021-02-07T13:27:58Z)
- Manifold Partition Discriminant Analysis [42.11470531267327]
We propose a novel algorithm for supervised dimensionality reduction named Manifold Partition Discriminant Analysis (MPDA).
It aims to find a linear embedding space where the within-class similarity is achieved along the direction that is consistent with the local variation of the data manifold.
MPDA explicitly parameterizes the connections of tangent spaces and represents the data manifold in a piecewise manner.
arXiv Detail & Related papers (2020-11-23T16:33:23Z)
- Two-Dimensional Semi-Nonnegative Matrix Factorization for Clustering [50.43424130281065]
We propose a new Semi-Nonnegative Matrix Factorization method for 2-dimensional (2D) data, named TS-NMF.
It overcomes the drawback of existing methods that seriously damage the spatial information of the data by converting 2D data to vectors in a preprocessing step.
arXiv Detail & Related papers (2020-05-19T05:54:14Z)
- Spatial Pyramid Based Graph Reasoning for Semantic Segmentation [67.47159595239798]
We apply graph convolution to the semantic segmentation task and propose an improved Laplacian.
The graph reasoning is directly performed in the original feature space organized as a spatial pyramid.
We achieve comparable performance with advantages in computational and memory overhead.
arXiv Detail & Related papers (2020-03-23T12:28:07Z)
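The following is a hedged sketch of the joint-embedding step summarized in the Entropic Optimal Transport Eigenmaps entry above, not the authors' implementation. The toy paired datasets, uniform marginals, squared-Euclidean cost, regularization strength, and the plain (unstabilized) Sinkhorn loop are all illustrative assumptions: compute the EOT plan between two datasets and take its leading singular vectors as shared embedding coordinates.

```python
# Hedged sketch of the EOT-eigenmaps idea (not the authors' code): Sinkhorn plan
# between two datasets, then SVD of the plan to obtain joint embeddings.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))                  # dataset 1 (toy)
Z = X + 0.05 * rng.normal(size=(200, 5))       # dataset 2: a noisy second view

def sinkhorn_plan(C, reg=0.1, n_iter=500):
    """Entropic OT plan between uniform marginals for cost matrix C."""
    a = np.full(C.shape[0], 1.0 / C.shape[0])
    b = np.full(C.shape[1], 1.0 / C.shape[1])
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

C = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # squared-Euclidean cost
P = sinkhorn_plan(C / C.max())                        # EOT plan (200 x 200)

# Leading non-trivial singular vectors give aligned low-dimensional coordinates
# for the two datasets in a shared space.
U, s, Vt = np.linalg.svd(P)
emb_X, emb_Z = U[:, 1:3], Vt[1:3, :].T                # 2-D joint embeddings
print(emb_X.shape, emb_Z.shape)                       # (200, 2) (200, 2)
```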
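Here is also a minimal sketch of the inverse-projection idea from the UnProjection (NNInv) entry above. The toy data, the PCA projection, and the small scikit-learn MLP are illustrative assumptions rather than the NNInv architecture: train a regressor from 2D projection coordinates back to the original feature space and query it at arbitrary 2D points.

```python
# Hedged sketch of inverse projection (not the NNInv architecture): learn a map
# from 2D projection coordinates back to the original feature space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))                  # toy high-dimensional data
Y = PCA(n_components=2).fit_transform(X)         # any 2D projection works here

inv = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=1)
inv.fit(Y, X)                                    # learn projection -> data

# Query the learned inverse at an arbitrary 2D location, e.g. a point a user
# brushes in a scatterplot of the projection.
query = np.array([[0.5, -0.3]])
x_hat = inv.predict(query)                       # reconstructed 20-D point
print(x_hat.shape)                               # (1, 20)
```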