Generalized Penalty for Circular Coordinate Representation
- URL: http://arxiv.org/abs/2006.02554v3
- Date: Tue, 23 Nov 2021 18:38:02 GMT
- Title: Generalized Penalty for Circular Coordinate Representation
- Authors: Hengrui Luo, Alice Patania, Jisu Kim, Mikael Vejdemo-Johansson
- Abstract summary: Topological Data Analysis (TDA) provides novel approaches to analyze the geometrical shapes and topological structures of a dataset.
We propose a method to adapt the circular coordinate framework to take into account the roughness of circular coordinates in change-point and high-dimensional applications.
- Score: 4.054792094932801
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Topological Data Analysis (TDA) provides novel approaches that allow us to
analyze the geometrical shapes and topological structures of a dataset. As one
important application, TDA can be used for data visualization and dimension
reduction. We follow the framework of circular coordinate representation, which
allows us to perform dimension reduction and visualization for high-dimensional
datasets on a torus using persistent cohomology. In this paper, we propose a
method to adapt the circular coordinate framework to take into account the
roughness of circular coordinates in change-point and high-dimensional
applications. We use a generalized penalty function instead of an $L_{2}$
penalty in the traditional circular coordinate algorithm. We provide simulation
experiments and real data analysis to support our claim that circular
coordinates with generalized penalty will detect the change in high-dimensional
datasets under different sampling schemes while preserving the topological
structures.
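The contrast between the traditional $L_{2}$ penalty and a generalized (e.g. $L_{1}$) penalty can be illustrated on a toy cycle graph. The sketch below is not the paper's implementation; the coboundary construction, the linear-programming $L_{1}$ solver, and all function names are illustrative assumptions. Given an integer cocycle $\alpha$ from persistent cohomology, the circular coordinate algorithm replaces it with $\alpha - Df$, where $f$ minimizes a penalty on the residual: the $L_{2}$ choice spreads the winding evenly (smooth coordinates), while an $L_{1}$ choice retains a sparse, "rough" representative, which is the behavior exploited for change-point detection.

```python
import numpy as np
from scipy.optimize import linprog

def coboundary_cycle(n):
    """Coboundary matrix D of a cycle graph with n nodes and n edges.
    Edge i runs from node i to node (i+1) % n; (D f)[i] = f[head] - f[tail]."""
    D = np.zeros((n, n))
    for i in range(n):
        D[i, (i + 1) % n] = 1.0
        D[i, i] = -1.0
    return D

def smooth_l2(alpha, D):
    """Classical circular coordinates: L2 (least-squares) smoothing of the cocycle."""
    f, *_ = np.linalg.lstsq(D, alpha, rcond=None)
    return alpha - D @ f

def smooth_l1(alpha, D):
    """Generalized (L1) penalty: min_f ||alpha - D f||_1, posed as a linear program.
    Variables (f, t); minimize sum(t) subject to -t <= alpha - D f <= t."""
    m, n = D.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[-D, -np.eye(m)],   # alpha - D f <= t
                     [ D, -np.eye(m)]])  # D f - alpha <= t
    b_ub = np.concatenate([-alpha, alpha])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
    f = res.x[:n]
    return alpha - D @ f

n = 8
D = coboundary_cycle(n)
alpha = np.zeros(n)
alpha[-1] = 1.0  # integer cocycle generating H^1 of the cycle

edge_l2 = smooth_l2(alpha, D)  # winding spread evenly: ~1/n on every edge
edge_l1 = smooth_l1(alpha, D)  # sparse residual; the "roughness" is preserved
```

Both representatives keep the total winding number (the edge values sum to 1), so the topological structure is preserved; only the distribution of the coordinate's derivative along the cycle changes with the penalty.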
Related papers
- KP-RED: Exploiting Semantic Keypoints for Joint 3D Shape Retrieval and Deformation [87.23575166061413]
KP-RED is a unified KeyPoint-driven REtrieval and Deformation framework.
It takes object scans as input and jointly retrieves and deforms the most geometrically similar CAD models.
arXiv Detail & Related papers (2024-03-15T08:44:56Z)
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifold, named soft manifold, that can solve this situation.
Using soft manifold for graph embedding, we can provide continuous spaces to pursue any task in data analysis over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- Shape-Graph Matching Network (SGM-net): Registration for Statistical Shape Analysis [20.58923754314197]
This paper focuses on the statistical analysis of shapes of data objects called shape graphs.
A critical need here is a constrained registration of points (nodes to nodes, edges to edges) across objects.
This paper tackles this registration problem using a novel neural-network architecture.
arXiv Detail & Related papers (2023-08-14T00:42:03Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our methods perform favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Study of Manifold Geometry using Multiscale Non-Negative Kernel Graphs [32.40622753355266]
We propose a framework to study the geometric structure of the data.
We make use of our recently introduced non-negative kernel (NNK) regression graphs to estimate the point density, intrinsic dimension, and linearity (curvature) of the data manifold.
arXiv Detail & Related papers (2022-10-31T17:01:17Z)
- Spherical Rotation Dimension Reduction with Geometric Loss Functions [0.0]
A prime example of such a dataset is a collection of cell cycle measurements, where the inherently cyclical nature of the process can be represented as a circle or sphere.
We propose a nonlinear dimension reduction method, Spherical Rotation Component Analysis (SRCA), that incorporates geometric information to better approximate the low-dimensional manifold.
arXiv Detail & Related papers (2022-04-23T02:03:55Z)
- Geometry-Aware Self-Training for Unsupervised Domain Adaptation on Object Point Clouds [36.49322708074682]
This paper proposes a new method of geometry-aware self-training (GAST) for unsupervised domain adaptation of object point cloud classification.
Specifically, this paper aims to learn a domain-shared representation of semantic categories, via two novel self-supervised geometric learning tasks as feature regularization.
On the other hand, a diverse point distribution across datasets can be normalized with a novel curvature-aware distortion localization.
arXiv Detail & Related papers (2021-08-20T13:29:11Z)
- LOCA: LOcal Conformal Autoencoder for standardized data coordinates [6.608924227377152]
We present a method for learning an embedding in $\mathbb{R}^{d}$ that is isometric to the latent variables of the manifold.
Our embedding is obtained using a LOcal Conformal Autoencoder (LOCA), an algorithm that constructs an embedding to rectify deformations.
We also apply LOCA to single-site Wi-Fi localization data, and to $3$-dimensional curved surface estimation.
arXiv Detail & Related papers (2020-04-15T17:49:37Z)
- A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
- Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs [81.12344211998635]
A common approach to defining convolutions on meshes is to interpret them as graphs and apply graph convolutional networks (GCNs).
We propose Gauge Equivariant Mesh CNNs which generalize GCNs to apply anisotropic gauge equivariant kernels.
Our experiments validate the significantly improved expressivity of the proposed model over conventional GCNs and other methods.
arXiv Detail & Related papers (2020-03-11T17:21:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.