Computational Analysis of Deformable Manifolds: from Geometric Modelling to Deep Learning
- URL: http://arxiv.org/abs/2009.01786v1
- Date: Thu, 3 Sep 2020 16:50:48 GMT
- Title: Computational Analysis of Deformable Manifolds: from Geometric Modelling to Deep Learning
- Authors: Stefan C Schonsheck
- Abstract summary: We will show that the diversity of non-flat spaces provides a rich area of study.
We will explore geometric methods for shape processing and data analysis.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Leo Tolstoy opened his monumental novel Anna Karenina with the now
famous words: "Happy families are all alike; every unhappy family is unhappy in
its own way." A similar notion also applies to mathematical spaces: every flat
space is alike; every unflat space is unflat in its own way. However, rather than being
a source of unhappiness, we will show that the diversity of non-flat spaces
provides a rich area of study. The genesis of the so-called big data era and
the proliferation of social and scientific databases of increasing size have
led to a need for algorithms that can efficiently process, analyze, and even
generate high-dimensional data. However, the curse of dimensionality means
that many classical approaches do not scale well to problems of this size. One
technique for avoiding some of these ill effects is to
exploit the geometric structure of coherent data. In this thesis, we will
explore geometric methods for shape processing and data analysis. More
specifically, we will study techniques for representing manifolds and signals
supported on them through a variety of mathematical tools including, but not
limited to, computational differential geometry, variational PDE modeling, and
deep learning. First, we will explore non-isometric shape matching through
variational modeling. Next, we will use ideas from parallel transport on
manifolds to generalize convolution and convolutional neural networks to
deformable manifolds. Finally, we conclude by proposing a novel auto-regressive
model for capturing the intrinsic geometry and topology of data. Throughout
this work, we will use the idea of computing correspondences as a through-line
to both motivate our work and analyze our results.
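As a hedged illustration of the parallel-transport construction mentioned above, the sketch below implements a discrete parallel-transport convolution on a surface: a single template kernel, defined in the tangent plane of a reference vertex, is transported to every vertex and integrated against the signal over a geodesic neighborhood. The data layout and names (log_coords, frame_rot, etc.) are assumptions for the sketch, with log-map coordinates and transport angles assumed precomputed; this is not the thesis's actual implementation.

```python
import numpy as np

def parallel_transport_conv(signal, neighbors, log_coords, frame_rot, kernel):
    """One channel of a discrete parallel-transport convolution.

    signal:     (V,) scalar signal on the mesh vertices
    neighbors:  neighbors[i] = index array of vertex i's geodesic neighborhood
    log_coords: log_coords[i] = (k_i, 2) tangent-plane (log-map) coordinates
                of those neighbors, expressed in vertex i's local frame
    frame_rot:  (V,) angles aligning each local frame with the reference
                vertex's frame via parallel transport along geodesics
    kernel:     callable R^2 -> R, compactly supported template kernel
    """
    out = np.zeros_like(signal)
    for i, (nbrs, uv) in enumerate(zip(neighbors, log_coords)):
        # Undo the frame rotation so the template is evaluated with a
        # consistent orientation at every vertex.
        c, s = np.cos(-frame_rot[i]), np.sin(-frame_rot[i])
        uv_aligned = uv @ np.array([[c, s], [-s, c]])
        weights = np.array([kernel(p) for p in uv_aligned])
        out[i] = weights @ signal[nbrs]
    return out
```

The key point is that one template is shared across the whole surface, with parallel transport resolving the orientation ambiguity of local tangent coordinates; learning the kernel weights then yields a convolutional layer on a deformable manifold.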
Related papers
- A Survey of Geometric Graph Neural Networks: Data Structures, Models and Applications [67.33002207179923]
This paper presents a survey of data structures, models, and applications related to geometric GNNs.
We provide a unified view of existing models from the geometric message passing perspective.
We also summarize the applications as well as the related datasets to facilitate later research for methodology development and experimental evaluation.
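To make the message-passing view concrete, here is a minimal sketch of one invariant geometric message-passing round, in which messages depend on node coordinates only through pairwise distances. It is a simplified common denominator of such models, not any particular architecture from the survey; phi_m and phi_h stand in for hypothetical learnable functions.

```python
import numpy as np

def geometric_message_passing(h, x, edges, phi_m, phi_h):
    """One round of E(3)-invariant message passing.

    h:     (N, F) node features        x: (N, 3) node coordinates
    edges: iterable of directed (i, j) pairs
    phi_m: message function,  (2F + 1,) -> (F,)
    phi_h: update function,   (2F,)    -> (F,)

    Coordinates enter only through squared distances, so the output is
    unchanged by rotating or translating the point cloud x.
    """
    n, f = h.shape
    agg = np.zeros((n, f))
    for i, j in edges:
        d2 = np.sum((x[i] - x[j]) ** 2)                      # invariant feature
        agg[i] += phi_m(np.concatenate([h[i], h[j], [d2]]))  # sum-aggregate
    return np.stack([phi_h(np.concatenate([h[i], agg[i]])) for i in range(n)])
```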
arXiv Detail & Related papers (2024-03-01T12:13:04Z)
- Geometry-Informed Neural Networks [15.27249535281444]
We introduce geometry-informed neural networks (GINNs)
GINNs are a framework for training shape-generative neural fields without data.
We apply GINNs to several validation problems and a realistic 3D engineering design problem.
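As a rough, generic sketch of what training a shape-generative neural field "without data" can look like: optimize a network f: R^3 -> R so that its zero level set satisfies geometric constraints, with no example shapes. The specific constraints below (interface points plus an eikonal regularizer) are illustrative assumptions, not the GINN paper's actual objectives.

```python
import torch

# Hypothetical design constraint: the generated surface must pass through
# these interface points (stand-ins for a real specification).
interface_pts = torch.rand(64, 3)

# Neural field f: R^3 -> R whose zero level set is the generated shape.
f = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Softplus(),
    torch.nn.Linear(64, 64), torch.nn.Softplus(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

for step in range(1000):
    opt.zero_grad()
    # Constraint 1: the zero level set passes through the interface points.
    loss_interface = f(interface_pts).pow(2).mean()
    # Constraint 2 (eikonal): |grad f| = 1 at random points keeps f
    # distance-like and rules out the trivial solution f == 0.
    x = torch.rand(256, 3, requires_grad=True)
    grad = torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]
    loss_eikonal = (grad.norm(dim=1) - 1).pow(2).mean()
    (loss_interface + loss_eikonal).backward()
    opt.step()
```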
arXiv Detail & Related papers (2024-02-21T18:50:12Z) - Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z) - Exploring Data Geometry for Continual Learning [64.4358878435983]
We study continual learning from a novel perspective by exploring data geometry for the non-stationary stream of data.
Our method dynamically expands the geometry of the underlying space to match growing geometric structures induced by new data.
Experiments show that our method achieves better performance than baseline methods designed in Euclidean space.
arXiv Detail & Related papers (2023-04-08T06:35:25Z) - Towards a mathematical understanding of learning from few examples with
nonlinear feature maps [68.8204255655161]
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z) - Intrinsic Dimension for Large-Scale Geometric Learning [0.0]
A naive approach to determining the dimension of a dataset is based on the number of attributes.
More sophisticated methods derive a notion of intrinsic dimension (ID) that employs more complex feature functions.
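For a concrete example of such an ID estimator, the sketch below implements the Two-NN estimator of Facco et al. (2017), one standard method of the kind benchmarked in this line of work; it is an illustration, not necessarily the estimator this particular paper proposes.

```python
import numpy as np
from scipy.spatial import cKDTree

def two_nn_intrinsic_dimension(X):
    """Two-NN intrinsic dimension estimate (Facco et al., 2017).

    For each point, mu = r2 / r1 is the ratio of the distances to its
    second and first nearest neighbors; under a locally uniform density
    mu follows a Pareto law with exponent equal to the intrinsic
    dimension, giving the maximum-likelihood estimate below.
    """
    dists, _ = cKDTree(X).query(X, k=3)   # k=3: self + two nearest neighbors
    mu = dists[:, 2] / dists[:, 1]
    return len(X) / np.sum(np.log(mu))

# Example: a 2D plane embedded in 10 ambient dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2)) @ rng.normal(size=(2, 10))
print(two_nn_intrinsic_dimension(X))      # prints a value close to 2
```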
arXiv Detail & Related papers (2022-10-11T09:50:50Z) - Algebraic Machine Learning with an Application to Chemistry [0.0]
We develop a machine learning pipeline that captures fine-grained geometric information without relying on smoothness assumptions.
In particular, we propose a method for numerically detecting points lying near the singular locus of the underlying variety.
arXiv Detail & Related papers (2022-05-11T22:41:19Z) - NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One
Go [109.88509362837475]
We present NeuroMorph, a new neural network architecture that takes as input two 3D shapes.
NeuroMorph produces smooth interpolations and point-to-point correspondences between them.
It works well for a large variety of input shapes, including non-isometric pairs from different object categories.
arXiv Detail & Related papers (2021-06-17T12:25:44Z) - Quadric hypersurface intersection for manifold learning in feature space [52.83976795260532]
We introduce a manifold learning technique suitable for moderately high dimension and large datasets.
The manifold is learned from the training data in the form of an intersection of quadric hypersurfaces.
At test time, this manifold can be used to introduce an outlier score for arbitrary new points.
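A hedged sketch of the underlying idea: fit quadrics whose common zero set approximates the training data via the smallest right-singular vectors of a monomial feature matrix, then score new points by the residuals of those quadric equations. This is a bare least-squares illustration, not the paper's actual algorithm.

```python
import numpy as np

def quadric_features(X):
    """Monomials [x_i*x_j (i<=j), x_i, 1]; a quadric is linear in these."""
    n, d = X.shape
    quad = np.stack([X[:, i] * X[:, j]
                     for i in range(d) for j in range(i, d)], axis=1)
    return np.hstack([quad, X, np.ones((n, 1))])

def fit_quadrics(X, k):
    """Coefficients of k quadrics whose common zero set fits the data:
    the k right-singular vectors of Phi(X) with smallest singular values
    minimize ||Phi(X) c|| over unit vectors c."""
    _, _, Vt = np.linalg.svd(quadric_features(X), full_matrices=False)
    return Vt[-k:]

def outlier_score(C, X_new):
    """Residual norm of the learned quadric equations at new points."""
    return np.linalg.norm(quadric_features(X_new) @ C.T, axis=1)

# Example: the unit circle in the plane is cut out by one quadric.
t = np.linspace(0, 2 * np.pi, 200)
X = np.c_[np.cos(t), np.sin(t)]
C = fit_quadrics(X, k=1)
print(outlier_score(C, np.array([[1.0, 0.0], [2.0, 0.0]])))  # small, large
```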
arXiv Detail & Related papers (2021-02-11T18:52:08Z) - Identifying the latent space geometry of network models through analysis
of curvature [7.644165047073435]
We present a method to consistently estimate the manifold type, dimension, and curvature from an empirically relevant class of latent spaces.
Our core insight comes from representing the graph as a noisy distance matrix based on the ties between cliques.
arXiv Detail & Related papers (2020-12-19T00:35:29Z)