Deep Invertible Approximation of Topologically Rich Maps between
Manifolds
- URL: http://arxiv.org/abs/2210.00577v1
- Date: Sun, 2 Oct 2022 17:14:43 GMT
- Title: Deep Invertible Approximation of Topologically Rich Maps between
Manifolds
- Authors: Michael Puthawala, Matti Lassas, Ivan Dokmanic, Pekka Pankka, Maarten
de Hoop
- Abstract summary: We show how to design neural networks that allow for stable universal approximation of maps between topologically interesting manifolds.
By exploiting the topological parallels between locally bilipschitz maps, covering spaces, and local homeomorphisms, we find that a novel network of the form $\mathcal{T} \circ p \circ \mathcal{E}$ is a universal approximator of local diffeomorphisms.
We also outline possible extensions of our architecture to address molecular imaging of molecules with symmetries.
- Score: 17.60434807901964
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How can we design neural networks that allow for stable universal
approximation of maps between topologically interesting manifolds? The answer
is with a coordinate projection. Neural networks based on topological data
analysis (TDA) use tools such as persistent homology to learn topological
signatures of data and stabilize training but may not be universal
approximators or have stable inverses. Other architectures universally
approximate data distributions on submanifolds but only when the latter are
given by a single chart, making them unable to learn maps that change topology.
By exploiting the topological parallels between locally bilipschitz maps,
covering spaces, and local homeomorphisms, and by using universal approximation
arguments from machine learning, we find that a novel network of the form
$\mathcal{T} \circ p \circ \mathcal{E}$, where $\mathcal{E}$ is an injective
network, $p$ a fixed coordinate projection, and $\mathcal{T}$ a bijective
network, is a universal approximator of local diffeomorphisms between compact
smooth submanifolds embedded in $\mathbb{R}^n$. We emphasize the case when the
target map changes topology. Further, we find that by constraining the
projection $p$, multivalued inversions of our networks can be computed without
sacrificing universality. As an application, we show that learning a group
invariant function with unknown group action naturally reduces to the question
of learning local diffeomorphisms for finite groups. Our theory permits us to
recover orbits of the group action. We also outline possible extensions of our
architecture to address molecular imaging of molecules with symmetries.
Finally, our analysis informs the choice of topologically expressive starting
spaces in generative problems.
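The abstract composes three pieces: an injective network $\mathcal{E}$, a fixed coordinate projection $p$, and a bijective network $\mathcal{T}$. The following is a minimal NumPy sketch of how such a composition $\mathcal{T} \circ p \circ \mathcal{E}$ can be assembled and partially inverted; the zero-padding form of $\mathcal{E}$, the single affine coupling layer used for $\mathcal{T}$, and all dimensions are illustrative assumptions, not the construction from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_lift, d_out = 2, 4, 3   # hypothetical dimensions for the example

# --- E: injective map R^d_in -> R^d_lift (zero-pad, then a linear map) ------
# Adding a scaled identity to a random matrix makes A invertible in practice,
# so E stays injective for this illustration.
A = rng.standard_normal((d_lift, d_lift)) + d_lift * np.eye(d_lift)

def E(x):
    x_pad = np.concatenate([x, np.zeros(d_lift - d_in)])  # zero-padding is injective
    return A @ x_pad

# --- p: fixed coordinate projection R^d_lift -> R^d_out ---------------------
def p(z):
    return z[:d_out]  # keep the first d_out coordinates, drop the rest

# --- T: bijective map R^d_out -> R^d_out (one affine coupling layer) --------
W = rng.standard_normal((1, d_out - 1))
b = rng.standard_normal(1)

def T(y):
    y1, y2 = y[:d_out - 1], y[d_out - 1:]
    log_scale = np.tanh(W @ y1)        # bounded log-scale keeps the map stable
    return np.concatenate([y1, y2 * np.exp(log_scale) + b])

def T_inv(y):
    y1, y2 = y[:d_out - 1], y[d_out - 1:]
    log_scale = np.tanh(W @ y1)
    return np.concatenate([y1, (y2 - b) * np.exp(-log_scale)])

# --- full network F = T ∘ p ∘ E ----------------------------------------------
def F(x):
    return T(p(E(x)))

x = rng.standard_normal(d_in)
y = F(x)

# T is exactly invertible; the preimage of T_inv(y) under p is the affine
# fiber {z in R^d_lift : z[:d_out] = T_inv(y)}, so inversion through p is
# multivalued (the projected-out coordinates are free).
print("p(E(x))     :", p(E(x)))
print("T_inv(F(x)) :", T_inv(y))   # matches p(E(x)) up to round-off
```

Inverting the sketch amounts to applying the exact inverse of $\mathcal{T}$, choosing a point in the fiber of $p$, and inverting $\mathcal{E}$ on its image, which is where the multivalued inversion mentioned in the abstract enters.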
Related papers
- A rank decomposition for the topological classification of neural representations [0.0]
In this work, we leverage the fact that neural networks are equivalent to continuous piecewise-affine maps.
We study the homology groups of the quotient of a manifold $\mathcal{M}$ by a subset $A$, assuming some minimal properties on these spaces.
We show that in randomly initialized narrow networks, there will be regions in which the (co)homology groups of a data manifold can change.
arXiv Detail & Related papers (2024-04-30T17:01:20Z)
- Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifolds, named soft manifolds, that can address this situation.
Using soft manifolds for graph embedding, we can provide continuous spaces to pursue any task in data analysis over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z)
- An Approximation Theory for Metric Space-Valued Functions With A View Towards Deep Learning [25.25903127886586]
We build universal function approximators of continuous maps between arbitrary Polish metric spaces $\mathcal{X}$ and $\mathcal{Y}$.
In particular, we show that the required number of Dirac measures is determined by the structure of $\mathcal{X}$ and $\mathcal{Y}$.
arXiv Detail & Related papers (2023-04-24T16:18:22Z)
- Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
arXiv Detail & Related papers (2021-10-28T14:59:41Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- The decomposition of the higher-order homology embedding constructed from the $k$-Laplacian [5.076419064097734]
The null space of the $k$-th order Laplacian $\mathbf{\mathcal{L}}_k$ encodes the non-trivial topology of a manifold or a network.
We propose an algorithm to factorize the homology embedding into subspaces corresponding to a manifold's simplest topological components.
arXiv Detail & Related papers (2021-07-23T00:40:01Z)
- Geometric Deep Learning and Equivariant Neural Networks [0.9381376621526817]
We survey the mathematical foundations of geometric deep learning, focusing on group equivariant and gauge equivariant neural networks.
We develop gauge equivariant convolutional neural networks on arbitrary manifolds $\mathcal{M}$ using principal bundles with structure group $K$ and equivariant maps between sections of associated vector bundles.
We analyze several applications of this formalism, including semantic segmentation and object detection networks.
arXiv Detail & Related papers (2021-05-28T15:41:52Z)
- A deep network construction that adapts to intrinsic dimensionality beyond the domain [79.23797234241471]
We study the approximation of two-layer compositions $f(x) = g(\phi(x))$ via deep networks with ReLU activation.
We focus on two intuitive and practically relevant choices for $\phi$: the projection onto a low-dimensional embedded submanifold and a distance to a collection of low-dimensional sets.
arXiv Detail & Related papers (2020-08-06T09:50:29Z)
- MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning [90.20563679417567]
This paper introduces MDP homomorphic networks for deep reinforcement learning.
MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP.
We show that such networks converge faster than unstructured networks on CartPole, a grid world and Pong.
arXiv Detail & Related papers (2020-06-30T15:38:37Z)
- Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z)