Thoughts on the Consistency between Ricci Flow and Neural Network
Behavior
- URL: http://arxiv.org/abs/2111.08410v1
- Date: Tue, 16 Nov 2021 12:23:09 GMT
- Title: Thoughts on the Consistency between Ricci Flow and Neural Network
Behavior
- Authors: Jun Chen, Tianxin Huang, Wenzhou Chen, Yong Liu
- Abstract summary: In this paper, we propose the linearly nearly Euclidean metric to assist manifold micro-surgery.
We prove the dynamical stability and convergence of the metrics close to the linearly nearly Euclidean metric under the Ricci-DeTurck flow.
- Score: 11.912554495037362
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Ricci flow is a partial differential equation for evolving the metric in
a Riemannian manifold to make it more regular. However, in most cases, the
Ricci flow tends to develop singularities and lead to divergence of the
solution. In this paper, we propose the linearly nearly Euclidean metric to
assist manifold micro-surgery; that is, we prove the dynamical
stability and convergence of the metrics close to the linearly nearly Euclidean
metric under the Ricci-DeTurck flow. In practice, from the information geometry
and mirror descent points of view, we give the steepest descent gradient flow
for neural networks on the linearly nearly Euclidean manifold. During the
training process of the neural network, we observe that its metric will also
regularly converge to the linearly nearly Euclidean metric, which is consistent
with the convergent behavior of linearly nearly Euclidean manifolds under
Ricci-DeTurck flow.
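To make the information-geometric viewpoint concrete, below is a minimal natural-gradient (steepest-descent-under-a-metric) update for a toy logistic-regression model. It is only a generic sketch of the kind of metric-preconditioned gradient flow the abstract alludes to, not the paper's linearly nearly Euclidean construction; every name and constant in it is illustrative.

```python
# Sketch of a natural-gradient step: theta <- theta - lr * G^{-1} * grad,
# where G is the Fisher information of the model. This is a generic
# information-geometry example, NOT the paper's method.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def natural_gradient_step(theta, X, y, lr=0.1, damping=1e-3):
    """One metric-preconditioned descent step for logistic regression."""
    p = sigmoid(X @ theta)                      # model predictions
    grad = X.T @ (p - y) / len(y)               # Euclidean gradient of the NLL
    # Fisher information matrix X^T diag(p(1-p)) X / n, damped for invertibility.
    G = (X.T * (p * (1 - p))) @ X / len(y) + damping * np.eye(len(theta))
    return theta - lr * np.linalg.solve(G, grad)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200) > 0).astype(float)
theta = np.zeros(3)
for _ in range(100):
    theta = natural_gradient_step(theta, X, y)
print("fitted parameters:", theta)
```

Replacing the Euclidean gradient by G^{-1} grad is what a "steepest descent gradient flow" amounts to once a metric G on parameter space has been chosen; the paper's contribution lies in which metric is chosen and how it evolves.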
Related papers
- Kernel Approximation of Fisher-Rao Gradient Flows [52.154685604660465] (arXiv, 2024-10-27)
We present a rigorous investigation of Fisher-Rao and Wasserstein type gradient flows concerning their gradient structures, flow equations, and their kernel approximations.
Specifically, we focus on the Fisher-Rao geometry and its various kernel-based approximations, developing a principled theoretical framework.
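As a concrete toy instance of the object being approximated (the flow itself, not the paper's kernel construction), the Fisher-Rao gradient flow of KL(p || q) on a finite probability simplex takes the replicator-like form dp_i/dt = -p_i (log(p_i/q_i) - sum_j p_j log(p_j/q_j)). A minimal Euler discretization, with illustrative names, is sketched here.

```python
# Toy finite-state Fisher-Rao gradient flow of KL(p || q). Illustrative only;
# the paper's kernel-based approximation is not reproduced here.
import numpy as np

def fisher_rao_kl_flow(p, q, dt=0.05, steps=400):
    for _ in range(steps):
        score = np.log(p / q)
        drift = -p * (score - np.dot(p, score))   # tangent to the simplex
        p = np.clip(p + dt * drift, 1e-12, None)
        p /= p.sum()                               # guard against drifting off the simplex
    return p

q = np.array([0.5, 0.3, 0.2])          # target distribution
p0 = np.array([0.1, 0.1, 0.8])         # initial distribution
print(fisher_rao_kl_flow(p0, q))       # converges toward q
```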
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379] (arXiv, 2024-06-19)
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
- Ricci flow-guided autoencoders in learning time-dependent dynamics [0.0] (arXiv, 2024-01-26)
We present a manifold-based autoencoder method for learning dynamics in time, notably partial differential equations (PDEs).
This can be accomplished by simulating Ricci flow in a physics-informed setting, and manifold quantities can be matched so that Ricci flow is empirically achieved.
We present our method on a range of experiments consisting of PDE data that encompasses desirable characteristics such as periodicity and randomness.
- The Fisher-Rao geometry of CES distributions [50.50897590847961] (arXiv, 2023-10-02)
The Fisher-Rao information geometry allows for leveraging tools from differential geometry.
We will present some practical uses of these geometric tools in the framework of elliptical distributions.
- Curvature-Independent Last-Iterate Convergence for Games on Riemannian Manifolds [77.4346324549323] (arXiv, 2023-06-29)
We show that a step size agnostic to the curvature of the manifold achieves a curvature-independent and linear last-iterate convergence rate.
To the best of our knowledge, the possibility of curvature-independent rates and/or last-iterate convergence has not been considered before.
- Scalable Stochastic Gradient Riemannian Langevin Dynamics in Non-Diagonal Metrics [3.8811062755861956] (arXiv, 2023-03-09)
We propose two non-diagonal metrics that can be used in stochastic-gradient samplers to improve convergence and exploration.
We show that for fully connected neural networks (NNs) with sparsity-inducing priors and convolutional NNs with correlated priors, using these metrics can provide improvements.
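For intuition, a minimal sketch of Langevin sampling preconditioned by a constant, non-diagonal metric G follows; because G is constant here, the metric-derivative correction term vanishes. This is a generic preconditioned update with made-up names, not the specific metrics proposed in the paper.

```python
# Euler-Maruyama discretization of dtheta = (1/2) G grad_log_p dt + G^{1/2} dW,
# which targets p when the preconditioner G is constant (no correction term needed).
import numpy as np

def preconditioned_langevin(grad_log_p, theta0, G, eps=1e-2, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(G)                    # G = L L^T, used to draw N(0, eps*G)
    theta, samples = theta0.copy(), []
    for _ in range(steps):
        noise = np.sqrt(eps) * (L @ rng.normal(size=theta.shape))
        theta = theta + 0.5 * eps * (G @ grad_log_p(theta)) + noise
        samples.append(theta.copy())
    return np.array(samples)

# Example: sample a correlated 2-D Gaussian with log p(x) = -0.5 x^T P x.
P = np.array([[2.0, 0.8], [0.8, 1.0]])           # precision matrix of the target
G = np.linalg.inv(P)                             # a natural non-diagonal preconditioner
samples = preconditioned_langevin(lambda x: -P @ x, np.zeros(2), G)
print("sample covariance:\n", np.cov(samples[1000:].T))
```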
- Learning Discretized Neural Networks under Ricci Flow [51.36292559262042] (arXiv, 2023-02-07)
We study Discretized Neural Networks (DNNs) composed of low-precision weights and activations.
DNNs suffer from either infinite or zero gradients due to the non-differentiable discrete function during training.
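The gradient pathology can be seen directly on a sign quantizer, whose derivative is zero almost everywhere; the snippet below contrasts the exact (zero) gradient with the common straight-through estimator (STE) surrogate. This is only the standard baseline workaround, not the Ricci-flow-based remedy studied in the paper.

```python
# Why discretization breaks backprop, and the usual STE surrogate gradient.
import numpy as np

def binarize(w):
    return np.sign(w)                 # piecewise constant: derivative is 0 a.e.

def ste_backward(grad_out, w, clip=1.0):
    # STE: pretend d binarize / d w = 1 inside [-clip, clip], 0 outside.
    return grad_out * (np.abs(w) <= clip)

w = np.array([-1.7, -0.3, 0.4, 2.1])
print("binarized weights:", binarize(w))
upstream = np.array([0.5, -1.0, 2.0, 0.1])     # gradient arriving from the loss
print("exact gradient   :", np.zeros_like(w))  # d sign(w)/dw vanishes almost everywhere
print("STE surrogate    :", ste_backward(upstream, w))
```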
- A physics-informed search for metric solutions to Ricci flow, their embeddings, and visualisation [0.0] (arXiv, 2022-11-30)
Neural networks with PDEs embedded in their loss functions are employed as function approximators.
A general method is developed and applied to the real torus.
The validity of the solution is verified by comparing the time evolution of scalar curvature with that found using a standard PDE solver.
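For orientation, here is what "embedding the PDE in the loss" looks like in the simplest setting: assuming a conformally flat 2-D metric (an illustration, not necessarily the paper's parametrization), the Ricci flow reduces to a scalar PDE whose residual can be penalized at collocation points.

```latex
% Illustrative reduction, assuming g_{ij} = e^{2u(t,x,y)}\,\delta_{ij}
% (a simplifying assumption for this sketch, not taken from the paper):
\[
\partial_t g_{ij} = -2\,R_{ij}
\quad\Longrightarrow\quad
\partial_t u = e^{-2u}\,\Delta u ,
\]
% so a network u_\theta(t,x,y) can be trained with the PDE in its loss via the
% residual at N collocation points (t_k, x_k, y_k):
\[
\mathcal{L}(\theta) = \frac{1}{N}\sum_{k=1}^{N}
  \Bigl(\partial_t u_\theta - e^{-2u_\theta}\,\Delta u_\theta\Bigr)^{2}\Big|_{(t_k,\,x_k,\,y_k)} .
\]
```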
- Dynamically Stable Poincaré Embeddings for Neural Manifolds [10.76554740227876] (arXiv, 2021-12-21)
We prove that if the initial metric is an $L^2$-norm perturbation deviating from the hyperbolic metric on the Poincaré ball, then the scaled Ricci-DeTurck flow of such metrics converges smoothly and exponentially to the hyperbolic metric.
Specifically, the Ricci flow serves as a natural evolution toward the stable Poincaré ball, which is then mapped back to Euclidean space.
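The "mapped back to Euclidean space" step is typically realized by the exponential and logarithmic maps at the origin of the Poincaré ball; a standard, self-contained sketch of those maps (not code from the paper) is given below.

```python
# Exponential/log maps at the origin of the Poincaré ball (curvature -c):
# generic hyperbolic-geometry utilities illustrating how points move between
# the ball and the Euclidean tangent space.
import numpy as np

def exp0(v, c=1.0, eps=1e-12):
    """Map a tangent vector at the origin into the Poincaré ball."""
    n = np.maximum(np.linalg.norm(v), eps)
    return np.tanh(np.sqrt(c) * n) * v / (np.sqrt(c) * n)

def log0(y, c=1.0, eps=1e-12):
    """Inverse map: from a point in the ball back to the tangent (Euclidean) space."""
    n = np.maximum(np.linalg.norm(y), eps)
    return np.arctanh(np.sqrt(c) * n) * y / (np.sqrt(c) * n)

v = np.array([0.3, -1.2])
x = exp0(v)                       # lies strictly inside the unit ball
print(np.linalg.norm(x) < 1.0)    # True
print(np.allclose(log0(x), v))    # True: log0 inverts exp0
```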
- A Unifying and Canonical Description of Measure-Preserving Diffusions [60.59592461429012] (arXiv, 2021-05-06)
A complete recipe of measure-preserving diffusions in Euclidean space was recently derived unifying several MCMC algorithms into a single framework.
We develop a geometric theory that improves and generalises this construction to any manifold.
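For reference, the Euclidean "complete recipe" referenced above can be stated as follows (from general knowledge of the stochastic-gradient MCMC literature, not quoted from the paper): D is any positive semi-definite matrix field, Q any skew-symmetric one, and the resulting diffusion leaves exp(-H) invariant.

```latex
% Complete recipe of measure-preserving diffusions in Euclidean space, for a
% target density proportional to exp(-H(z)):
\[
\mathrm{d}z = \bigl[-\bigl(D(z)+Q(z)\bigr)\nabla H(z) + \Gamma(z)\bigr]\,\mathrm{d}t
  + \sqrt{2\,D(z)}\;\mathrm{d}W_t ,
\qquad
\Gamma_i(z) = \sum_j \frac{\partial}{\partial z_j}\bigl(D_{ij}(z)+Q_{ij}(z)\bigr),
\]
% where D is positive semi-definite (diffusion part) and Q is skew-symmetric
% (irreversible drift); every choice of (D, Q) yields a diffusion that
% preserves exp(-H).
```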
This list is automatically generated from the titles and abstracts of the papers on this site.