Ricci flow-guided autoencoders in learning time-dependent dynamics
- URL: http://arxiv.org/abs/2401.14591v9
- Date: Mon, 17 Feb 2025 20:05:44 GMT
- Title: Ricci flow-guided autoencoders in learning time-dependent dynamics
- Authors: Andrew Gracyk
- Abstract summary: We present a manifold-based autoencoder method for learning dynamics in time, notably partial differential equations (PDEs)
This can be accomplished by parameterizing the latent manifold stage and subsequently simulating Ricci flow in a physics-informed setting.
We showcase that the Ricci flow facilitates qualities such as learning for out-of-distribution data and adversarial robustness on select PDE data.
- Score: 0.0
- Abstract: We present a manifold-based autoencoder method for learning dynamics in time, notably partial differential equations (PDEs), in which the manifold latent space evolves according to Ricci flow. This can be accomplished by parameterizing the latent manifold stage and subsequently simulating Ricci flow in a physics-informed setting, matching manifold quantities so that Ricci flow is empirically achieved. We emphasize dynamics that admit low-dimensional representations. With our method, the manifold, induced by the metric, is discerned through the training procedure, while the latent evolution due to Ricci flow provides an accommodating representation. By use of this flow, we sustain a canonical manifold latent representation for all values in the ambient PDE time interval continuum. We showcase that the Ricci flow facilitates qualities such as learning for out-of-distribution data and adversarial robustness on select PDE data. Moreover, we provide a thorough expansion of our methods in regard to special cases, such as neural discovery of non-parametric geometric flows based on conformally flat metrics with entropic strategies from Ricci flow theory.
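The physics-informed Ricci flow matching described above can be illustrated with a minimal, hypothetical sketch (not the paper's implementation; all names are illustrative). For a conformally flat 2D metric $g = e^{2\phi}(dx^2 + dy^2)$ with $\phi$ periodic in one coordinate, Ricci flow reduces to the log-diffusion equation $\partial_t \phi = e^{-2\phi}\,\partial_x^2 \phi$, and a residual penalty measures how well a latent update obeys the flow:

```python
import math

def ricci_flow_step(phi, dx, dt):
    """One explicit Euler step of 2D conformal Ricci flow.

    For g = exp(2*phi) * (dx^2 + dy^2) with phi periodic in x,
    Ricci flow reduces to d(phi)/dt = exp(-2*phi) * phi_xx.
    """
    n = len(phi)
    out = [0.0] * n
    for i in range(n):
        lap = (phi[(i + 1) % n] - 2.0 * phi[i] + phi[(i - 1) % n]) / dx ** 2
        out[i] = phi[i] + dt * math.exp(-2.0 * phi[i]) * lap
    return out

def flow_residual(phi_old, phi_new, dx, dt):
    """Mean squared physics-informed residual of the Ricci flow equation."""
    n = len(phi_old)
    total = 0.0
    for i in range(n):
        lap = (phi_old[(i + 1) % n] - 2.0 * phi_old[i] + phi_old[(i - 1) % n]) / dx ** 2
        rhs = math.exp(-2.0 * phi_old[i]) * lap
        lhs = (phi_new[i] - phi_old[i]) / dt
        total += (lhs - rhs) ** 2
    return total / n

# Evolve a sinusoidal conformal factor: the flow smooths it toward flatness.
n, dt = 64, 1e-3
dx = 2.0 * math.pi / n
phi = [0.1 * math.sin(i * dx) for i in range(n)]
for _ in range(200):
    phi = ricci_flow_step(phi, dx, dt)
```

In the paper's setting a network parameterizes the latent metric and the residual enters the training loss; here the update is exact, so the residual of consecutive states vanishes up to round-off.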
Related papers
- Physically Interpretable Representation and Controlled Generation for Turbulence Data [39.42376941186934]
This paper proposes a data-driven approach to encode high-dimensional scientific data into low-dimensional, physically meaningful representations.
We validate our approach using 2D Navier-Stokes simulations of flow past a cylinder over a range of Reynolds numbers.
arXiv Detail & Related papers (2025-01-31T17:51:14Z) - Kernel Approximation of Fisher-Rao Gradient Flows [52.154685604660465]
We present a rigorous investigation of Fisher-Rao and Wasserstein type gradient flows concerning their gradient structures, flow equations, and their kernel approximations.
Specifically, we focus on the Fisher-Rao geometry and its various kernel-based approximations, developing a principled theoretical framework.
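For context, the Fisher-Rao gradient flow of a functional $F[\rho]$ takes the standard replicator-type form (stated here for reference; the kernel approximations of the first variation are the paper's contribution):

```latex
\partial_t \rho_t \;=\; -\,\rho_t \left( \frac{\delta F}{\delta \rho}[\rho_t] \;-\; \int \frac{\delta F}{\delta \rho}[\rho_t]\, \mathrm{d}\rho_t \right)
```

Subtracting the mean of the first variation keeps $\rho_t$ normalized as a probability density.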
arXiv Detail & Related papers (2024-10-27T22:52:08Z) - Variational autoencoders with latent high-dimensional steady geometric flows for dynamics [0.0]
We develop approaches to variational autoencoders (VAEs) for PDE-type ambient data with regularizing geometric latent dynamics.
We redevelop the VAE framework such that manifold geometries, subject to our geometric flow, are learned in the intermediary latent space developed by encoders and decoders.
We demonstrate, on our datasets of interest, that our methods perform at least as well as the traditional VAE, and oftentimes better.
arXiv Detail & Related papers (2024-10-14T04:07:45Z) - Generative Modeling on Manifolds Through Mixture of Riemannian Diffusion Processes [57.396578974401734]
We introduce a principled framework for building a generative diffusion process on general manifolds.
Instead of following the denoising approach of previous diffusion models, we construct a diffusion process using a mixture of bridge processes.
We develop a geometric understanding of the mixture process, deriving the drift as a weighted mean of tangent directions to the data points.
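The drift-as-weighted-mean idea admits a toy one-dimensional sketch (hypothetical, Euclidean rather than on a manifold): each Brownian bridge pinned to a data point $y$ at time $T$ contributes the direction $(y - x)/(T - t)$, weighted by the Gaussian transition density of reaching $y$ from $x$:

```python
import math

def mixture_bridge_drift(x, t, data, T=1.0):
    """Drift of a 1D mixture of Brownian bridges (toy Euclidean sketch).

    Each bridge pinned to a data point y at time T contributes the
    direction (y - x) / (T - t); contributions are weighted by the
    Gaussian density of reaching y from x in the remaining time T - t.
    """
    rem = T - t
    weights = [math.exp(-(y - x) ** 2 / (2.0 * rem)) for y in data]
    dirs = [(y - x) / rem for y in data]
    z = sum(weights)
    return sum(w * d for w, d in zip(weights, dirs)) / z
```

Midway between two symmetric data points the weighted directions cancel, while a point nearer one endpoint is pulled toward it.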
arXiv Detail & Related papers (2023-10-11T06:04:40Z) - Learning Discretized Neural Networks under Ricci Flow [48.47315844022283]
We study Discretized Neural Networks (DNNs) composed of low-precision weights and activations.
DNNs suffer from either infinite or zero gradients due to the non-differentiable discrete function during training.
arXiv Detail & Related papers (2023-02-07T10:51:53Z) - A physics-informed search for metric solutions to Ricci flow, their embeddings, and visualisation [0.0]
Neural networks with PDEs embedded in their loss functions are employed as function approximators.
A general method is developed and applied to the real torus.
The validity of the solution is verified by comparing the time evolution of scalar curvature with that found using a standard PDE solver.
arXiv Detail & Related papers (2022-11-30T08:17:06Z) - Manifold Interpolating Optimal-Transport Flows for Trajectory Inference [64.94020639760026]
We present a method called Manifold Interpolating Optimal-Transport Flow (MIOFlow)
MIOFlow learns continuous population dynamics from static snapshot samples taken at sporadic timepoints.
We evaluate our method on simulated data with bifurcations and merges, as well as scRNA-seq data from embryoid body differentiation, and acute myeloid leukemia treatment.
arXiv Detail & Related papers (2022-06-29T22:19:03Z) - Dynamically Stable Poincaré Embeddings for Neural Manifolds [10.76554740227876]
We prove that if initial metrics have an $L^2$-norm perturbation away from the hyperbolic metric on the Poincaré ball, the scaled Ricci-DeTurck flow of such metrics converges smoothly and exponentially to the hyperbolic metric.
Specifically, the role of the Ricci flow is to evolve the metric naturally toward the stable Poincaré ball, which is then mapped back to Euclidean space.
arXiv Detail & Related papers (2021-12-21T13:09:08Z) - Thoughts on the Consistency between Ricci Flow and Neural Network Behavior [11.912554495037362]
In this paper, we propose the linearly nearly Euclidean metric to assist manifold micro-surgery.
We prove the dynamical stability and convergence of the metrics close to the linearly nearly Euclidean metric under the Ricci-DeTurck flow.
arXiv Detail & Related papers (2021-11-16T12:23:09Z) - A Kernel-Based Approach to Non-Stationary Reinforcement Learning in Metric Spaces [53.47210316424326]
KeRNS is an algorithm for episodic reinforcement learning in non-stationary Markov Decision Processes.
We prove a regret bound that scales with the covering dimension of the state-action space and the total variation of the MDP with time.
arXiv Detail & Related papers (2020-07-09T21:37:13Z) - A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
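Concretely, with target density $\pi \propto e^{-V}$ and relative entropy $F[\rho] = \int \rho \log(\rho/\pi)\,\mathrm{d}x$, the Wasserstein gradient flow of $F$ recovers the Fokker-Planck equation (the paper's second-order scheme builds on this standard identity):

```latex
\partial_t \rho
\;=\; \nabla \cdot \left( \rho\, \nabla \frac{\delta F}{\delta \rho} \right)
\;=\; \nabla \cdot \big( \rho\, (\nabla \log \rho + \nabla V) \big)
\;=\; \Delta \rho + \nabla \cdot (\rho \nabla V)
```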
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.