A Hamiltonian driven Geometric Construction of Neural Networks on the Lognormal Statistical Manifold
- URL: http://arxiv.org/abs/2509.25778v1
- Date: Tue, 30 Sep 2025 04:47:17 GMT
- Title: A Hamiltonian driven Geometric Construction of Neural Networks on the Lognormal Statistical Manifold
- Authors: Prosper Rosaire Mama Assandje, Teumsa Aboubakar, Dongho Joseph, Takemi Nakamura
- Abstract summary: This paper presents a method for constructing neural networks intrinsically on statistical manifolds. The construction is driven by the Hamiltonian system that is equivalent to the gradient flow on this manifold. The proposed method offers a new paradigm for building learning systems grounded in the differential geometry of their underlying parameter spaces.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Bridging information geometry with machine learning, this paper presents a method for constructing neural networks intrinsically on statistical manifolds. We demonstrate this approach by formulating a neural network architecture directly on the lognormal statistical manifold. The construction is driven by the Hamiltonian system that is equivalent to the gradient flow on this manifold. First, we define the network's input values using the coordinate system of this Hamiltonian dynamics, naturally embedded in the Poincaré disk. The core of our contribution lies in the derivation of the network's components from geometric principles: the rotation component of the synaptic weight matrix is determined by the Lie group action of SU(1,1) on the disk, while the activation function emerges from the symplectic structure of the system. We subsequently obtain the complete weight matrix, including its translation vector, and the resulting output values. This work shows that the lognormal manifold can be seamlessly viewed as a neural manifold, with its geometric properties dictating a unique and interpretable neural network structure. The proposed method offers a new paradigm for building learning systems grounded in the differential geometry of their underlying parameter spaces.
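As a rough illustration of the first geometric ingredient above (not the authors' code), the sketch below implements the standard Möbius action of SU(1,1) on the Poincaré disk in Python; the parameterization and function names are our own.

```python
import numpy as np

def su11_element(theta, t):
    # [[a, b], [conj(b), conj(a)]] with |a|^2 - |b|^2 = 1 lies in SU(1,1)
    return np.cosh(t) * np.exp(1j * theta), np.sinh(t) + 0j

def su11_action(a, b, z):
    # Mobius map z -> (a z + b) / (conj(b) z + conj(a)); it sends the
    # open unit disk (the Poincare disk) onto itself
    return (a * z + b) / (np.conj(b) * z + np.conj(a))

a, b = su11_element(theta=0.7, t=0.5)
assert np.isclose(abs(a) ** 2 - abs(b) ** 2, 1.0)   # group constraint

z = 0.3 + 0.4j                                      # |z| < 1: a point on the disk
w = su11_action(a, b, z)
print(abs(z), "->", abs(w))                         # image stays inside the disk

# With b = 0 the action is a pure rotation z -> e^{2i theta} z
r = su11_action(np.exp(1j * 0.7), 0.0, z)
assert np.isclose(abs(r), abs(z))
```

The b = 0 case reduces to a pure rotation of the disk, which is the piece the abstract identifies as the rotation component of the synaptic weight matrix; the paper's full construction also extracts a translation vector and an activation function from the symplectic structure.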
Related papers
- Disordered Dynamics in High Dimensions: Connections to Random Matrices and Machine Learning [52.26396748560348]
We provide an overview of high dimensional dynamical systems driven by random matrices. We focus on applications to simple models of learning and generalization in machine learning theory.
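A minimal toy example of the kind of system such overviews study (our illustration, not from the paper): linear dynamics driven by a random coupling matrix, which destabilize once the circular-law radius of the coupling spectrum crosses 1.

```python
import numpy as np

# dx/dt = -x + g*J*x with J_ij ~ N(0, 1/N): by the circular law the
# eigenvalues of g*J fill a disk of radius g, so the fixed point x = 0
# loses stability as the gain g crosses 1.
N, rng = 500, np.random.default_rng(1)
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
for g in (0.5, 1.5):
    eigs = np.linalg.eigvals(-np.eye(N) + g * J)
    print(f"g={g}: max Re(eig) = {eigs.real.max():+.3f}")  # <0 stable, >0 unstable
```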
arXiv Detail & Related papers (2026-01-03T00:12:32Z) - The Neural Differential Manifold: An Architecture with Explicit Geometric Structure [8.201374511929538]
This paper introduces the Neural Differential Manifold (NDM), a novel neural network architecture that explicitly incorporates geometric structure into its fundamental design. We analyze the theoretical advantages of this approach, including its potential for more efficient optimization, enhanced continual learning, and applications in scientific discovery and controllable generative modeling.
arXiv Detail & Related papers (2025-10-29T02:24:27Z) - Local properties of neural networks through the lens of layer-wise Hessians [45.88028371034407]
We introduce a methodology for analyzing neural networks through the lens of layer-wise Hessian matrices. These layer-wise Hessians provide a formal tool for characterizing the local geometry of the parameter space.
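A minimal sketch of a layer-wise Hessian (our setup, not the paper's): the Hessian of the loss with respect to a single layer's weights, i.e. one diagonal block of the full parameter-space Hessian.

```python
import torch

torch.manual_seed(0)
x, y = torch.randn(32, 10), torch.randn(32, 1)
W1, W2 = torch.randn(16, 10) * 0.1, torch.randn(1, 16) * 0.1

def loss_wrt_layer1(w1_flat):
    # loss as a function of layer 1's weights only, other layers frozen
    h = torch.tanh(x @ w1_flat.view(16, 10).T)
    return ((h @ W2.T - y) ** 2).mean()

H = torch.autograd.functional.hessian(loss_wrt_layer1, W1.flatten())
print(H.shape)                                  # (160, 160): one layer-wise block
evals = torch.linalg.eigvalsh(H)
print(evals.min().item(), evals.max().item())   # local curvature spectrum
```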
arXiv Detail & Related papers (2025-10-20T12:34:31Z) - Neural Feature Geometry Evolves as Discrete Ricci Flow [5.645823801022895]
Deep neural networks learn feature representations via complex geometric transformations of the input data manifold. We investigate neural feature geometry through the lens of discrete geometry. We show that nonlinear activations play a crucial role in shaping feature geometry in feedforward neural networks.
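As a simplified stand-in for the paper's discrete-geometry toolkit (assumptions ours), the sketch below tracks how the Forman-Ricci curvature of a k-NN graph built on the features changes across a nonlinear layer.

```python
import numpy as np

def knn_edges(feats, k=5):
    # undirected edge set of the symmetrized k-nearest-neighbor graph
    d = np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    return {(min(i, j), max(i, j)) for i in range(len(feats)) for j in nbrs[i]}

def forman_curvatures(edges, n):
    # Forman-Ricci curvature of an unweighted edge: F(u,v) = 4 - deg(u) - deg(v)
    deg = np.zeros(n, dtype=int)
    for u, v in edges:
        deg[u] += 1; deg[v] += 1
    return [4 - deg[u] - deg[v] for u, v in edges]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))             # raw input features
H = np.tanh(X @ rng.normal(size=(8, 8)))  # features after one nonlinear layer
for name, F in (("input", X), ("layer1", H)):
    E = knn_edges(F)
    print(name, "mean Forman curvature:", np.mean(forman_curvatures(E, len(F))))
```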
arXiv Detail & Related papers (2025-09-26T13:57:46Z) - SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network. Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh. In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z) - A quantum inspired neural network for geometric modeling [14.214656118952178]
We introduce an innovative equivariant Matrix Product State (MPS)-based message-passing strategy.
Our method effectively models complex many-body relationships, suppressing mean-field approximations.
It seamlessly replaces the standard message-passing and layer-aggregation modules intrinsic to geometric GNNs.
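A heavily simplified sketch of the idea (our construction, far from the full equivariant method): aggregating neighbor features through an MPS-style chain contraction rather than a plain sum, so the aggregate can encode many-body correlations that independent mean-field messages discard.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, n_nbrs = 4, 3, 5                              # feature dim, bond dim, neighbors
cores = rng.normal(0, 0.3, size=(n_nbrs, D, d, D))  # one MPS core per neighbor
feats = rng.normal(size=(n_nbrs, d))                # neighbor feature vectors

msg = np.ones(D)                                    # left boundary vector
for core, f in zip(cores, feats):
    # thread each neighbor's features through the tensor chain
    msg = np.einsum("a,adb,d->b", msg, core, f)
print("MPS-aggregated message:", msg)               # D-dim message for the center node
```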
arXiv Detail & Related papers (2024-01-03T15:59:35Z) - A Hitchhiker's Guide to Geometric GNNs for 3D Atomic Systems [87.30652640973317]
Recent advances in computational modelling of atomic systems represent them as geometric graphs with atoms embedded as nodes in 3D Euclidean space.
Geometric Graph Neural Networks have emerged as the preferred machine learning architecture powering applications ranging from protein structure prediction to molecular simulations and material generation.
This paper provides a comprehensive and self-contained overview of the field of Geometric GNNs for 3D atomic systems.
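A generic example of the invariance at the heart of such architectures (not any specific model from the survey): if messages touch the geometry only through pairwise distances, the layer is invariant to rigid motions of the atomic coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(size=(6, 3))                 # atom positions in 3D
h = rng.normal(size=(6, 8))                   # per-atom feature vectors
W = 0.1 * rng.normal(size=(8, 8))

def layer(h, pos, W):
    # geometry enters only through pairwise distances, which rigid motions
    # preserve; tanh(0) = 0 also zeroes out the self-message
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1, keepdims=True)
    msgs = (np.tanh(d) * h[None, :, :]).sum(axis=1)
    return np.tanh(h + msgs @ W)

Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
t = rng.normal(size=3)                        # random translation
assert np.allclose(layer(h, pos, W), layer(h, pos @ Q.T + t, W))
print("layer output unchanged under rotation + translation")
```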
arXiv Detail & Related papers (2023-12-12T18:44:19Z) - Applications of Machine Learning to Modelling and Analysing Dynamical Systems [0.0]
We propose an architecture which combines existing Hamiltonian Neural Network structures into Adaptable Symplectic Recurrent Neural Networks.
This architecture is found to significantly outperform previously proposed neural networks when predicting Hamiltonian dynamics.
We show that this method works efficiently for single-parameter potentials and provides accurate predictions even over long periods of time.
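For orientation, here is a vanilla Hamiltonian Neural Network sketch (the paper's adaptable symplectic recurrent architecture is more involved; the setup below is ours): a network H(q, p) whose symplectic gradient defines the learned flow.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
H = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def dynamics(state):
    # Hamilton's equations from the learned scalar H: (dq/dt, dp/dt) = (dH/dp, -dH/dq)
    state = state.requires_grad_(True)
    dH = torch.autograd.grad(H(state).sum(), state, create_graph=True)[0]
    return torch.stack([dH[:, 1], -dH[:, 0]], dim=1)

# Fit the symplectic gradient to the flow of an ideal spring,
# H = (q^2 + p^2)/2, i.e. dq/dt = p and dp/dt = -q.
opt = torch.optim.Adam(H.parameters(), lr=1e-3)
for step in range(500):
    s = torch.randn(128, 2)                            # sampled states (q, p)
    target = torch.stack([s[:, 1], -s[:, 0]], dim=1)   # true velocities
    loss = ((dynamics(s) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("flow-matching loss:", loss.item())              # decreases toward 0
```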
arXiv Detail & Related papers (2023-07-22T19:04:17Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
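The symmetry in question is easy to verify directly (our illustration, not the paper's code): permuting the hidden neurons, together with the adjacent rows and columns of the weight matrices, leaves the network function unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 10)), rng.normal(size=16)
W2 = rng.normal(size=(3, 16))

def mlp(x, W1, b1, W2):
    return W2 @ np.tanh(W1 @ x + b1)

x = rng.normal(size=10)
P = np.eye(16)[rng.permutation(16)]       # random permutation matrix
# permute rows of W1/b1 and columns of W2 with the same permutation
same = mlp(x, P @ W1, P @ b1, W2 @ P.T)
assert np.allclose(mlp(x, W1, b1, W2), same)
print("outputs identical under hidden-neuron permutation")
```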
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - A singular Riemannian geometry approach to Deep Neural Networks II. Reconstruction of 1-D equivalence classes [78.120734120667]
We build the preimage of a point in the output manifold in the input space.
We focus for simplicity on the case of neural network maps from n-dimensional real spaces to (n - 1)-dimensional real spaces.
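A numerical sketch of that idea for n = 2 (our illustration, not the paper's singular-geometry machinery): trace the 1-D preimage curve of an output value by stepping orthogonally to the gradient, with a Newton correction back onto the level set.

```python
import numpy as np

rng = np.random.default_rng(0)
W, a = rng.normal(size=(4, 2)), rng.normal(size=4)

f = lambda x: a @ np.tanh(W @ x)                       # toy network R^2 -> R
grad = lambda x: (a * (1 - np.tanh(W @ x) ** 2)) @ W   # analytic gradient

x = np.array([0.5, -0.2]); y0 = f(x)
curve = [x.copy()]
for _ in range(200):
    g = grad(x)
    x = x + 0.02 * np.array([-g[1], g[0]]) / np.linalg.norm(g)  # step along the curve
    g = grad(x)
    x = x - (f(x) - y0) * g / (g @ g)                  # project back onto the level set
    curve.append(x.copy())
print("max |f(x) - y0| along traced preimage:",
      max(abs(f(p) - y0) for p in curve))
```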
arXiv Detail & Related papers (2021-12-17T11:47:45Z) - Hyperbolic Neural Networks++ [66.16106727715061]
We generalize the fundamental components of neural networks in a single hyperbolic geometry model, namely, the Poincaré ball model.
Experiments show the superior parameter efficiency of our methods compared to conventional hyperbolic components, as well as greater stability and better performance than their Euclidean counterparts.
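The basic gyrovector operation underlying such layers is textbook material; the sketch below implements Möbius addition on the Poincaré ball with curvature c = 1 (a standalone illustration, not the paper's code).

```python
import numpy as np

def mobius_add(x, y):
    # Mobius addition: the hyperbolic replacement for vector addition
    xy, x2, y2 = x @ y, x @ x, y @ y
    num = (1 + 2 * xy + y2) * x + (1 - x2) * y
    return num / (1 + 2 * xy + x2 * y2)

rng = np.random.default_rng(0)
x, y = 0.3 * rng.normal(size=3), 0.3 * rng.normal(size=3)
x, y = x / (1 + np.linalg.norm(x)), y / (1 + np.linalg.norm(y))  # keep inside ball
z = mobius_add(x, y)
print(np.linalg.norm(z) < 1)              # the ball is closed under Mobius addition
print(np.allclose(z, mobius_add(y, x)))   # generally False: not commutative
```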
arXiv Detail & Related papers (2020-06-15T08:23:20Z) - Equivalence in Deep Neural Networks via Conjugate Matrix Ensembles [0.0]
A numerical approach is developed for detecting the equivalence of deep learning architectures.
The empirical evidence supports the phenomenon that the difference between the spectral densities of neural architectures and the corresponding conjugate circular ensembles is vanishing.
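The paper's exact ensembles and test statistic are not spelled out here, so the sketch below only shows the generic shape of such a comparison (assumptions ours): the eigenvalue-angle density of the unitary part of a weight matrix versus a circular-unitary-ensemble sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

def haar_unitary(n):
    # sample from the circular unitary ensemble via phase-corrected QR
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

W = rng.normal(size=(n, n))          # stand-in for a trained weight matrix
u, _, vh = np.linalg.svd(W)
U_net = u @ vh                       # unitary (polar) part of W

angles = lambda M: np.angle(np.linalg.eigvals(M))
bins = np.linspace(-np.pi, np.pi, 31)
h_net, _ = np.histogram(angles(U_net), bins=bins, density=True)
h_cue, _ = np.histogram(angles(haar_unitary(n)), bins=bins, density=True)
print("mean spectral-density gap:", np.abs(h_net - h_cue).mean())  # small when they agree
```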
arXiv Detail & Related papers (2020-06-14T12:34:13Z)