Constructive Lyapunov Functions via Topology-Preserving Neural Networks
- URL: http://arxiv.org/abs/2510.24730v1
- Date: Fri, 10 Oct 2025 17:46:52 GMT
- Title: Constructive Lyapunov Functions via Topology-Preserving Neural Networks
- Authors: Jaehong Oh
- Abstract summary: ONN achieves order-optimal performance on convergence rate, edge efficiency, and computational complexity. Empirical validation on 3M-node semantic networks demonstrates a 99.75% improvement over baseline methods. ORTSF integration into transformers achieves a 14.7% perplexity reduction and 2.3x faster convergence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We prove that ONN achieves order-optimal performance on convergence rate ($\mu \propto \lambda_2$), edge efficiency ($E = N$ for minimal connectivity $k = 2$), and computational complexity ($O(N d^2)$). Empirical validation on 3M-node semantic networks demonstrates 99.75\% improvement over baseline methods, confirming exponential convergence ($\mu = 3.2 \times 10^{-4}$) and topology preservation. ORTSF integration into transformers achieves 14.7\% perplexity reduction and 2.3$\times$ faster convergence on WikiText-103. We establish deep connections to optimal control (Hamilton-Jacobi-Bellman), information geometry (Fisher-efficient natural gradient), topological data analysis (persistent homology computation in $O(KN)$), discrete geometry (Ricci flow), and category theory (adjoint functors). This work transforms Massera's abstract existence theorem into a concrete, scalable algorithm with provable guarantees, opening pathways for constructive stability analysis in neural networks, robotics, and distributed systems.
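A minimal sketch of the spectral quantities behind these claims, assuming standard Laplacian consensus dynamics ($\dot{x} = -Lx$) on a ring graph; the graph, the dynamics, and the code below are illustrative assumptions, not the paper's ONN/ORTSF implementation. The ring is the minimally connected case $k = 2$, for which the edge count is $E = N$ and the decay rate of disagreement is governed by the algebraic connectivity $\lambda_2$ of the Laplacian (the abstract's $\mu \propto \lambda_2$).

```python
# Hypothetical illustration (not the paper's code): check E = N at minimal
# connectivity k = 2 and compute the algebraic connectivity lambda_2 that
# sets the exponential rate of Laplacian-driven (consensus-style) dynamics.
import numpy as np

def ring_laplacian(n: int) -> np.ndarray:
    """Graph Laplacian of an n-node ring (every node has degree k = 2)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

N = 64
L = ring_laplacian(N)

# Edge efficiency: a ring realizes exactly E = N edges at connectivity k = 2.
E = int(L.diagonal().sum() / 2)          # sum of degrees / 2
assert E == N

# Algebraic connectivity: second-smallest eigenvalue of the Laplacian.
lambda_2 = np.sort(np.linalg.eigvalsh(L))[1]   # ~ (2*pi/N)^2 for a large ring

# Under dx/dt = -L x, the disagreement decays like exp(-lambda_2 * t),
# i.e. the exponential rate mu scales with lambda_2.
print(f"E = {E}, lambda_2 = {lambda_2:.4f}")
```

The dense eigendecomposition is only for illustration; at the 3M-node scale reported in the abstract one would use a sparse eigensolver to obtain $\lambda_2$.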
Related papers
- Riemannian Optimization in Modular Systems [0.006486143522483092]
The backpropagation algorithm is instrumental in the success of neural networks. Despite its empirical success, a strong theoretical understanding of it is lacking. We develop a framework of "composable Riemannian modules" whose convergence properties can be quantified.
arXiv Detail & Related papers (2026-03-04T00:47:29Z)
- Connectivity-Guided Sparsification of 2-FWL GNNs: Preserving Full Expressivity with Improved Efficiency [15.330129666665927]
We propose Co-Sparsify, a connectivity-aware sparsification framework. Our key insight is that 3-node interactions are expressively necessary only within biconnected components. We prove that Co-Sparsify is as expressive as the 2-FWL test.
arXiv Detail & Related papers (2025-11-16T23:46:54Z)
- An Information-Minimal Geometry for Qubit-Efficient Optimization [0.0]
We recast qubit-efficient optimization as a geometry problem. The local-consistency problem coincides exactly with the Sherali-Adams level-2 polytope $\mathrm{SA}(2)$.
arXiv Detail & Related papers (2025-11-11T15:38:57Z)
- Comprehensive Validation of Replica Symmetry Breaking via Quantum Annealing: From Ground States to Topological Collapse [0.0]
We extend Giorgio Parisi's exact solution of the Sherrington-Kirkpatrick spin glass to 4000 spins. We probe both the emergence and breakdown of replica symmetry breaking. This comprehensive validation establishes quantum advantage for probing fundamental statistical mechanics in complex systems.
arXiv Detail & Related papers (2025-11-09T14:33:22Z)
- Trace Regularity PINNs: Enforcing $\mathrm{H}^{\frac{1}{2}}(\partial \Omega)$ for Boundary Data [0.48342038441006796]
We propose an enhanced physics-informed neural network (PINN). The Trace Regularity Physics-Informed Neural Network (TRPINN) enforces the boundary loss in the Sobolev-Slobodeckij norm $H^{1/2}(\partial\Omega)$. By incorporating the exact $H^{1/2}(\partial\Omega)$ norm, we show that the approximation converges to the true solution in the $H^1(\Omega)$ sense. (A minimal sketch of such a boundary-norm computation appears after this list.)
arXiv Detail & Related papers (2025-10-19T13:08:16Z)
- FFT-Accelerated Auxiliary Variable MCMC for Fermionic Lattice Models: A Determinant-Free Approach with $O(N\log N)$ Complexity [52.3171766248012]
We introduce a Markov Chain Monte Carlo (MCMC) algorithm that dramatically accelerates the simulation of quantum many-body systems. We validate our algorithm on benchmark quantum physics problems, accurately reproducing known theoretical results. Our work provides a powerful tool for large-scale probabilistic inference and opens avenues for physics-inspired generative models.
arXiv Detail & Related papers (2025-10-13T07:57:21Z)
- LLMs are Bayesian, in Expectation, not in Realization [0.0]
Large language models adapt to new tasks without parameter updates. Recent empirical findings reveal a fundamental contradiction: transformers systematically violate the martingale property. This violation challenges the theoretical foundations underlying uncertainty quantification in critical applications.
arXiv Detail & Related papers (2025-07-15T22:20:11Z)
- Tensor Decomposition Networks for Fast Machine Learning Interatomic Potential Computations [48.46721044282335]
We evaluate tensor decomposition networks (TDNs) on PubChemQCR, a newly curated molecular relaxation dataset containing 105 million DFT-calculated snapshots. Results show that TDNs achieve competitive performance with a dramatic speedup in computation.
arXiv Detail & Related papers (2025-07-01T18:46:27Z)
- Distributed Learning over Arbitrary Topology: Linear Speed-Up with Polynomial Transient Time [3.1789549088190414]
We study a distributed learning problem in which $n$ agents, each with potentially heterogeneous local data, collaboratively minimize the sum of their local cost functions via peer-to-peer communication. We propose a novel Tree Push-Pull method (STPP), which employs two trees extracted from a general communication graph to distribute both model parameters and topological parameters.
arXiv Detail & Related papers (2025-03-20T13:11:44Z)
- Constructive Universal Approximation and Finite Sample Memorization by Narrow Deep ReLU Networks [0.0]
We show that any dataset with $N$ distinct points in $\mathbb{R}^d$ and $M$ output classes can be exactly classified. We also prove a universal approximation theorem in $L^p(\Omega; \mathbb{R}^m)$ for any bounded domain. Our results offer a unified and interpretable framework connecting controllability, expressivity, and training dynamics in deep neural networks.
arXiv Detail & Related papers (2024-09-10T14:31:21Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Equivalence Between SE(3) Equivariant Networks via Steerable Kernels and Group Convolution [90.67482899242093]
A wide range of techniques have been proposed in recent years for designing neural networks for 3D data that are equivariant under rotation and translation of the input.
We provide an in-depth analysis of both methods and their equivalence and relate the two constructions to multiview convolutional networks.
We also derive new TFN non-linearities from our equivalence principle and test them on practical benchmark datasets.
arXiv Detail & Related papers (2022-11-29T03:42:11Z)
- On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network [55.56019538079826]
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms are restricted to the single-machine setting and thus cannot handle distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient tracking communication mechanism and two different gradient estimators.
arXiv Detail & Related papers (2022-06-30T05:29:52Z)
- Scalable Lipschitz Residual Networks with Convex Potential Flows [120.27516256281359]
We show that using convex potentials in a residual network gradient flow provides a built-in $1$-Lipschitz transformation.
A comprehensive set of experiments on CIFAR-10 demonstrates the scalability of our architecture and the benefit of our approach for $\ell_2$ provable defenses.
arXiv Detail & Related papers (2021-10-25T07:12:53Z)
- Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning, including deep, convolutional and recurrent neural networks, reinforcement learning, normalizing flows and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z)
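Referring back to the Trace Regularity PINNs entry above: a minimal sketch of an $H^{1/2}(\partial\Omega)$-type boundary loss, realized here as a quadrature estimate of the Gagliardo seminorm on the unit circle (i.e., $\Omega$ taken as the unit disk, so for $s = 1/2$ on the one-dimensional boundary the kernel exponent is $1 + 2s = 2$). The domain, the synthetic boundary data, the stand-in network trace, and the chord-distance quadrature are all assumptions for illustration; this is not the TRPINN authors' implementation.

```python
# Hedged sketch (not the TRPINN code): a discretized Gagliardo H^{1/2} seminorm
# of the boundary residual, usable as a boundary loss term in a PINN objective.
import numpy as np

def h_half_seminorm_sq(u_vals: np.ndarray, pts: np.ndarray, arc_weight: float) -> float:
    """Quadrature estimate of |u|^2 = double integral of |u(x)-u(y)|^2 / |x-y|^2 dS dS."""
    diff_u = u_vals[:, None] - u_vals[None, :]
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)            # exclude the singular diagonal x = y
    integrand = diff_u ** 2 / dist ** 2
    return float(integrand.sum()) * arc_weight ** 2

# Boundary residual r = u_pred - g on M equispaced points of the unit circle.
M = 256
theta = np.linspace(0.0, 2 * np.pi, M, endpoint=False)
pts = np.stack([np.cos(theta), np.sin(theta)], axis=-1)
g = np.sin(3 * theta)                                     # assumed boundary data
u_pred = np.sin(3 * theta) + 0.05 * np.cos(9 * theta)     # stand-in network trace
residual = u_pred - g

loss_boundary = h_half_seminorm_sq(residual, pts, arc_weight=2 * np.pi / M)
print(f"H^1/2-type boundary loss: {loss_boundary:.4e}")
```

In a PINN training loop this quantity would be added to the interior PDE-residual loss; the diagonal of the double sum is excluded because the Gagliardo integrand is singular at $x = y$.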
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.