Optimizing Parallel Schemes with Lyapunov Exponents and kNN-LLE Estimation
- URL: http://arxiv.org/abs/2601.13604v1
- Date: Tue, 20 Jan 2026 05:09:52 GMT
- Title: Optimizing Parallel Schemes with Lyapunov Exponents and kNN-LLE Estimation
- Authors: Mudassir Shams, Andrei Velichko, Bruno Carpentieri
- Abstract summary: We present a unified analytical-data-driven methodology for identifying, measuring, and reducing such instabilities in inverse parallel solvers. On the theoretical side, we derive stability and bifurcation characterizations of the underlying iterative maps. On the computational side, we introduce a micro-series pipeline based on kNN-driven estimation of the local largest Lyapunov exponent.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse parallel schemes remain indispensable tools for computing the roots of nonlinear systems, yet their dynamical behavior can be unexpectedly rich, ranging from strong contraction to oscillatory or chaotic transients depending on the choice of algorithmic parameters and initial states. A unified analytical-data-driven methodology for identifying, measuring, and reducing such instabilities in a family of uni-parametric inverse parallel solvers is presented in this study. On the theoretical side, we derive stability and bifurcation characterizations of the underlying iterative maps, identifying parameter regions associated with periodic or chaotic behavior. On the computational side, we introduce a micro-series pipeline based on kNN-driven estimation of the local largest Lyapunov exponent (LLE), applied to scalar time series derived from solver trajectories. The resulting sliding-window Lyapunov profiles provide fine-grained, real-time diagnostics of contractive or unstable phases and reveal transient behaviors not captured by coarse linearized analysis. Leveraging this correspondence, we introduce a Lyapunov-informed parameter selection strategy that identifies solver settings associated with stable behavior, particularly when the estimated LLE indicates persistent instability. Comprehensive experiments on ensembles of perturbed initial guesses demonstrate close agreement between the theoretical stability diagrams and empirical Lyapunov profiles, and show that the proposed adaptive mechanism significantly improves robustness. The study establishes micro-series Lyapunov analysis as a practical, interpretable tool for constructing self-stabilizing root-finding schemes and opens avenues for extending such diagnostics to higher-dimensional or noise-contaminated problems.
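The micro-series pipeline described in the abstract (delay-embed a scalar series derived from solver iterates, find nearest neighbours, and fit the growth rate of their separation) can be sketched in a few lines. The following is a minimal Rosenstein-style illustration, not the authors' implementation; the embedding dimension, Theiler window, window length, and the under-relaxed-Newton demo series are all illustrative assumptions.

```python
import numpy as np

def delay_embed(x, dim=2, tau=1):
    """Time-delay embedding of a scalar series into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def local_lle(x, dim=2, tau=1, theiler=2, horizon=5):
    """Rosenstein-style estimate of the largest Lyapunov exponent for one window.

    For each embedded point we find its nearest neighbour outside a Theiler
    window, track the mean log separation over `horizon` steps, and take the
    slope of that curve as the LLE (per iteration).
    """
    Y = delay_embed(np.asarray(x, dtype=float), dim, tau)
    n = len(Y)
    log_sep = np.zeros(horizon)
    counts = np.zeros(horizon)
    for i in range(n - horizon):
        d = np.linalg.norm(Y - Y[i], axis=1)
        d[max(0, i - theiler) : i + theiler + 1] = np.inf  # exclude temporal neighbours
        j = int(np.argmin(d))
        if j + horizon > n - 1:
            continue
        for k in range(horizon):
            sep = np.linalg.norm(Y[i + k] - Y[j + k])
            if sep > 0.0:
                log_sep[k] += np.log(sep)
                counts[k] += 1
    mean_log = log_sep / np.maximum(counts, 1)
    return float(np.polyfit(np.arange(horizon), mean_log, 1)[0])

def sliding_lle_profile(series, window=25, step=5, **kw):
    """LLE on overlapping windows: a real-time stability profile of a solver run."""
    return [local_lle(series[s : s + window], **kw)
            for s in range(0, len(series) - window + 1, step)]

# Illustrative micro-series: residuals of an under-relaxed Newton iteration
# on f(z) = z^2 - 2 (a stand-in for one trajectory of a parallel solver).
z, residuals = 5.0, []
for _ in range(60):
    z -= 0.8 * (z * z - 2.0) / (2.0 * z)
    residuals.append(abs(z * z - 2.0) + 1e-16)  # floor avoids log(0)
profile = sliding_lle_profile(np.log(residuals))
# Negative entries in `profile` flag contractive phases; positive entries
# would flag locally unstable (expanding) phases of the iteration.
```

On a chaotic series (e.g. the logistic map at r = 4, whose true LLE is ln 2) the same estimator returns a clearly positive value, which is the contrast the sliding-window profile exploits.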
Related papers
- Stability and Concentration in Nonlinear Inverse Problems with Block-Structured Parameters: Lipschitz Geometry, Identifiability, and an Application to Gaussian Splatting [0.552480439325792]
We develop an operator-theoretic framework for stability and statistical concentration in nonlinear inverse problems with block-structured parameters. Overall, the analysis characterizes operator-level limits for a broad class of high-dimensional nonlinear inverse problems arising in modern imaging and differentiable rendering.
arXiv Detail & Related papers (2026-02-10T05:11:06Z) - ODELoRA: Training Low-Rank Adaptation by Solving Ordinary Differential Equations [54.886931928255564]
Low-rank adaptation (LoRA) has emerged as a widely adopted parameter-efficient fine-tuning method in deep transfer learning. We propose a novel continuous-time optimization dynamic for LoRA factor matrices in the form of an ordinary differential equation (ODE). We show that ODELoRA achieves stable feature learning, a property that is crucial for training deep neural networks at different scales of problem dimensionality.
arXiv Detail & Related papers (2026-02-07T10:19:36Z) - The Procrustean Bed of Time Series: The Optimization Bias of Point-wise Loss [53.542743390809356]
This paper aims to provide a first-principles analysis of the Expectation of Optimization Bias (EOB). Our analysis reveals a fundamental paradox: the more deterministic and structured the time series, the more severe the bias induced by a point-wise loss function. We present a concrete solution that simultaneously achieves both principles via the DFT or DWT.
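As a hedged illustration of what a frequency-domain alternative to a point-wise loss can look like (the function names and the weighting scheme below are assumptions for this sketch, not the paper's method), the same squared error can be computed between DFT coefficients, where individual frequency bands can then be re-weighted:

```python
import numpy as np

def pointwise_mse(pred, target):
    """Standard point-wise loss: average squared error per time step."""
    return float(np.mean((np.asarray(pred) - np.asarray(target)) ** 2))

def dft_mse(pred, target, weights=None):
    """Squared error between real-FFT coefficients of two series.

    With uniform weights this is equivalent, up to normalization (Parseval's
    theorem), to the point-wise MSE; per-frequency weights let structured,
    deterministic components be emphasized, which is the lever a DFT- or
    DWT-based loss can use.
    """
    P = np.fft.rfft(np.asarray(pred, dtype=float))
    T = np.fft.rfft(np.asarray(target, dtype=float))
    w = np.ones(P.shape) if weights is None else np.asarray(weights, dtype=float)
    return float(np.mean(w * np.abs(P - T) ** 2))
```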
arXiv Detail & Related papers (2025-12-21T06:08:22Z) - Revisiting Zeroth-Order Optimization: Minimum-Variance Two-Point Estimators and Directionally Aligned Perturbations [57.179679246370114]
We identify the distribution of random perturbations that minimizes the estimator's variance as the perturbation stepsize tends to zero. Our findings reveal that such desired perturbations can align directionally with the true gradient, instead of maintaining a fixed length.
arXiv Detail & Related papers (2025-10-22T19:06:39Z) - Stabilization of nonlinear systems with unknown delays via delay-adaptive neural operator approximate predictors [6.093618731228799]
This work establishes the first rigorous stability guarantees for approximate predictors in delay-adaptive control of nonlinear systems. We show that neural operators, a flexible class of neural-network-based approximators, can achieve arbitrarily small approximation errors.
arXiv Detail & Related papers (2025-09-30T16:00:58Z) - Semi-parametric Functional Classification via Path Signatures Logistic Regression [1.210026603224224]
We propose Path Signatures Logistic Regression, a semi-parametric framework for classifying vector-valued functional data. Our results highlight the practical and theoretical benefits of integrating rough path theory into modern functional data analysis.
arXiv Detail & Related papers (2025-07-09T08:06:50Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Stochastic Nonlinear Control via Finite-dimensional Spectral Dynamic Embedding [20.43835169613882]
This paper proposes an approach, Spectral Dynamics Embedding Control (SDEC), to optimal control for nonlinear systems. It reveals an infinite-dimensional feature representation induced by the system's nonlinear dynamics, enabling a linear representation of the state-action value function. For practical implementation, this representation is approximated using finite-dimensional truncations.
arXiv Detail & Related papers (2023-04-08T04:23:46Z) - Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
arXiv Detail & Related papers (2022-01-21T02:46:57Z) - Contraction Theory for Nonlinear Stability Analysis and Learning-based Control: A Tutorial Overview [13.228663415967624]
Contraction theory is an analytical tool to study differential dynamics of a non-autonomous (i.e., time-varying) nonlinear system. It takes advantage of a superior property of exponential stability used in conjunction with the comparison lemma. This yields much-needed safety and stability guarantees for neural network-based control and estimation schemes.
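A one-line numerical check in the spirit of contraction theory (a sketch of the standard matrix-measure criterion, not code from the paper): if the logarithmic 2-norm of the system Jacobian is uniformly negative, trajectories of dx/dt = f(x) contract exponentially toward one another.

```python
import numpy as np

def log_norm_2(J):
    """Logarithmic 2-norm (matrix measure) of a Jacobian J: the largest
    eigenvalue of the symmetric part (J + J.T) / 2.  A uniformly negative
    value along trajectories certifies exponential contraction of
    dx/dt = f(x)."""
    return float(np.max(np.linalg.eigvalsh((J + J.T) / 2.0)))

# Example: a damped rotation; the skew-symmetric part does not affect
# contraction, so only the symmetric (dissipative) part matters.
J = np.array([[-1.0, 0.5],
              [-0.5, -1.0]])
rate = log_norm_2(J)  # -1.0: trajectories contract at least as fast as e^{-t}
```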
arXiv Detail & Related papers (2021-10-01T23:03:21Z) - The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of 'invariance under coarse-graining'.
This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time-series analysis of second- or higher-order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z) - Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z) - Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem [27.09339991866556]
We show that model-free methods seek an optimal control for an unknown dynamical system by directly searching over the corresponding space of controllers.
We take a step towards demystifying the performance and efficiency of such methods by focusing on the gradient-flow dynamics over the set of stabilizing feedback gains, and a similar result holds for the forward discretization of the ODE.
arXiv Detail & Related papers (2019-12-26T16:56:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.