Stable-by-Design Neural Network-Based LPV State-Space Models for System Identification
- URL: http://arxiv.org/abs/2510.24757v1
- Date: Tue, 21 Oct 2025 10:25:54 GMT
- Title: Stable-by-Design Neural Network-Based LPV State-Space Models for System Identification
- Authors: Ahmet Eren Sertbaş, Tufan Kumbasar
- Abstract summary: We propose a neural network-based state-space model that simultaneously learns latent states and internal scheduling variables. The state-transition matrix is guaranteed to be stable through a Schur-based parameterization. The proposed NN-SS is evaluated on benchmark nonlinear systems, and the results demonstrate that the model consistently matches or surpasses classical subspace identification methods.
- Score: 6.5745172279769255
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurate modeling of nonlinear systems is essential for reliable control, yet conventional identification methods often struggle to capture latent dynamics while maintaining stability. We propose a \textit{stable-by-design LPV neural network-based state-space} (NN-SS) model that simultaneously learns latent states and internal scheduling variables directly from data. The state-transition matrix, generated by a neural network using the learned scheduling variables, is guaranteed to be stable through a Schur-based parameterization. The architecture combines an encoder for initial state estimation with a state-space representer network that constructs the full set of scheduling-dependent system matrices. For training the NN-SS, we develop a framework that integrates multi-step prediction losses with a state-consistency regularization term, ensuring robustness against drift and improving long-horizon prediction accuracy. The proposed NN-SS is evaluated on benchmark nonlinear systems, and the results demonstrate that the model consistently matches or surpasses classical subspace identification methods and recent gradient-based approaches. These findings highlight the potential of stability-constrained neural LPV identification as a scalable and reliable framework for modeling complex nonlinear systems.
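The stability guarantee in the abstract rests on parameterizing the state-transition matrix so that all of its eigenvalues lie strictly inside the unit circle (Schur stability). As a minimal sketch of the idea, not the paper's exact parameterization, one simple construction rescales an unconstrained matrix (e.g. a network output) by its spectral norm; the names `schur_stable` and `eps` are illustrative:

```python
import numpy as np

def schur_stable(W: np.ndarray, eps: float = 1e-3) -> np.ndarray:
    """Map an unconstrained square matrix W to a Schur-stable matrix A.

    Scaling W so its spectral norm is below 1 bounds every eigenvalue
    inside the unit circle, since rho(A) <= ||A||_2 < 1.
    """
    sigma = np.linalg.norm(W, 2)          # largest singular value of W
    scale = (1.0 - eps) / (1.0 + sigma)   # keeps ||A||_2 <= (1 - eps) < 1
    return scale * W

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)) * 5.0     # stand-in for a network output
A = schur_stable(W)
rho = max(abs(np.linalg.eigvals(A)))
print(rho < 1.0)  # → True: spectral radius strictly inside the unit circle
```

Bounding the spectral norm is sufficient but conservative, since rho(A) <= ||A||_2; a parameterization built directly on the Schur form can cover a larger set of stable matrices.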
Related papers
- LILAD: Learning In-context Lyapunov-stable Adaptive Dynamics Models [4.66260462241022]
LILAD is a novel framework for system identification that jointly guarantees stability and adaptability. We evaluate LILAD on benchmark autonomous systems and demonstrate that it outperforms adaptive, robust, and non-adaptive baselines in predictive accuracy.
arXiv Detail & Related papers (2025-11-26T19:20:49Z)
- Online Bayesian Experimental Design for Partially Observed Dynamical Systems [10.774974720491565]
We develop a principled framework for optimizing data collection in dynamical systems with partial observability. Our framework successfully handles both partial observability and online inference.
arXiv Detail & Related papers (2025-11-06T14:29:05Z)
- Designing Robust Software Sensors for Nonlinear Systems via Neural Networks and Adaptive Sliding Mode Control [2.884893167166808]
This paper presents a novel approach to designing software sensors for nonlinear dynamical systems. Unlike traditional model-based observers that rely on explicit transformations or linearization, the proposed framework integrates neural networks with adaptive Sliding Mode Control (SMC). The training methodology leverages the system's governing equations as a physics-based constraint, enabling observer synthesis without access to ground-truth state trajectories.
arXiv Detail & Related papers (2025-07-09T13:06:58Z)
- PINN-Obs: Physics-Informed Neural Network-Based Observer for Nonlinear Dynamical Systems [2.884893167166808]
This paper introduces a novel Adaptive Physics-Informed Neural Network-based Observer (PINN-Obs) for accurate state estimation in nonlinear systems. Unlike traditional model-based observers, which require explicit system transformations or linearization, the proposed framework directly integrates system dynamics and sensor data into a physics-informed learning process.
arXiv Detail & Related papers (2025-07-09T10:09:45Z)
- Certified Neural Approximations of Nonlinear Dynamics [51.01318247729693]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z)
- Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems [49.819436680336786]
We propose an efficient transformed Gaussian process state-space model (ETGPSSM) for scalable and flexible modeling of high-dimensional, non-stationary dynamical systems. Specifically, our ETGPSSM integrates a single shared GP with input-dependent normalizing flows, yielding an expressive implicit process prior that captures complex, non-stationary transition dynamics. Our ETGPSSM outperforms existing GPSSMs and neural network-based SSMs in terms of computational efficiency and accuracy.
arXiv Detail & Related papers (2025-03-24T03:19:45Z)
- Self-Organizing Recurrent Stochastic Configuration Networks for Nonstationary Data Modelling [3.8719670789415925]
Recurrent stochastic configuration networks (RSCNs) are a class of randomized models that have shown promise in modelling nonlinear dynamics.
This paper develops a self-organizing version of RSCNs, termed SORSCNs, to enhance the network's continuous learning ability for modelling nonstationary data.
arXiv Detail & Related papers (2024-10-14T01:28:25Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Robust stabilization of polytopic systems via fast and reliable neural network-based approximations [2.2299983745857896]
We consider the design of fast and reliable neural network (NN)-based approximations of traditional stabilizing controllers for linear systems with polytopic uncertainty.
We certify the closed-loop stability and performance of a linear uncertain system when a trained rectified linear unit (ReLU)-based approximation replaces such traditional controllers.
arXiv Detail & Related papers (2022-04-27T21:58:07Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- KalmanNet: Neural Network Aided Kalman Filtering for Partially Known Dynamics [84.18625250574853]
We present KalmanNet, a real-time state estimator that learns from data to carry out Kalman filtering under non-linear dynamics.
We numerically demonstrate that KalmanNet overcomes nonlinearities and model mismatch, outperforming classic filtering methods.
arXiv Detail & Related papers (2021-07-21T12:26:46Z)
- Stabilizing Equilibrium Models by Jacobian Regularization [151.78151873928027]
Deep equilibrium networks (DEQs) are a new class of models that eschews traditional depth in favor of finding the fixed point of a single nonlinear layer.
We propose a regularization scheme for DEQ models that explicitly regularizes the Jacobian of the fixed-point update equations to stabilize the learning of equilibrium models.
We show that this regularization adds only minimal computational cost, significantly stabilizes the fixed-point convergence in both forward and backward passes, and scales well to high-dimensional, realistic domains.
arXiv Detail & Related papers (2021-06-28T00:14:11Z)
- Recurrent Equilibrium Networks: Flexible Dynamic Models with Guaranteed Stability and Robustness [3.2872586139884623]
This paper introduces recurrent equilibrium networks (RENs) for applications in machine learning, system identification and control.
RENs are parameterized directly by a vector in R^N, i.e. stability and robustness are ensured without parameter constraints.
The paper also presents applications in data-driven nonlinear observer design and control with stability guarantees.
arXiv Detail & Related papers (2021-04-13T05:09:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.