Kernel-Based Sparse Additive Nonlinear Model Structure Detection through a Linearization Approach
- URL: http://arxiv.org/abs/2508.01453v1
- Date: Sat, 02 Aug 2025 18:02:44 GMT
- Title: Kernel-Based Sparse Additive Nonlinear Model Structure Detection through a Linearization Approach
- Authors: Sadegh Ebrahimkhani, John Lataire
- Abstract summary: We propose a data-driven approach to simplify a class of continuous-time NL system models. For sparse additive NL models, our method identifies the number of NL subterms and their corresponding input spaces.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The choice of parameterization in Nonlinear (NL) system models greatly affects the quality of the estimated model. Overly complex models can be impractical and hard to interpret, necessitating data-driven methods for simpler and more accurate representations. In this paper, we propose a data-driven approach to simplify a class of continuous-time NL system models using linear approximations around varying operating points. Specifically, for sparse additive NL models, our method identifies the number of NL subterms and their corresponding input spaces. Under small-signal operation, we approximate the unknown NL system as a trajectory-scheduled Linear Parameter-Varying (LPV) system, with LPV coefficients representing the gradient of the NL function and indicating input sensitivity. Using this sensitivity measure, we determine the NL system's structure through LPV model reduction by identifying non-zero LPV coefficients and selecting scheduling parameters. We introduce two sparse estimators within a vector-valued Reproducing Kernel Hilbert Space (RKHS) framework to estimate the LPV coefficients while preserving their structural relationships. The structure of the sparse additive NL model is then determined by detecting non-zero elements in the gradient vector (LPV coefficients) and the Hessian matrix (Jacobian of the LPV coefficients). We propose two computationally tractable RKHS-based estimators for this purpose. The sparsified Hessian matrix reveals the NL model's structure, with numerical simulations confirming the approach's effectiveness.
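As a concrete illustration of the pipeline the abstract describes (fit a surrogate to small-signal data around varying operating points, read its gradient as the LPV coefficients, then sparsify the Hessian to expose the additive structure), the sketch below substitutes a plain Gaussian-kernel ridge regression for the paper's vector-valued RKHS estimators. The toy target `f`, the kernel width `sigma`, the ridge weight `lam`, and the threshold `tau` are all illustrative assumptions, not the authors' choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse additive target: f(x) = g1(x1) + g2(x2, x3); x4 is inactive.
def f(X):
    return np.sin(2 * X[:, 0]) + X[:, 1] * np.exp(X[:, 2])

d, n = 4, 400
X = rng.uniform(-1, 1, size=(n, d))           # varying operating points
y = f(X) + 0.01 * rng.standard_normal(n)      # small-signal measurements

sigma, lam = 0.7, 1e-3                        # assumed hyperparameters

def gram(A, B):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

# Kernel ridge surrogate: f_hat(x) = sum_i alpha_i * k(x, x_i).
alpha = np.linalg.solve(gram(X, X) + lam * np.eye(n), y)

def hessian(x):
    """Closed-form Hessian of the Gaussian-kernel expansion at point x."""
    diff = X - x                              # rows are x_i - x
    k = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))
    outer = np.einsum('i,ij,il->jl', alpha * k, diff, diff) / sigma ** 4
    return outer - (alpha * k).sum() / sigma ** 2 * np.eye(d)

# Average |Hessian| over random evaluation points: cross-derivatives between
# additively separated inputs vanish, so the support reveals the structure.
H = np.mean([np.abs(hessian(x)) for x in rng.uniform(-1, 1, (200, d))], axis=0)
tau = 0.1 * H.max()                           # illustrative threshold
print((H > tau).astype(int))                  # sparsified Hessian support
```

Connected components of the thresholded support then group the inputs into subterms: for this toy target one expects the blocks {x1} and {x2, x3}, with the inactive x4 dropping out entirely.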
Related papers
- Efficient identification of linear, parameter-varying, and nonlinear systems with noise models [1.6385815610837167]
We present a general system identification procedure capable of estimating a broad spectrum of state-space dynamical models. We show that for this general class of model structures, the model dynamics can be separated into a deterministic process and a noise part. We parameterize the involved nonlinear functional relations by means of artificial neural networks (ANNs).
arXiv Detail & Related papers (2025-04-16T11:23:30Z) - An Iterative Bayesian Approach for System Identification based on Linear Gaussian Models [86.05414211113627]
We tackle the problem of system identification, where we select inputs, observe the corresponding outputs from the true system, and optimize the parameters of our model to best fit the data. We propose a flexible and computationally tractable methodology that is compatible with any system and parametric family of models.
arXiv Detail & Related papers (2025-01-28T01:57:51Z) - GP-FL: Model-Based Hessian Estimation for Second-Order Over-the-Air Federated Learning [52.295563400314094]
Second-order methods are widely adopted to improve the convergence rate of learning algorithms. This paper introduces a novel second-order FL framework tailored for wireless channels.
arXiv Detail & Related papers (2024-12-05T04:27:41Z) - Deep polytopic autoencoders for low-dimensional linear parameter-varying approximations and nonlinear feedback design [0.9187159782788578]
Polytopic autoencoders provide low-dimensional parametrizations of states in a polytope. For nonlinear PDEs, this is readily applied to low-dimensional linear parameter-varying (LPV) approximations.
arXiv Detail & Related papers (2024-03-26T18:57:56Z) - Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z) - Efficient Interpretable Nonlinear Modeling for Multiple Time Series [5.448070998907116]
This paper proposes an efficient nonlinear modeling approach for multiple time series.
It incorporates nonlinear interactions among different time-series variables.
Experimental results show that the proposed algorithm improves the identification of the support of the VAR coefficients in a parsimonious manner.
arXiv Detail & Related papers (2023-09-29T11:42:59Z) - Active-Learning-Driven Surrogate Modeling for Efficient Simulation of
Parametric Nonlinear Systems [0.0]
In the absence of governing equations, we need to construct the parametric reduced-order surrogate model in a non-intrusive fashion.
Our work provides a non-intrusive optimality criterion to efficiently populate the parameter snapshots.
We propose an active-learning-driven surrogate model using kernel-based shallow neural networks.
arXiv Detail & Related papers (2023-06-09T18:01:14Z) - Gaussian process regression and conditional Karhunen-Loève models
for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z) - Designing Universal Causal Deep Learning Models: The Case of Infinite-Dimensional Dynamical Systems from Stochastic Analysis [7.373617024876726]
Several non-linear operators in analysis depend on a temporal structure which is not leveraged by contemporary neural operators. This paper introduces a deep learning model-design framework that takes suitable infinite-dimensional linear metric spaces as inputs. We show that our framework can uniformly approximate, on compact sets and across arbitrary finite-time horizons, Hölder or smooth trace class operators.
arXiv Detail & Related papers (2022-10-24T14:43:03Z) - Probabilistic Circuits for Variational Inference in Discrete Graphical
Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating gradients of the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPNs).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs, with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Convergence and sample complexity of gradient methods for the model-free
linear quadratic regulator problem [27.09339991866556]
Model-free reinforcement learning searches for an optimal control action for an unknown dynamical system by directly searching over the corresponding space of controllers.
We take a step towards demystifying the performance and efficiency of such methods by focusing on the gradient-flow dynamics over the set of stabilizing feedback gains; a similar result holds for the forward discretization of this ODE (see the model-based sketch after this list).
arXiv Detail & Related papers (2019-12-26T16:56:59Z)
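The gradient-flow analysis in the last entry can be mimicked with a short model-based sketch (the paper itself treats the model-free case, where the gradient is estimated from cost samples rather than computed from the system matrices). The cost gradient below uses the standard two-Lyapunov-solve formula for continuous-time LQR; the plant matrices, initial gain, and step size are made-up values.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy continuous-time plant and LQR weights (illustrative values).
A = np.array([[0.0, 1.0], [-1.0, 0.5]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
Sigma0 = np.eye(2)                       # second moment of the initial state

def cost_and_grad(K):
    """J(K) = trace(P Sigma0) for the closed loop A - B K, plus its gradient."""
    Acl = A - B @ K
    # Acl^T P + P Acl = -(Q + K^T R K)
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # Acl L + L Acl^T = -Sigma0
    L = solve_continuous_lyapunov(Acl, -Sigma0)
    return np.trace(P @ Sigma0), 2 * (R @ K - B.T @ P) @ L

K = np.array([[1.0, 1.0]])               # a stabilizing initial gain
for _ in range(500):                     # forward-Euler discretization of the flow
    J, G = cost_and_grad(K)
    K = K - 0.02 * G
print(J, K)
```

Keeping the step size small preserves closed-loop stability along the iterates, which is exactly the regime the gradient-flow analysis addresses.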