Decoupling multivariate functions using a nonparametric filtered tensor
decomposition
- URL: http://arxiv.org/abs/2205.11153v1
- Date: Mon, 23 May 2022 09:34:17 GMT
- Title: Decoupling multivariate functions using a nonparametric filtered tensor
decomposition
- Authors: Jan Decuyper, Koen Tiels, Siep Weiland, Mark C. Runacres and Johan
Schoukens
- Abstract summary: Decoupling techniques aim at providing an alternative representation of the nonlinearity.
The so-called decoupled form is often a more efficient parameterisation of the relationship while being highly structured, favouring interpretability.
In this work, two new algorithms based on filtered tensor decompositions of first-order derivative information are introduced.
- Score: 0.29360071145551075
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multivariate functions emerge naturally in a wide variety of data-driven
models. Popular choices are expressions in the form of basis expansions or
neural networks. While highly effective, the resulting functions tend to be
hard to interpret, in part because of the large number of required parameters.
Decoupling techniques aim at providing an alternative representation of the
nonlinearity. The so-called decoupled form is often a more efficient
parameterisation of the relationship while being highly structured, favouring
interpretability. In this work, two new algorithms, based on filtered tensor
decompositions of first-order derivative information, are introduced. The method
returns nonparametric estimates of smooth decoupled functions. Direct
applications are found, inter alia, in the fields of nonlinear system
identification and machine learning.
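The tensor-based decoupling idea can be illustrated as follows: for a decoupled function f(x) = W g(V^T x), the Jacobian at a point x_k is W diag(g'(V^T x_k)) V^T, so stacking Jacobians evaluated at N operating points yields a third-order tensor with exact CP (canonical polyadic) structure. Below is a minimal numpy sketch of this noise-free, unfiltered case; the ground-truth W, V, the branch functions g_i, and the plain ALS solver are illustrative assumptions, not the paper's filtered algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoupled ground truth f(x) = W g(V^T x): n inputs, m outputs,
# r univariate branches g_i. W, V and the g_i are illustrative choices.
n, m, r, N = 2, 2, 2, 200
W = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
g_prime = [lambda z: 3.0 * z**2, np.cos]    # derivatives of z^3 and sin(z)

# Jacobian at x_k: J_k = W diag(g'(V^T x_k)) V^T. Stacking the N Jacobians
# gives a third-order tensor of exact CP rank r with factors W, V, and the
# matrix H of sampled branch derivatives.
X = rng.standard_normal((N, n))
Z = X @ V
H = np.column_stack([g_prime[i](Z[:, i]) for i in range(r)])
T = np.einsum('ap,bp,kp->abk', W, V, H)     # shape (m, n, N)

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (J x r) and C (K x r)."""
    return np.einsum('jp,kp->jkp', B, C).reshape(-1, B.shape[1])

def cp_als(T, r, iters=500):
    """Plain alternating-least-squares CP decomposition of a 3-way tensor."""
    m, n, N = T.shape
    A = rng.standard_normal((m, r))
    B = rng.standard_normal((n, r))
    C = rng.standard_normal((N, r))
    T1 = T.reshape(m, -1)                       # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(n, -1)    # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(N, -1)    # mode-3 unfolding
    for _ in range(iters):
        A = T1 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T2 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T3 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

A, B, C = cp_als(T, r)
T_hat = np.einsum('ap,bp,kp->abk', A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
print(f"relative CP reconstruction error: {rel_err:.2e}")
```

The recovered factors A and B estimate W and V (up to scaling and permutation), while the columns of C are nonparametric samples of the branch derivatives g_i'; the paper's contribution lies in filtering/smoothing this third factor, which the sketch above omits.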
Related papers
- Physics-informed AI and ML-based sparse system identification algorithm for discovery of PDE's representing nonlinear dynamic systems [0.0]
The proposed method is demonstrated to discover various differential equations at various noise levels, including three-dimensional, fourth-order, and stiff equations.
The parameter estimation converges accurately to the true values with a small coefficient of variation, suggesting robustness to the noise.
arXiv Detail & Related papers (2024-10-13T21:48:51Z)
- Fast and interpretable Support Vector Classification based on the truncated ANOVA decomposition [0.0]
Support Vector Machines (SVMs) are an important tool for performing classification on scattered data.
We propose solving SVMs in primal form using feature maps based on trigonometric functions or wavelets.
arXiv Detail & Related papers (2024-02-04T10:27:42Z)
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Functional Nonlinear Learning [0.0]
We propose a functional nonlinear learning (FunNoL) method to represent multivariate functional data in a lower-dimensional feature space.
We show that FunNoL provides satisfactory curve classification and reconstruction regardless of data sparsity.
arXiv Detail & Related papers (2022-06-22T23:47:45Z)
- Adjoint-aided inference of Gaussian process driven differential equations [0.8257490175399691]
We show how the adjoint of a linear system can be used to efficiently infer forcing functions modelled as GPs.
We demonstrate the approach on systems of both ordinary and partial differential equations.
arXiv Detail & Related papers (2022-02-09T17:35:14Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is generating physics-based training data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Learning to Guide Random Search [111.71167792453473]
We consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold.
We develop an online learning approach that learns this manifold while performing the optimization.
We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems.
arXiv Detail & Related papers (2020-04-25T19:21:14Z)
- Flexible Bayesian Nonlinear Model Configuration [10.865434331546126]
Linear, or simple parametric, models are often not sufficient to describe complex relationships between input variables and a response.
We introduce a flexible approach for the construction and selection of highly flexible nonlinear parametric regression models.
A genetically modified mode jumping Markov chain Monte Carlo algorithm is adopted to perform Bayesian inference.
arXiv Detail & Related papers (2020-03-05T21:20:55Z)
- Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers the group action is proposed for principal component analysis and k-means clustering.
arXiv Detail & Related papers (2019-06-05T07:15:17Z)
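As a toy illustration of the group-invariance idea in the last entry: averaging a feature map over a finite group's orbit yields a code that is identical for an input and its group-transformed versions, so a linear classifier on top of it is invariant by construction. This is a deliberate simplification; the cited paper uses tensor product representations, and the feature map `phi` below is a made-up example:

```python
import numpy as np

def phi(patch):
    """Toy nonlinear feature map (illustrative): pixels plus their squares."""
    v = patch.ravel()
    return np.concatenate([v, v**2])

def invariant_feature(patch):
    """Average phi over the orbit of the patch under 90-degree rotations."""
    orbit = [np.rot90(patch, k) for k in range(4)]
    return np.mean([phi(p) for p in orbit], axis=0)

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 4))

# The averaged code is the same for a patch and any of its rotations,
# because both average phi over the same orbit.
f0 = invariant_feature(x)
f1 = invariant_feature(np.rot90(x))
print("max deviation under rotation:", np.abs(f0 - f1).max())
```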
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.