TMPNN: High-Order Polynomial Regression Based on Taylor Map
Factorization
- URL: http://arxiv.org/abs/2307.16105v1
- Date: Sun, 30 Jul 2023 01:52:00 GMT
- Title: TMPNN: High-Order Polynomial Regression Based on Taylor Map
Factorization
- Authors: Andrei Ivanov, Stefan Maria Ailuro
- Abstract summary: The paper presents a method for constructing a high-order regression based on the Taylor map factorization.
By benchmarking on UCI open access datasets, we demonstrate that the proposed method performs comparably to state-of-the-art regression methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Polynomial regression is widely used and can help to express nonlinear
patterns. However, considering very high polynomial orders may lead to
overfitting and poor extrapolation ability for unseen data. The paper presents
a method for constructing a high-order polynomial regression based on the
Taylor map factorization. This method naturally implements multi-target
regression and can capture internal relationships between targets.
Additionally, we introduce an approach for model interpretation in the form of
systems of differential equations. By benchmarking on UCI open access datasets,
Feynman symbolic regression datasets, and Friedman-1 datasets, we demonstrate
that the proposed method performs comparably to state-of-the-art regression
methods and outperforms them on specific tasks.
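The core idea of the paper — a low-order polynomial map applied repeatedly so that the composition is a very high-order polynomial with few weights — can be illustrated in a minimal form. The sketch below is a 1-D pure-Python toy, not the authors' implementation: the three-weight second-order map, the step count, and the finite-difference training loop are all illustrative assumptions standing in for the real TMPNN layers and backpropagation.

```python
def taylor_map(x, w):
    # one second-order map step: x -> w0 + w1*x + w2*x^2
    return w[0] + w[1] * x + w[2] * x * x

def predict(x, w, steps=3):
    # iterating the map composes a polynomial of effective order
    # 2**steps in x while the model keeps only three weights; each
    # step can also be read as one explicit integration step of an
    # ODE dx/dt = P(x), which is how the paper's interpretation in
    # terms of differential equations arises
    for _ in range(steps):
        x = taylor_map(x, w)
    return x

def mse(w, data, steps=3):
    return sum((predict(x, w, steps) - y) ** 2 for x, y in data) / len(data)

def fit(data, steps=3, lr=1e-3, iters=2000, eps=1e-6):
    # toy finite-difference gradient descent; a stand-in for the
    # gradient-based training a real implementation would use
    w = [0.0, 1.0, 0.0]  # start at the identity map
    for _ in range(iters):
        base = mse(w, data, steps)
        grad = []
        for i in range(len(w)):
            wp = list(w)
            wp[i] += eps
            grad.append((mse(wp, data, steps) - base) / eps)
        w = [wi - lr * g for wi, g in zip(w, grad)]
    return w
```

In the multi-target setting the paper describes, the scalar state x becomes a vector holding inputs and targets together, and the weights become matrices/tensors, which is what lets the map capture relationships between targets.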
Related papers
- Induced Covariance for Causal Discovery in Linear Sparse Structures [55.2480439325792]
Causal models seek to unravel the cause-effect relationships among variables from observed data.
This paper introduces a novel causal discovery algorithm designed for settings in which variables exhibit linearly sparse relationships.
arXiv Detail & Related papers (2024-10-02T04:01:38Z)
- GINN-LP: A Growing Interpretable Neural Network for Discovering Multivariate Laurent Polynomial Equations [1.1142444517901018]
We propose GINN-LP, an interpretable neural network, to discover the form of a Laurent Polynomial equation.
To the best of our knowledge, this is the first neural network that can discover arbitrary terms without any prior information on the order.
We show that GINN-LP outperforms the state-of-the-art symbolic regression methods on benchmark datasets.
arXiv Detail & Related papers (2023-12-18T03:44:29Z) - An Efficient Data Analysis Method for Big Data using Multiple-Model
Linear Regression [4.085654010023149]
This paper introduces a new data analysis method for big data using a newly defined regression model named multiple-model linear regression (MMLR).
The proposed data analysis method is shown to be more efficient and flexible than other regression-based methods.
arXiv Detail & Related papers (2023-08-24T10:20:15Z) - Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood
Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
arXiv Detail & Related papers (2023-06-05T21:08:34Z) - Graph Polynomial Convolution Models for Node Classification of
Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolution and learning directly from the adjacency matrix for node classification.
We show that the resulting model leads to new graphs and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification of non-homophilous graphs.
arXiv Detail & Related papers (2022-09-12T04:46:55Z) - Robust Regularized Low-Rank Matrix Models for Regression and
Classification [14.698622796774634]
We propose a framework for matrix variate regression models based on a rank constraint, vector regularization (e.g., sparsity), and a general loss function.
We show that the algorithm is guaranteed to converge, and all accumulation points of the algorithm have estimation errors in the order of $O(1/\sqrt{n})$ asymptotically, substantially attaining the minimax rate.
arXiv Detail & Related papers (2022-05-14T18:03:48Z) - A Hypergradient Approach to Robust Regression without Correspondence [85.49775273716503]
We consider a variant of regression problem, where the correspondence between input and output data is not available.
Most existing methods are only applicable when the sample size is small.
We propose a new computational framework -- ROBOT -- for the shuffled regression problem.
arXiv Detail & Related papers (2020-11-30T21:47:38Z) - Estimation of Switched Markov Polynomial NARX models [75.91002178647165]
We identify a class of models for hybrid dynamical systems characterized by nonlinear autoregressive exogenous (NARX) components.
The proposed approach is demonstrated on an SMNARX problem composed of three nonlinear sub-models with specific regressors.
arXiv Detail & Related papers (2020-09-29T15:00:47Z) - Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z) - AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph
modularity [8.594811303203581]
We present an improved method for symbolic regression that seeks to fit data to formulas that are Pareto-optimal.
It improves on the previous state-of-the-art by typically being orders of magnitude more robust toward noise and bad data.
We develop a method for discovering generalized symmetries from gradient properties of a neural network fit.
arXiv Detail & Related papers (2020-06-18T18:01:19Z) - PIANO: A Fast Parallel Iterative Algorithm for Multinomial and Sparse
Multinomial Logistic Regression [0.0]
We show that PIANO can be easily extended to solve the Sparse Multinomial Logistic Regression problem.
We also prove that PIANO converges to a stationary point of the Multinomial and the Sparse Multinomial Logistic Regression problems.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.