Bagged Polynomial Regression and Neural Networks
- URL: http://arxiv.org/abs/2205.08609v1
- Date: Tue, 17 May 2022 19:55:56 GMT
- Title: Bagged Polynomial Regression and Neural Networks
- Authors: Sylvia Klosin and Jaume Vives-i-Bastida
- Abstract summary: Series and polynomial regression are able to approximate the same function classes as neural networks.
We propose the use of bagged polynomial regression (BPR) as an attractive alternative to neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Series and polynomial regression are able to approximate the same function
classes as neural networks. However, these methods are rarely used in practice,
although they offer more interpretability than neural networks. In this paper,
we show that a potential reason for this is the slow convergence rate of
polynomial regression estimators and propose the use of bagged polynomial
regression (BPR) as an attractive alternative to neural networks.
Theoretically, we derive new finite sample and asymptotic $L^2$ convergence
rates for series estimators. We show that the rates can be improved in smooth
settings by splitting the feature space and generating polynomial features
separately for each partition. Empirically, we show that our proposed
estimator, the BPR, can perform as well as more complex models with more
parameters. Our estimator also performs close to state-of-the-art prediction
methods in the benchmark MNIST handwritten digit dataset.
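The core recipe the abstract describes, bootstrap resampling plus polynomial least squares, can be sketched in a few lines. This is a minimal 1-D illustration of bagged polynomial regression, not the paper's estimator: the paper additionally splits the feature space and builds polynomial features per partition, which this sketch omits.

```python
import numpy as np

def poly_features(x, degree):
    # Vandermonde-style polynomial features 1, x, ..., x^degree for 1-D input.
    return np.vander(x, degree + 1, increasing=True)

def fit_bpr(x, y, degree=3, n_bags=25, seed=None):
    """Bagged polynomial regression: bootstrap the data, fit a
    least-squares polynomial to each resample, keep all fits."""
    rng = np.random.default_rng(seed)
    n = len(x)
    coefs = []
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)          # bootstrap resample
        X = poly_features(x[idx], degree)
        beta, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
        coefs.append(beta)
    return np.stack(coefs)                        # (n_bags, degree+1)

def predict_bpr(coefs, x, degree=3):
    # Average the predictions of the bagged polynomial fits.
    X = poly_features(x, degree)
    return (X @ coefs.T).mean(axis=1)

# toy usage: recover a smooth function from noisy samples
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = np.sin(3 * x) + 0.1 * rng.normal(size=200)
coefs = fit_bpr(x, y, degree=5, n_bags=50, seed=1)
y_hat = predict_bpr(coefs, x, degree=5)
print(np.mean((y_hat - y) ** 2))  # small in-sample error
```

Averaging over bootstrap fits reduces the variance of the polynomial estimator, which is the mechanism behind the improved convergence behavior the abstract claims.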
Related papers
- The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are widely used for prediction tasks.
In this paper we examine the use of convex recovery models for neural networks.
We show that the stationary points of the non-convex training objective can be characterized as global optima of subsampled convex programs.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - GINN-LP: A Growing Interpretable Neural Network for Discovering
Multivariate Laurent Polynomial Equations [1.1142444517901018]
We propose GINN-LP, an interpretable neural network, to discover the form of a Laurent Polynomial equation.
To the best of our knowledge, this is the first neural network that can discover arbitrary terms without any prior information on the order.
We show that GINN-LP outperforms the state-of-the-art symbolic regression methods on datasets.
arXiv Detail & Related papers (2023-12-18T03:44:29Z) - Structured Radial Basis Function Network: Modelling Diversity for
Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important when forecasting nonstationary processes or processes with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate this tessellation and approximate the multiple hypotheses target distribution.
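As background for this entry, a plain radial basis function network is just a linear model on Gaussian basis features. The sketch below is a minimal single-hypothesis RBF regression, not the paper's structured multi-hypothesis ensemble; the centers and width here are fixed by hand for illustration.

```python
import numpy as np

def rbf_design(x, centers, width):
    # Gaussian basis evaluated at each (sample, center) pair.
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=200)

centers = np.linspace(0, 1, 15)               # fixed, evenly spaced centers
Phi = rbf_design(x, centers, width=0.1)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output weights

y_hat = rbf_design(x, centers, 0.1) @ w
print(np.mean((y_hat - y) ** 2))              # small in-sample error
```

The structured model in the paper extends this idea by tessellating the input space and attaching multiple predictors, one per hypothesis, rather than fitting a single set of output weights.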
arXiv Detail & Related papers (2023-09-02T01:27:53Z) - Bayesian polynomial neural networks and polynomial neural ordinary
differential equations [4.550705124365277]
Symbolic regression with neural networks and neural ordinary differential equations (ODEs) are powerful approaches for equation recovery of many science and engineering problems.
These methods provide point estimates for the model parameters and are currently unable to accommodate noisy data.
We address this challenge by developing and validating the following inference methods: the Laplace approximation, Markov Chain Monte Carlo sampling methods, and Bayesian variational inference.
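Of the three inference methods listed, the Laplace approximation is the easiest to sketch. For Bayesian polynomial regression with a Gaussian prior and Gaussian noise the posterior is itself Gaussian, so the Laplace approximation (MAP estimate plus inverse-Hessian covariance) is exact in this toy case; the variances chosen below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 1.0 + 2.0 * x - 0.5 * x ** 2 + 0.05 * rng.normal(size=100)

X = np.vander(x, 3, increasing=True)   # features 1, x, x^2
sigma2, tau2 = 0.05 ** 2, 10.0         # noise and prior variances (assumed)

# MAP estimate = ridge solution; H is the Hessian of the negative log posterior
H = X.T @ X / sigma2 + np.eye(3) / tau2
beta_map = np.linalg.solve(H, X.T @ y / sigma2)
cov = np.linalg.inv(H)                 # Laplace posterior covariance

print(beta_map)                        # close to the true [1.0, 2.0, -0.5]
print(np.sqrt(np.diag(cov)))           # per-coefficient posterior std. dev.
```

The same MAP-plus-Hessian recipe applies to nonlinear models such as polynomial neural ODEs, where the Hessian is no longer available in closed form and the resulting Gaussian is only an approximation.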
arXiv Detail & Related papers (2023-08-17T05:42:29Z) - Nonparametric regression using over-parameterized shallow ReLU neural networks [10.339057554827392]
We show that neural networks can achieve minimax optimal rates of convergence (up to logarithmic factors) for learning functions from certain smooth function classes.
It is assumed that the regression function is from the Hölder space with smoothness $\alpha < (d+3)/2$ or a variation space corresponding to shallow neural networks.
As a byproduct, we derive a new size-independent bound for the local Rademacher complexity of shallow ReLU neural networks.
arXiv Detail & Related papers (2023-06-14T07:42:37Z) - A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
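The POD compression step described above, reducing thousands of temporal points to a few modes with temporal coefficients, is a truncated SVD of the snapshot matrix. A minimal sketch on synthetic low-rank data (the spatial and temporal grids here are assumptions for illustration):

```python
import numpy as np

# Snapshot matrix: rows are spatial points, columns are temporal points.
t = np.linspace(0, 1, 500)
xgrid = np.linspace(0, 1, 64)
# synthetic rank-2 field: two spatial modes with time-varying amplitudes
snapshots = (np.outer(np.sin(np.pi * xgrid), np.cos(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(3 * np.pi * xgrid), np.sin(4 * np.pi * t)))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2                                           # keep a few POD modes
modes = U[:, :r]                                # spatial POD modes
coeffs = np.diag(s[:r]) @ Vt[:r]                # temporal coefficients

recon = modes @ coeffs
print(np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots))
```

In the hybrid ROM the deep-learning architectures then predict the low-dimensional `coeffs` trajectories instead of the full field, which is where the reduction in degrees of freedom pays off.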
arXiv Detail & Related papers (2023-01-24T08:39:20Z) - Towards a mathematical framework to inform Neural Network modelling via
Polynomial Regression [0.0]
It is shown that almost identical predictions can be made when certain local conditions are met.
When learning from generated data, the proposed method produces polynomial regressions that correctly approximate the data locally.
arXiv Detail & Related papers (2021-02-07T17:56:16Z) - Sparsely constrained neural networks for model discovery of PDEs [0.0]
We present a modular framework that determines the sparsity pattern of a deep-learning based surrogate using any sparse regression technique.
We show how a different network architecture and sparsity estimator improve model discovery accuracy and convergence on several benchmark examples.
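The framework above accepts any sparse regression technique; a common choice in model discovery is sequential thresholded least squares. The sketch below recovers a toy dynamics from a candidate-term library and is an illustration of that generic estimator, not the paper's framework (which applies it to a deep-learning surrogate of PDE data).

```python
import numpy as np

def stlsq(Theta, dxdt, threshold=0.1, n_iter=10):
    """Sequential thresholded least squares: fit, zero out small
    coefficients, refit on the surviving terms, and repeat."""
    Xi, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        big = ~small
        if big.any():
            Xi[big], *_ = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)
    return Xi

# toy example: recover dx/dt = -2 x + 0.5 x^3 from a term library
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 300)
dxdt = -2.0 * x + 0.5 * x ** 3 + 0.01 * rng.normal(size=300)
Theta = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])  # library: 1, x, x^2, x^3
Xi = stlsq(Theta, dxdt)
print(Xi)   # the constant and x^2 terms are pruned to exactly zero
```

The sparsity pattern of `Xi` is what identifies which library terms belong in the discovered equation.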
arXiv Detail & Related papers (2020-11-09T11:02:40Z) - Probabilistic Circuits for Variational Inference in Discrete Graphical
Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPN).
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game in which both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.