Improved identification accuracy in equation learning via comprehensive
$\boldsymbol{R^2}$-elimination and Bayesian model selection
- URL: http://arxiv.org/abs/2311.13265v2
- Date: Mon, 27 Nov 2023 09:40:19 GMT
- Authors: Daniel Nickelsen and Bubacarr Bah
- Abstract summary: We present an approach that strikes a balance between comprehensiveness and efficiency in equation learning.
Inspired by stepwise regression, our approach combines the coefficient of determination, $R^2$, and the Bayesian model evidence, $p(\boldsymbol y|\mathcal M)$, in a novel way.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the field of equation learning, exhaustively considering all possible
equations derived from a basis function dictionary is infeasible. Sparse
regression and greedy algorithms have emerged as popular approaches to tackle
this challenge. However, the presence of multicollinearity poses difficulties
for sparse regression techniques, and greedy steps may inadvertently exclude
terms of the true equation, leading to reduced identification accuracy. In this
article, we present an approach that strikes a balance between
comprehensiveness and efficiency in equation learning. Inspired by stepwise
regression, our approach combines the coefficient of determination, $R^2$, and
the Bayesian model evidence, $p(\boldsymbol y|\mathcal M)$, in a novel way. Our
procedure is characterized by a comprehensive search with just a minor
reduction of the model space at each iteration step. With two flavors of our
approach and the adoption of $p(\boldsymbol y|\mathcal M)$ for bi-directional
stepwise regression, we present a total of three new avenues for equation
learning. Through three extensive numerical experiments involving random
polynomials and dynamical systems, we compare our approach against four
state-of-the-art methods and two standard approaches. The results demonstrate
that our comprehensive search approach surpasses all other methods in terms of
identification accuracy. In particular, the second flavor of our approach
establishes an efficient overfitting penalty solely based on $R^2$, which
achieves highest rates of exact equation recovery.
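The interplay the abstract describes, comprehensive backward elimination guided by $R^2$ with models selected via the evidence $p(\boldsymbol y|\mathcal M)$, can be sketched in a few lines. The code below is an illustrative assumption, not the authors' implementation: it drops, at each step, the dictionary column whose removal hurts $R^2$ least, and scores every visited model with the BIC as a standard large-sample proxy for $-2\log p(\boldsymbol y|\mathcal M)$. All function names are hypothetical.

```python
import numpy as np

def r2(X, y):
    # Coefficient of determination of the least-squares fit of y on X.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

def bic(X, y):
    # Bayesian information criterion: a common large-sample proxy for
    # -2 log p(y|M) under a Gaussian noise model; lower is better.
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = max(resid @ resid / n, 1e-300)  # guard against log(0)
    return n * np.log(sigma2) + k * np.log(n)

def r2_elimination(Theta, y):
    """Backward elimination over a dictionary matrix Theta: at each step,
    drop the column whose removal hurts R^2 least, score every visited
    model, and return the column set with the lowest score."""
    active = list(range(Theta.shape[1]))
    best_model, best_score = list(active), bic(Theta[:, active], y)
    while len(active) > 1:
        # the term whose removal keeps R^2 highest
        drop = max(active,
                   key=lambda j: r2(Theta[:, [i for i in active if i != j]], y))
        active.remove(drop)
        score = bic(Theta[:, active], y)
        if score < best_score:
            best_model, best_score = list(active), score
    return best_model
```

For instance, with dictionary columns $\{1, x_1, x_2, x_1 x_2, x_1^2\}$ and data generated as $y = 2x_1 + 3x_1x_2$ plus small noise, this sketch should recover the two true columns, since removing a true term collapses $R^2$ while removing a spurious one barely changes it.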
Related papers
- Reinforcement Learning from Human Feedback without Reward Inference: Model-Free Algorithm and Instance-Dependent Analysis [16.288866201806382]
We develop a model-free RLHF best policy identification algorithm, called $\mathsf{BSAD}$, without explicit reward model inference.
The algorithm identifies the optimal policy directly from human preference information in a backward manner.
arXiv Detail & Related papers (2024-06-11T17:01:41Z) - Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose an instantiation of our framework, Deep Generative Symbolic Regression.
arXiv Detail & Related papers (2023-12-30T17:05:31Z) - Oracle Complexity Reduction for Model-free LQR: A Stochastic
Variance-Reduced Policy Gradient Approach [4.422315636150272]
We investigate the problem of learning an $\epsilon$-approximate solution for the discrete-time Linear Quadratic Regulator (LQR) problem.
Our method combines both one-point and two-point estimations in a dual-loop variance-reduced algorithm.
arXiv Detail & Related papers (2023-09-19T15:03:18Z) - Improving Sample Efficiency of Model-Free Algorithms for Zero-Sum Markov Games [66.2085181793014]
We show that a model-free stage-based Q-learning algorithm can enjoy the same optimality in the $H$ dependence as model-based algorithms.
Our algorithm features a key novel design of updating the reference value functions as the pair of optimistic and pessimistic value functions.
arXiv Detail & Related papers (2023-08-17T08:34:58Z) - Retire: Robust Expectile Regression in High Dimensions [3.9391041278203978]
Penalized quantile and expectile regression methods offer useful tools to detect heteroscedasticity in high-dimensional data.
We propose and study (penalized) robust expectile regression (retire).
We show that the proposed procedure can be efficiently solved by a semismooth Newton coordinate descent algorithm.
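What expectile regression computes can be illustrated with a minimal unpenalized sketch, here solved by iteratively reweighted least squares rather than the paper's penalized semismooth Newton coordinate descent; the function name and IRLS stand-in are assumptions for illustration only.

```python
import numpy as np

def expectile_regression(X, y, tau=0.5, n_iter=100, tol=1e-10):
    """Unpenalized expectile regression via iteratively reweighted least
    squares (IRLS): minimizes the asymmetric squared loss with weight
    tau on positive residuals and 1 - tau on negative ones."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = y - X @ beta
        # asymmetric weights: tau above the current fit, 1 - tau below
        w = np.where(r > 0, tau, 1.0 - tau)
        Xw = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta
```

For tau = 0.5 the weights are symmetric and the fit reduces to ordinary least squares; pushing tau toward 1 shifts the fit toward the upper tail of the conditional distribution, which is what makes expectiles useful for detecting heteroscedasticity.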
arXiv Detail & Related papers (2022-12-11T18:03:12Z) - Vector-Valued Least-Squares Regression under Output Regularity
Assumptions [73.99064151691597]
We propose and analyse a reduced-rank method for solving least-squares regression problems with infinite dimensional output.
We derive learning bounds for our method, and study under which settings statistical performance is improved in comparison to the full-rank method.
arXiv Detail & Related papers (2022-11-16T15:07:00Z) - Breaking the Sample Complexity Barrier to Regret-Optimal Model-Free
Reinforcement Learning [52.76230802067506]
A novel model-free algorithm is proposed to minimize regret in episodic reinforcement learning.
The proposed algorithm employs an early-settled reference update rule, with the aid of two Q-learning sequences.
The design principle of our early-settled variance reduction method might be of independent interest to other RL settings.
arXiv Detail & Related papers (2021-10-09T21:13:48Z) - Least-Squares Linear Dilation-Erosion Regressor Trained using Stochastic
Descent Gradient or the Difference of Convex Methods [2.055949720959582]
We present a hybrid morphological neural network for regression tasks called linear dilation-erosion regression ($\ell$-DER).
An $ell$-DER model is given by a convex combination of the composition of linear and morphological elementary operators.
arXiv Detail & Related papers (2021-07-12T18:41:59Z) - Neural Symbolic Regression that Scales [58.45115548924735]
We introduce the first symbolic regression method that leverages large scale pre-training.
We procedurally generate an unbounded set of equations, and simultaneously pre-train a Transformer to predict the symbolic equation from a corresponding set of input-output pairs.
arXiv Detail & Related papers (2021-06-11T14:35:22Z) - Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z) - An implicit function learning approach for parametric modal regression [36.568208312835196]
We use implicit function theorem to develop an objective, for learning a joint function over inputs and targets.
We empirically demonstrate on several synthetic problems that our method can learn multi-valued functions and produce the conditional modes.
arXiv Detail & Related papers (2020-02-14T00:37:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The quality of this automatically generated content (including all information) is not guaranteed, and this site is not responsible for any consequences of its use.