Vertical Symbolic Regression via Deep Policy Gradient
- URL: http://arxiv.org/abs/2402.00254v1
- Date: Thu, 1 Feb 2024 00:54:48 GMT
- Title: Vertical Symbolic Regression via Deep Policy Gradient
- Authors: Nan Jiang, Md Nasim, Yexiang Xue
- Abstract summary: We propose Vertical Symbolic Regression using Deep Policy Gradient (VSR-DPG).
Our VSR-DPG models symbolic regression as a sequential decision-making process, in which equations are built from repeated applications of grammar rules.
- Score: 18.7083987727973
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vertical Symbolic Regression (VSR) recently has been proposed to expedite the
discovery of symbolic equations with many independent variables from
experimental data. VSR reduces the search space by following the vertical
discovery path: it builds up from reduced-form equations involving a subset of
the independent variables to full-fledged ones. Deep neural networks, which
have proved successful in many symbolic regressors, are expected to scale up
VSR further. Nevertheless, directly combining VSR with deep neural networks
introduces difficulties in passing gradients, among other engineering issues.
We propose
Vertical Symbolic Regression using Deep Policy Gradient (VSR-DPG) and
demonstrate that VSR-DPG can recover ground-truth equations involving multiple
input variables, significantly beyond both deep reinforcement learning-based
approaches and previous VSR variants. Our VSR-DPG models symbolic regression as
a sequential decision-making process, in which equations are built from
repeated applications of grammar rules. The integrated deep model is trained to
maximize a policy gradient objective. Experimental results demonstrate that our
VSR-DPG significantly outperforms popular baselines in identifying both
algebraic equations and ordinary differential equations on a series of
benchmarks.
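To make the sequential decision-making formulation concrete, the sketch below generates an equation by sampling grammar rules step by step and updates the sampler with REINFORCE. It is a minimal illustration under assumed details: the toy grammar, the reward stub, and the network sizes are placeholders, not the authors' implementation.
```python
import torch
import torch.nn as nn

# Toy context-free grammar: each rule rewrites the leftmost nonterminal A.
RULES = ["A -> A + A", "A -> A * A", "A -> x", "A -> const"]

class RulePolicy(nn.Module):
    """RNN policy that emits one grammar rule per decision step."""
    def __init__(self, n_rules, hidden=32):
        super().__init__()
        self.hidden = hidden
        self.embed = nn.Embedding(n_rules + 1, hidden)   # +1: start token
        self.cell = nn.GRUCell(hidden, hidden)
        self.head = nn.Linear(hidden, n_rules)

    def sample(self, max_steps=8):
        h = torch.zeros(1, self.hidden)
        tok = torch.tensor([len(RULES)])                 # start token
        log_probs, rules = [], []
        for _ in range(max_steps):
            h = self.cell(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.head(h))
            tok = dist.sample()
            log_probs.append(dist.log_prob(tok))
            rules.append(RULES[tok.item()])
        return rules, torch.stack(log_probs).sum()

def reward(rules):
    """Stub: in VSR-DPG this would build the equation from the rules,
    fit its constants, and score goodness-of-fit on the data."""
    return sum("x" in r for r in rules) / len(rules)

policy = RulePolicy(len(RULES))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(200):                                     # policy gradient loop
    rules, logp = policy.sample()
    loss = -reward(rules) * logp                         # REINFORCE objective
    opt.zero_grad(); loss.backward(); opt.step()
```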
Related papers
- Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose an instantiation of our framework, Deep Generative Symbolic Regression.
arXiv Detail & Related papers (2023-12-30T17:05:31Z) - Anchor Data Augmentation [53.39044919864444]
We propose a novel algorithm for data augmentation in nonlinear over-parametrized regression.
Our algorithm borrows from the causality literature and extends the recently proposed Anchor Regression (AR) method to data augmentation.
arXiv Detail & Related papers (2023-11-12T21:08:43Z) - Reflected Diffusion Models [93.26107023470979]
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected stochastic differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recover [87.28082715343896]
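The reflected-equation idea above can be illustrated by simulating a reflected SDE on [0, 1]^d: take an Euler-Maruyama step, then fold escaping points back into the domain. The `score` argument is a hypothetical stand-in; the paper's reverse-time sampler and generalized score matching loss are not reproduced here.
```python
import numpy as np

def reflect(x):
    """Fold x back into [0, 1] by repeated boundary reflection."""
    x = np.mod(x, 2.0)
    return np.where(x > 1.0, 2.0 - x, x)

def reflected_step(x, score, t, dt):
    """One Euler-Maruyama step of dX = score dt + sqrt(2) dW, reflected."""
    noise = np.sqrt(2.0 * dt) * np.random.standard_normal(x.shape)
    return reflect(x + score(x, t) * dt + noise)

x = np.random.uniform(size=(16, 2))              # start inside the domain
x = reflected_step(x, lambda x, t: -x, t=0.5, dt=1e-2)
```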
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Sufficient Dimension Reduction for High-Dimensional Regression and
Low-Dimensional Embedding: Tutorial and Survey [5.967999555890417]
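As a sketch of the algorithm-unfolding technique mentioned above, the snippet below unrolls ISTA iterations into network layers with learnable matrices and thresholds, in the LISTA style; REST's robustness modifications for forward-model mismatch are not reproduced here, and the sizes are illustrative.
```python
import torch
import torch.nn as nn

def soft_threshold(x, theta):
    return torch.sign(x) * torch.relu(torch.abs(x) - theta)

class UnrolledISTA(nn.Module):
    """ISTA unrolled into n_layers learnable shrinkage-threshold layers."""
    def __init__(self, m, n, n_layers=8):
        super().__init__()
        self.W = nn.ModuleList(nn.Linear(m, n, bias=False) for _ in range(n_layers))
        self.S = nn.ModuleList(nn.Linear(n, n, bias=False) for _ in range(n_layers))
        self.theta = nn.Parameter(0.1 * torch.ones(n_layers))

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.S[0].in_features, device=y.device)
        for W, S, th in zip(self.W, self.S, self.theta):
            x = soft_threshold(W(y) + S(x), th)
        return x

net = UnrolledISTA(m=20, n=50)
y = torch.randn(4, 20)                           # measurements
x_hat = net(y)                                   # sparse estimate
```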
- Sufficient Dimension Reduction for High-Dimensional Regression and Low-Dimensional Embedding: Tutorial and Survey [5.967999555890417]
This is a tutorial and survey paper on various methods for Sufficient Dimension Reduction (SDR).
We cover these methods from both the statistical high-dimensional regression perspective and the machine learning approach to dimensionality reduction.
arXiv Detail & Related papers (2021-10-18T21:05:08Z) - A Cram\'er Distance perspective on Non-crossing Quantile Regression in
Distributional Reinforcement Learning [2.28438857884398]
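As a concrete example of the family of methods such a survey covers, here is a sketch of Sliced Inverse Regression (SIR), a classic SDR estimator: standardize X, average it within slices of the response, and take the top eigenvectors of the between-slice covariance. The slicing scheme and sizes are illustrative choices.
```python
import numpy as np

def sir(X, y, n_slices=10, n_dirs=2):
    n, p = X.shape
    mu, C = X.mean(0), np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(C)
    C_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ C_inv_sqrt                    # whitened predictors
    order = np.argsort(y)                        # slice by sorted response
    M = np.zeros((p, p))
    for chunk in np.array_split(order, n_slices):
        m = Z[chunk].mean(0)
        M += (len(chunk) / n) * np.outer(m, m)   # between-slice covariance
    _, vecs = np.linalg.eigh(M)
    return C_inv_sqrt @ vecs[:, -n_dirs:]        # back to original coords

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + X[:, 1]) ** 2 + 0.1 * rng.normal(size=500)
B = sir(X, y)                                    # estimated SDR directions
```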
- A Cramér Distance perspective on Non-crossing Quantile Regression in Distributional Reinforcement Learning [2.28438857884398]
Quantile-based methods like QR-DQN project arbitrary distributions onto a parametric subset of staircase distributions.
Monotonicity constraints on the quantiles have been shown to improve the performance of QR-DQN for uncertainty-based exploration strategies.
We propose a novel non-crossing neural architecture that achieves good training performance, together with a novel algorithm to compute the Cramér distance.
arXiv Detail & Related papers (2021-10-01T17:00:25Z) - LAPAR: Linearly-Assembled Pixel-Adaptive Regression Network for Single
Image Super-Resolution and Beyond [75.37541439447314]
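One standard way to build the non-crossing property into an architecture, sketched below, is to predict a free first quantile plus softplus (non-negative) increments, so the outputs are monotone by construction; whether this matches the paper's exact architecture is an assumption, and its Cramér-distance training loss is not reproduced.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneQuantileHead(nn.Module):
    def __init__(self, in_dim, n_quantiles):
        super().__init__()
        self.fc = nn.Linear(in_dim, n_quantiles)

    def forward(self, h):
        raw = self.fc(h)
        base = raw[:, :1]
        increments = F.softplus(raw[:, 1:])      # >= 0, so no crossing
        return torch.cat([base, base + increments.cumsum(dim=1)], dim=1)

head = MonotoneQuantileHead(in_dim=16, n_quantiles=8)
q = head(torch.randn(3, 16))                     # rows are non-decreasing
assert (q[:, 1:] >= q[:, :-1]).all()
```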
- LAPAR: Linearly-Assembled Pixel-Adaptive Regression Network for Single Image Super-Resolution and Beyond [75.37541439447314]
Single image super-resolution (SISR) deals with a fundamental problem of upsampling a low-resolution (LR) image to its high-resolution (HR) version.
This paper proposes a linearly-assembled pixel-adaptive regression network (LAPAR) to strike a sweet spot of deep model complexity and resulting SISR quality.
arXiv Detail & Related papers (2021-05-21T15:47:18Z) - Robust Kernel-based Distribution Regression [13.426195476348955]
- Robust Kernel-based Distribution Regression [13.426195476348955]
We study distribution regression (DR), which involves two stages of sampling and aims at regressing from probability measures to real-valued responses over a reproducing kernel Hilbert space (RKHS).
By introducing a robust loss function $l_\sigma$ for two-stage sampling problems, we present a novel robust distribution regression (RDR) scheme.
arXiv Detail & Related papers (2021-04-21T17:03:46Z) - Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
- Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
Ordered weighted $L_1$ (OWL) regularized regression is a new regression analysis for high-dimensional sparse learning.
Proximal gradient methods are used as standard approaches to solve OWL regression.
We propose the first safe screening rule for OWL regression by exploring the order of the primal solution despite its unknown order structure.
arXiv Detail & Related papers (2020-06-29T23:35:53Z) - Sample-based Distributional Policy Gradient [14.498314462218394]
- Sample-based Distributional Policy Gradient [14.498314462218394]
We propose the sample-based distributional policy gradient (SDPG) algorithm for continuous action space control settings.
We apply SDPG and D4PG to multiple OpenAI Gym environments and observe that our algorithm shows better sample efficiency as well as higher reward for most tasks.
arXiv Detail & Related papers (2020-01-08T17:50:23Z)