GFN-SR: Symbolic Regression with Generative Flow Networks
- URL: http://arxiv.org/abs/2312.00396v1
- Date: Fri, 1 Dec 2023 07:38:05 GMT
- Title: GFN-SR: Symbolic Regression with Generative Flow Networks
- Authors: Sida Li, Ioana Marinescu, Sebastian Musslick
- Abstract summary: In recent years, deep symbolic regression (DSR) has emerged as a popular method in the field.
We propose an alternative framework (GFN-SR) to approach SR with deep learning.
GFN-SR is capable of generating a diverse set of best-fitting expressions.
- Score: 0.9208007322096533
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Symbolic regression (SR) is an area of interpretable machine learning that
aims to identify mathematical expressions, often composed of simple functions,
that best fit a given set of covariates $X$ and response $y$. In recent
years, deep symbolic regression (DSR) has emerged as a popular method in the
field by leveraging deep reinforcement learning to solve the complicated
combinatorial search problem. In this work, we propose an alternative framework
(GFN-SR) to approach SR with deep learning. We model the construction of an
expression tree as traversing through a directed acyclic graph (DAG) so that
GFlowNet can learn a stochastic policy to generate such trees sequentially.
Enhanced with an adaptive reward baseline, our method is capable of generating
a diverse set of best-fitting expressions. Notably, we observe that GFN-SR
outperforms other SR algorithms in noisy data regimes, owing to its ability to
learn a distribution of rewards over a space of candidate solutions.
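The abstract describes building an expression tree token by token, where each partial tree is a state in a DAG and a stochastic policy chooses the next transition; a fit-quality reward over $(X, y)$ then scores the completed expression. The sketch below illustrates only that sampling-and-reward loop, not the authors' method: the token set, the uniform stand-in for the learned GFlowNet policy, the inverse-MSE reward, and the size budget are all invented for illustration.

```python
import math
import random

# Hypothetical token vocabulary: binary ops, unary ops, and leaves.
BINARY = {"+", "*"}
UNARY = {"sin"}
LEAVES = {"x", "1"}
TOKENS = sorted(BINARY | UNARY | LEAVES)

def sample_expression(policy, max_size=15):
    """Build an expression tree in prefix order by sequentially sampling
    tokens from a stochastic policy. Each partial tree is a state in the
    DAG; sampling one token is one forward transition."""
    tokens, open_slots = [], 1
    while open_slots > 0:
        # Near the size budget, only allow leaves so the tree terminates.
        if len(tokens) + open_slots >= max_size:
            choices = sorted(LEAVES)
        else:
            choices = TOKENS
        tok = policy(choices)
        tokens.append(tok)
        if tok in BINARY:
            open_slots += 1      # consumes 1 open slot, opens 2
        elif tok in UNARY:
            pass                 # consumes 1 open slot, opens 1
        else:
            open_slots -= 1      # a leaf closes a slot
    return tokens

def evaluate(tokens, x):
    """Evaluate a prefix-order token list at a point x."""
    def rec(i):
        tok = tokens[i]
        if tok in BINARY:
            left, i = rec(i + 1)
            right, i = rec(i)
            return (left + right if tok == "+" else left * right), i
        if tok in UNARY:
            arg, i = rec(i + 1)
            return math.sin(arg), i
        return (x if tok == "x" else 1.0), i + 1
    value, _ = rec(0)
    return value

def reward(tokens, xs, ys):
    """Fit-quality reward: higher is better (inverse MSE)."""
    mse = sum((evaluate(tokens, x) - y) ** 2
              for x, y in zip(xs, ys)) / len(xs)
    return 1.0 / (1.0 + mse)

# With a uniform policy in place of a trained one, sampling yields a
# diverse set of candidate expressions scored by the reward.
expr = sample_expression(random.choice)
xs = [0.1 * i for i in range(10)]
ys = [2 * x + 1 for x in xs]
print(expr, reward(expr, xs, ys))
```

In GFN-SR, the uniform `policy` above would be replaced by a neural network trained with the GFlowNet objective so that sampling probability is proportional to reward, which is what allows the method to return a distribution of well-fitting expressions rather than a single best one.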
Related papers
- ISR: Invertible Symbolic Regression [7.499800486499609]
Invertible Symbolic Regression is a machine learning technique that generates analytical relationships between inputs and outputs of a given dataset via invertible maps.
We transform the affine coupling blocks of INNs into a symbolic framework, resulting in an end-to-end differentiable symbolic invertible architecture.
We show that ISR can serve as a (symbolic) normalizing flow for density estimation tasks.
arXiv Detail & Related papers (2024-05-10T23:20:46Z) - Intelligent Hybrid Resource Allocation in MEC-assisted RAN Slicing Network [72.2456220035229]
We aim to maximize the SSR for heterogeneous service demands in the cooperative MEC-assisted RAN slicing system.
We propose a recurrent graph reinforcement learning (RGRL) algorithm to intelligently learn the optimal hybrid RA policy.
arXiv Detail & Related papers (2024-05-02T01:36:13Z) - SymbolNet: Neural Symbolic Regression with Adaptive Dynamic Pruning [1.0356366043809717]
We propose a neural network approach to symbolic regression in a novel framework that allows dynamic pruning of model weights, input features, and mathematical operators in a single training process.
Our approach enables symbolic regression to achieve fast inference with nanosecond-scale latency on FPGAs for high-dimensional datasets in environments with stringent computational resource constraints.
arXiv Detail & Related papers (2024-01-18T12:51:38Z) - Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose an instantiation of our framework, Deep Generative Symbolic Regression.
arXiv Detail & Related papers (2023-12-30T17:05:31Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - Deep Generative Symbolic Regression with Monte-Carlo-Tree-Search [29.392036559507755]
Symbolic regression is a problem of learning a symbolic expression from numerical data.
Deep neural models trained on procedurally-generated synthetic datasets showed competitive performance.
We propose a novel method which provides the best of both worlds, based on a Monte-Carlo Tree Search procedure.
arXiv Detail & Related papers (2023-02-22T09:10:20Z) - Efficient Generator of Mathematical Expressions for Symbolic Regression [0.0]
We propose an approach to symbolic regression based on a novel variational autoencoder for generating hierarchical structures, HVAE.
HVAE can be trained efficiently with small corpora of mathematical expressions and can accurately encode expressions into a smooth low-dimensional latent space.
Finally, the EDHiE system for symbolic regression, which applies an evolutionary algorithm to the latent space of HVAE, reconstructs equations from a standard symbolic regression benchmark better than a state-of-the-art system based on a similar combination of deep learning and evolutionary algorithms.
arXiv Detail & Related papers (2023-02-20T10:40:29Z) - GSR: A Generalized Symbolic Regression Approach [13.606672419862047]
We present a Generalized Symbolic Regression (GSR) approach in this paper.
We show that our GSR method outperforms several state-of-the-art methods on the well-known Symbolic Regression benchmark problem sets.
We highlight the strengths of GSR by introducing SymSet, a new SR benchmark set which is more challenging relative to the existing benchmarks.
arXiv Detail & Related papers (2022-05-31T07:20:17Z) - Hierarchical Sketch Induction for Paraphrase Generation [79.87892048285819]
We introduce Hierarchical Refinement Quantized Variational Autoencoders (HRQ-VAE), a method for learning decompositions of dense encodings.
We use HRQ-VAE to encode the syntactic form of an input sentence as a path through the hierarchy, allowing us to more easily predict syntactic sketches at test time.
arXiv Detail & Related papers (2022-03-07T15:28:36Z) - DASHA: Distributed Nonconvex Optimization with Communication Compression, Optimal Oracle Complexity, and No Client Synchronization [77.34726150561087]
We develop and analyze DASHA: a new family of methods for nonconvex distributed optimization problems.
Unlike MARINA, the new methods DASHA and DASHA-MVR send compressed vectors only and never synchronize the nodes, which makes them more practical for learning.
arXiv Detail & Related papers (2022-02-02T20:10:40Z) - Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework to discover resilient network topologies against various disasters and attacks.
We show that ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing the utility, with a large margin compared to existing approaches.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.