Deep Symbolic Optimization: Reinforcement Learning for Symbolic Mathematics
- URL: http://arxiv.org/abs/2505.10762v1
- Date: Fri, 16 May 2025 00:31:19 GMT
- Title: Deep Symbolic Optimization: Reinforcement Learning for Symbolic Mathematics
- Authors: Conor F. Hayes, Felipe Leno Da Silva, Jiachen Yang, T. Nathan Mundhenk, Chak Shing Lee, Jacob F. Pettit, Claudio Santiago, Sookyung Kim, Joanne T. Kim, Ignacio Aravena Solis, Ruben Glatt, Andre R. Goncalves, Alexander Ladd, Ahmet Can Solak, Thomas Desautels, Daniel Faissol, Brenden K. Petersen, Mikel Landajuela
- Abstract summary: Deep Symbolic Optimization (DSO) is a novel computational framework that enables symbolic optimization for scientific discovery. One notable example is equation discovery, which aims to automatically derive mathematical models expressed in symbolic form. In this chapter, we provide a comprehensive overview of the DSO framework and illustrate its transformative potential for automating symbolic optimization in scientific discovery.
- Score: 43.622135148720886
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep Symbolic Optimization (DSO) is a novel computational framework that enables symbolic optimization for scientific discovery, particularly in applications involving the search for intricate symbolic structures. One notable example is equation discovery, which aims to automatically derive mathematical models expressed in symbolic form. In DSO, the discovery process is formulated as a sequential decision-making task. A generative neural network learns a probabilistic model over a vast space of candidate symbolic expressions, while reinforcement learning strategies guide the search toward the most promising regions. This approach integrates gradient-based optimization with evolutionary and local search techniques, and it incorporates in-situ constraints, domain-specific priors, and advanced policy optimization methods. The result is a robust framework capable of efficiently exploring extensive search spaces to identify interpretable and physically meaningful models. Extensive evaluations on benchmark problems have demonstrated that DSO achieves state-of-the-art performance in both accuracy and interpretability. In this chapter, we provide a comprehensive overview of the DSO framework and illustrate its transformative potential for automating symbolic optimization in scientific discovery.
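To make the sequential decision-making formulation concrete, below is a minimal sketch of the DSO training loop under toy assumptions: a single shared categorical distribution over tokens stands in for the autoregressive RNN policy, expressions are sampled in prefix notation, and the policy is updated with a risk-seeking policy gradient that trains only on the top quantile of sampled rewards. The token set, reward definition, and hyperparameters are illustrative, not the framework's defaults.

```python
# Minimal DSO-style loop: sample prefix-notation expressions from a learned
# categorical policy, score them against data, and apply a risk-seeking
# policy gradient. The real framework uses an autoregressive RNN policy,
# in-situ constraints, and constant optimization; this is a simplification.
import numpy as np

TOKENS = ["add", "mul", "sin", "cos", "x", "one"]
ARITY = {"add": 2, "mul": 2, "sin": 1, "cos": 1, "x": 0, "one": 0}
MAX_LEN = 16

rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=200)
y_true = np.sin(X) + X  # toy ground-truth expression: sin(x) + x

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_expression(probs):
    """Sample tokens until the prefix expression is complete (or too long)."""
    seq, open_slots = [], 1  # one operand slot to fill initially
    while open_slots > 0 and len(seq) < MAX_LEN:
        k = rng.choice(len(TOKENS), p=probs)
        seq.append(int(k))
        open_slots += ARITY[TOKENS[k]] - 1
    return seq if open_slots == 0 else None  # None: hit the length limit

def evaluate(seq, x):
    """Recursively evaluate a prefix expression on the input array x."""
    pos = [0]
    def rec():
        tok = TOKENS[seq[pos[0]]]
        pos[0] += 1
        if tok == "x":
            return x
        if tok == "one":
            return np.ones_like(x)
        if tok == "sin":
            return np.sin(rec())
        if tok == "cos":
            return np.cos(rec())
        a, b = rec(), rec()
        return a + b if tok == "add" else a * b
    return rec()

logits = np.zeros(len(TOKENS))
lr, eps = 0.5, 0.2  # eps = risk-seeking quantile: train on the top 20%
best_r, best_seq = -1.0, None
for step in range(200):
    probs = softmax(logits)
    batch = [s for s in (sample_expression(probs) for _ in range(64)) if s]
    if not batch:
        continue
    rewards = np.array([
        1.0 / (1.0 + np.sqrt(np.mean((evaluate(s, X) - y_true) ** 2)))
        for s in batch
    ])
    i = int(np.argmax(rewards))
    if rewards[i] > best_r:
        best_r, best_seq = rewards[i], batch[i]
    threshold = np.quantile(rewards, 1.0 - eps)
    grad = np.zeros_like(logits)
    for s, r in zip(batch, rewards):
        if r < threshold:
            continue  # risk-seeking: ignore all but the best samples
        for k in s:  # d log p / d logits for each sampled token
            onehot = np.zeros_like(logits)
            onehot[k] = 1.0
            grad += (r - threshold) * (onehot - probs)
    logits += lr * grad / len(batch)

print("best reward:", best_r)
print("best expression (prefix):", [TOKENS[k] for k in best_seq])
```

The risk-seeking objective is the key design choice here: because only the single best expression matters in symbolic discovery, the update deliberately optimizes the upper tail of the reward distribution rather than its mean.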
Related papers
- Equation discovery framework EPDE: Towards a better equation discovery [50.79602839359522]
We enhance the EPDE algorithm, an evolutionary optimization-based discovery framework. Our approach generates terms using fundamental building blocks such as elementary functions and individual differentials. We validate our algorithm's noise resilience and overall performance by comparing its results with those from the state-of-the-art equation discovery framework SINDy.
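As a rough illustration of this term-generation idea (the building blocks and string representation below are my own simplification, not EPDE's actual encoding), candidate terms can be assembled as random products of elementary functions and derivatives, with a subsequent sparse-selection step deciding which terms enter the equation:

```python
# Hypothetical sketch of evolutionary term generation: candidate equation
# terms are random products of elementary building blocks (the unknown
# function u, its derivatives, and simple elementary functions). EPDE's
# actual encoding and evolutionary operators are richer than this.
import random

BLOCKS = ["u", "u_x", "u_xx", "x", "sin(x)"]  # illustrative building blocks

def random_term(max_factors=3):
    """Assemble one candidate term as a product of 1..max_factors blocks."""
    k = random.randint(1, max_factors)
    return " * ".join(random.choices(BLOCKS, k=k))

random.seed(0)
pool = {random_term() for _ in range(10)}
print(pool)  # e.g. {'u * u_x', 'u_xx', ...}: candidates for sparse selection
```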
arXiv Detail & Related papers (2024-12-28T15:58:44Z) - A Survey on Inference Optimization Techniques for Mixture of Experts Models [50.40325411764262]
Large-scale Mixture of Experts (MoE) models offer enhanced model capacity and computational efficiency through conditional computation. However, deploying and running inference on these models presents significant challenges in computational resources, latency, and energy efficiency. This survey analyzes optimization techniques for MoE models across the entire system stack.
arXiv Detail & Related papers (2024-12-18T14:11:15Z) - Neuro-Symbolic Query Optimization in Knowledge Graphs [0.4915744683251151]
This chapter delves into the emerging field of neuro-symbolic query optimization for knowledge graphs.
Recent advancements have introduced neural models, which capture non-linear aspects of query optimization.
We discuss the architecture of these hybrid systems, highlighting the interplay between neural and symbolic components.
arXiv Detail & Related papers (2024-11-21T16:31:27Z) - Deep Symbolic Optimization for Combinatorial Optimization: Accelerating Node Selection by Discovering Potential Heuristics [10.22111332588471]
We propose Dso4NS, a novel deep symbolic optimization learning framework that combines the advantages of learned models and hand-crafted symbolic heuristics.
Dso4NS guides the search for mathematical expressions within the high-dimensional discrete symbolic space and then incorporates the highest-performing mathematical expressions into a solver.
Experiments demonstrate the effectiveness of Dso4NS in learning high-quality expressions, outperforming existing approaches on a CPU machine.
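To illustrate what "incorporating an expression into a solver" could look like (the node features and the scoring expression below are hypothetical placeholders, not the ones Dso4NS actually discovers), a learned symbolic expression can simply replace the hand-crafted priority function in best-first node selection:

```python
# Hypothetical sketch: use a discovered symbolic expression as the scoring
# rule in best-first node selection. The features and the expression are
# illustrative placeholders, not the ones learned by Dso4NS.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Node:
    score: float                       # priority used by the heap
    depth: int = field(compare=False)  # example node feature
    bound: float = field(compare=False)

def discovered_score(depth: float, bound: float) -> float:
    # Stand-in for an expression found by symbolic search.
    return bound + 0.1 * depth

frontier = []
for depth, bound in [(1, 3.2), (4, 2.9), (2, 3.0)]:
    heapq.heappush(frontier, Node(discovered_score(depth, bound), depth, bound))
best = heapq.heappop(frontier)  # node the learned heuristic selects next
print(best)
```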
arXiv Detail & Related papers (2024-06-14T06:02:14Z) - Generalizing the SINDy approach with nested neural networks [0.0]
Nested SINDy builds on the SINDy framework by introducing additional layers before and after the core SINDy layer. We demonstrate the ability of the Nested SINDy approach to accurately find symbolic expressions for simple systems, and sparse (false but accurate) analytical representations for more complex systems.
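For context, here is a minimal sketch of the core SINDy step that Nested SINDy wraps with additional trainable layers: build a library of candidate functions and run sequentially thresholded least squares (STLSQ) to select a sparse subset. The surrounding neural transformations, which require end-to-end training, are omitted; the library and toy system are my own choices.

```python
# Core SINDy step: sparse regression of observed derivatives onto a library
# of candidate functions via sequentially thresholded least squares (STLSQ).
import numpy as np

def sindy_stlsq(theta, dxdt, threshold=0.1, iters=10):
    """Find sparse coefficients Xi such that dxdt ~= theta @ Xi."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0  # prune small coefficients...
        for j in range(dxdt.shape[1]):
            big = ~small[:, j]
            if big.any():  # ...and refit the surviving ones
                xi[big, j] = np.linalg.lstsq(theta[:, big], dxdt[:, j],
                                             rcond=None)[0]
    return xi

# Toy system: dx/dt = -2x + x^2, with derivatives assumed observed.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(500, 1))
dxdt = -2.0 * x + x ** 2

# Candidate library: [1, x, x^2, sin(x)]
theta = np.hstack([np.ones_like(x), x, x ** 2, np.sin(x)])
xi = sindy_stlsq(theta, dxdt)
print("coefficients for [1, x, x^2, sin(x)]:", xi.ravel())
# Expected: approximately [0, -2, 1, 0] -- the sparse ground truth.
```

As the summary above describes, Nested SINDy inserts learned transformations before this library and after the linear combination, extending the class of discoverable expressions beyond a fixed library.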
arXiv Detail & Related papers (2024-04-24T09:01:50Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the distributed federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - A Novel Neural-symbolic System under Statistical Relational Learning [47.30190559449236]
We propose a neural-symbolic framework based on statistical relational learning, referred to as NSF-SRL. Results of symbolic reasoning are utilized to refine and correct the predictions made by deep learning models, while deep learning models enhance the efficiency of the symbolic reasoning process. We believe that this approach sets a new standard for neural-symbolic systems and will drive future research in the field of general artificial intelligence.
arXiv Detail & Related papers (2023-09-16T09:15:37Z) - Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Goal Agnostic Planning using Maximum Likelihood Paths in Hypergraph World Models [1.370633147306388]
We present a hypergraph-based machine learning algorithm, a data-structure-driven maintenance method, and a planning algorithm based on a probabilistic application of Dijkstra's algorithm.
We prove that the algorithm determines optimal solutions within the problem space, mathematically bound its learning performance, and supply a mathematical model analyzing system state progression through time.
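A minimal sketch of the "probabilistic Dijkstra" idea as I read it (the graph below is a plain weighted digraph, not the paper's hypergraph world model): the most likely path maximizes a product of transition probabilities, which is equivalent to running standard Dijkstra on edge weights of -log p.

```python
# Maximum-likelihood path search: maximizing a product of probabilities is
# the same as minimizing a sum of -log probabilities, so plain Dijkstra
# applies. Simplified to an ordinary digraph; the paper uses hypergraphs.
import heapq
import math

def max_likelihood_path(graph, start, goal):
    """graph: {node: [(next_node, probability), ...]}.
    Returns (path, path_probability)."""
    dist = {start: 0.0}  # accumulated -log probability
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue  # stale queue entry
        for v, p in graph.get(u, []):
            nd = d - math.log(p)  # multiplying probs = adding -log p
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None, 0.0
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], math.exp(-dist[goal])

# Toy transition model: two routes from "s" to "g".
graph = {
    "s": [("a", 0.9), ("b", 0.5)],
    "a": [("g", 0.6)],
    "b": [("g", 0.99)],
}
print(max_likelihood_path(graph, "s", "g"))  # (['s','a','g'], 0.54)
```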
arXiv Detail & Related papers (2021-10-18T16:22:33Z) - Improving exploration in policy gradient search: Application to symbolic optimization [6.344988093245026]
Many machine learning strategies leverage neural networks to search large spaces of mathematical symbols.
In contrast to traditional evolutionary approaches, using a neural network at the core of the search allows learning higher-level symbolic patterns.
We show that these techniques can improve the performance, increase sample efficiency, and lower the complexity of solutions for the task of symbolic regression.
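One common ingredient of such exploration improvements is an entropy bonus on the sampling distribution, which keeps the policy from collapsing onto a few tokens too early. The sketch below shows only that generic ingredient on a categorical policy; it is a simplification, not the paper's hierarchical regularization scheme.

```python
# Generic sketch of entropy regularization for a categorical policy: adding
# the gradient of the entropy H(p) to the update pushes the sampling
# distribution away from premature collapse. (The reward term of the policy
# gradient is omitted here to isolate the exploration effect.)
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy_grad(logits):
    """Gradient of H(p) = -sum_k p_k log p_k with respect to the logits."""
    p = softmax(logits)
    return -p * (np.log(p) - np.sum(p * np.log(p)))

logits = np.array([2.0, 0.1, 0.1])  # nearly collapsed policy
beta = 0.05                         # entropy bonus weight
for _ in range(100):
    logits += beta * entropy_grad(logits)
print(softmax(logits))  # probabilities move back toward uniform
```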
arXiv Detail & Related papers (2021-07-19T21:11:07Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of a stochastic optimization algorithm can be bounded in terms of the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)