Symbolic Integration Algorithm Selection with Machine Learning: LSTMs vs Tree LSTMs
- URL: http://arxiv.org/abs/2404.14973v1
- Date: Tue, 23 Apr 2024 12:27:20 GMT
- Title: Symbolic Integration Algorithm Selection with Machine Learning: LSTMs vs Tree LSTMs
- Authors: Rashid Barket, Matthew England, Jürgen Gerhard
- Abstract summary: We trained an LSTM and a TreeLSTM model for sub-algorithm prediction and compared them to Maple's existing approach.
Our TreeLSTM performs much better than the LSTM, highlighting the benefit of using an informed representation of mathematical expressions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computer Algebra Systems (e.g. Maple) are used in research, education, and industrial settings. One of their key functionalities is symbolic integration, where there are many sub-algorithms to choose from that can affect the form of the output integral, and the runtime. Choosing the right sub-algorithm for a given problem is challenging: we hypothesise that Machine Learning can guide this sub-algorithm choice. A key consideration of our methodology is how to represent the mathematics to the ML model: we hypothesise that a representation which encodes the tree structure of mathematical expressions would be well suited. We trained both an LSTM and a TreeLSTM model for sub-algorithm prediction and compared them to Maple's existing approach. Our TreeLSTM performs much better than the LSTM, highlighting the benefit of using an informed representation of mathematical expressions. It is able to produce better outputs than Maple's current state-of-the-art meta-algorithm, giving a strong basis for further research.
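The abstract's central claim is that an "informed representation" preserving the tree structure of a mathematical expression suits the task better than a flat token stream. The following sketch (not the paper's code; the parser choice and function names are illustrative assumptions) contrasts the two input forms: the nested tuples are what a TreeLSTM consumes, while the flattened prefix sequence is what an ordinary LSTM sees. Python's `ast` module stands in for a computer algebra system's expression parser.

```python
# Illustrative sketch only: contrasting tree vs. sequence representations
# of a mathematical expression, using Python's ast module as a stand-in
# for a CAS expression parser.
import ast

def to_tree(expr: str):
    """Parse an expression string into nested (label, children) tuples."""
    def walk(node):
        if isinstance(node, ast.BinOp):
            op = type(node.op).__name__          # e.g. 'Mult', 'Add', 'Pow'
            return (op, [walk(node.left), walk(node.right)])
        if isinstance(node, ast.Call):           # function application, e.g. sin(x)
            return (node.func.id, [walk(a) for a in node.args])
        if isinstance(node, ast.Name):           # a variable
            return (node.id, [])
        if isinstance(node, ast.Constant):       # a numeric literal
            return (str(node.value), [])
        raise ValueError(f"unhandled node: {ast.dump(node)}")
    return walk(ast.parse(expr, mode="eval").body)

def to_token_sequence(tree):
    """Flatten the tree into the prefix token stream a plain LSTM would see."""
    label, children = tree
    tokens = [label]
    for child in children:
        tokens.extend(to_token_sequence(child))
    return tokens

tree = to_tree("x**2 * sin(x)")
print(tree)
# ('Mult', [('Pow', [('x', []), ('2', [])]), ('sin', [('x', [])])])
print(to_token_sequence(tree))
# ['Mult', 'Pow', 'x', '2', 'sin', 'x']
```

The flattened sequence discards the parent-child relationships (the LSTM must rediscover, for example, that `sin` applies only to the final `x`), whereas a TreeLSTM composes child hidden states at each operator node, so that structure is given rather than learned.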
Related papers
- AlphaIntegrator: Transformer Action Search for Symbolic Integration Proofs
We present the first correct-by-construction learning-based system for step-by-step mathematical integration.
The key idea is to learn a policy, represented by a GPT transformer model, which guides the search for the right mathematical integration rule.
We introduce a symbolic engine with axiomatically correct actions on mathematical expressions, as well as the first dataset for step-by-step integration.
arXiv Detail & Related papers (2024-10-03T16:50:30Z) - Exploring and Learning in Sparse Linear MDPs without Computationally Intractable Oracles
In this paper we revisit linear MDPs from the perspective of feature selection.
Our main result is the first polynomial-time algorithm for this problem.
We show that they do exist and can be computed efficiently via convex programming.
arXiv Detail & Related papers (2023-09-18T03:35:48Z) - Provably Efficient Representation Learning with Tractable Planning in Low-Rank POMDP
We study representation learning in partially observable Markov Decision Processes (POMDPs).
We first present an algorithm for decodable POMDPs that combines maximum likelihood estimation (MLE) and optimism in the face of uncertainty (OFU).
We then show how to adapt this algorithm to also work in the broader class of $\gamma$-observable POMDPs.
arXiv Detail & Related papers (2023-06-21T16:04:03Z) - Explainable AI Insights for Symbolic Computation: A case study on selecting the variable ordering for cylindrical algebraic decomposition
This paper explores whether using explainable AI (XAI) techniques on such machine learning models can offer new insight for symbolic computation.
We present a case study on the use of ML to select the variable ordering for cylindrical algebraic decomposition.
arXiv Detail & Related papers (2023-04-24T15:05:04Z) - Learning Hidden Markov Models Using Conditional Samples
This paper is concerned with the computational complexity of learning the Hidden Markov Model (HMM).
In this paper, we consider an interactive access model, in which the algorithm can query for samples from the conditional distributions of the HMMs.
Specifically, we obtain efficient algorithms for learning HMMs in settings where we have query access to the exact conditional probabilities.
arXiv Detail & Related papers (2023-02-28T16:53:41Z) - Structure-Unified M-Tree Coding Solver for Math Word Problem
In previous work, models designed by taking into account the properties of the binary tree structure of mathematical expressions at the output side have achieved better performance.
In this paper, we propose the Structure-Unified M-Tree Coding Solver (SUMC-Solver), which applies a tree with any M branches (M-tree) to unify the output structures.
Experimental results on the widely used MAWPS and Math23K datasets have demonstrated that SUMC-Solver not only outperforms several state-of-the-art models but also performs much better under low-resource conditions.
arXiv Detail & Related papers (2022-10-22T12:20:36Z) - Linear Temporal Logic Modulo Theories over Finite Traces (Extended Version)
This paper studies Linear Temporal Logic over Finite Traces (LTLf) in which proposition letters are replaced with first-order formulas interpreted over arbitrary theories.
The resulting logic, called LTLf Modulo Theories (LTLfMT), is semi-decidable.
arXiv Detail & Related papers (2022-04-28T17:57:33Z) - ConvMath: A Convolutional Sequence Network for Mathematical Expression Recognition
The performance of ConvMath is evaluated on an open dataset named IM2LATEX-100K, including 103556 samples.
The proposed network achieves state-of-the-art accuracy and much better efficiency than previous methods.
arXiv Detail & Related papers (2020-12-23T12:08:18Z) - Online Model Selection for Reinforcement Learning with Function Approximation
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - White-box Induction From SVM Models: Explainable AI with Logic Programming
We focus on inducing logic programs that explain models learned by the support vector machine (SVM) algorithm.
We use SHAP, an example-specific interpreter in explainable AI, to determine a relevant set of features.
This approach yields an algorithm that captures the SVM model's underlying logic and outperforms the FOIL algorithm.
arXiv Detail & Related papers (2020-08-09T23:07:14Z) - Model Selection in Contextual Stochastic Bandit Problems
We develop a meta-algorithm that selects between base algorithms.
We show through a lower bound that even when one of the base algorithms has $O(\sqrt{T})$ regret, in general it is impossible to get better than $\Omega(\sqrt{T})$ regret.
arXiv Detail & Related papers (2020-03-03T18:46:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.