Explainable AI Insights for Symbolic Computation: A case study on
selecting the variable ordering for cylindrical algebraic decomposition
- URL: http://arxiv.org/abs/2304.12154v2
- Date: Tue, 29 Aug 2023 11:19:40 GMT
- Authors: Lynn Pickering, Tereso Del Rio Almajano, Matthew England and Kelly
Cohen
- Abstract summary: This paper explores whether using explainable AI (XAI) techniques on such machine learning models can offer new insight for symbolic computation.
We present a case study on the use of ML to select the variable ordering for cylindrical algebraic decomposition.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years there has been increased use of machine learning (ML)
techniques within mathematics, including symbolic computation where it may be
applied safely to optimise or select algorithms. This paper explores whether
using explainable AI (XAI) techniques on such ML models can offer new insight
for symbolic computation, inspiring new implementations within computer algebra
systems that do not directly call upon AI tools. We present a case study on the
use of ML to select the variable ordering for cylindrical algebraic
decomposition. It has already been demonstrated that ML can make the choice
well, but here we show how the SHAP tool for explainability can be used to
inform new heuristics of a size and complexity similar to those human-designed
heuristics currently commonly used in symbolic computation.
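The explainability step described in the abstract can be illustrated with exact Shapley values on a toy additive model. The feature names and scores below are hypothetical stand-ins for CAD input features; a real pipeline would apply the SHAP library to a trained classifier rather than this brute-force computation, which is feasible only for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values: the weighted average marginal contribution
    of each feature over all coalitions of the remaining features."""
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(s) | {i}) - value(set(s)))
        phi[i] = total
    return phi

# Hypothetical per-variable features of a polynomial system and their
# contributions to a toy model score (names and numbers are illustrative).
BASE = {"max_degree": 3.0, "num_terms": 1.0, "appears_in_leading": 0.5}

def toy_model(active):
    # Additive toy model: the score is the sum of the contributions of
    # the features that are "switched on".  For an additive model the
    # Shapley values equal the individual contributions exactly, which
    # gives an easy sanity check.
    return sum(BASE[f] for f in active)

phi = shapley_values(list(BASE), toy_model)
print(phi)  # each feature recovers its own additive contribution
```

Ranking features by the magnitude of their Shapley values is what suggests which simple measures of a polynomial system are worth building into a hand-written heuristic.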
Related papers
- LLMs for XAI: Future Directions for Explaining Explanations [50.87311607612179]
We focus on refining explanations computed using existing XAI algorithms.
Initial experiments and a user study suggest that LLMs offer a promising way to enhance the interpretability and usability of XAI.
arXiv Detail & Related papers (2024-05-09T19:17:47Z)
- Constrained Neural Networks for Interpretable Heuristic Creation to Optimise Computer Algebra Systems [2.8402080392117757]
We present a new methodology for utilising machine learning technology in symbolic computation research.
We explain how a well known human-designed variable ordering in cylindrical algebraic decomposition may be represented as a constrained neural network.
This allows us to then use machine learning methods to further optimise, leading to new networks of similar size as the original human-designed one.
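The idea of writing a human heuristic as a small network can be sketched as follows. The features and fixed weights here are illustrative (loosely in the spirit of degree-based heuristics such as Brown's), not the paper's actual construction; training would then relax the weights under constraints that keep the network interpretable.

```python
# A hand-designed variable-ordering heuristic as a one-layer "network"
# with fixed weights.  Feature names and weight values are hypothetical.
FEATURES = ("max_degree", "max_total_degree", "num_terms")
WEIGHTS = (1.0, 0.1, 0.01)   # approximates lexicographic tie-breaking

def score(var_features):
    """Linear layer: dot product of one variable's features and weights."""
    return sum(w * f for w, f in zip(WEIGHTS, var_features))

def choose_variable(system_features):
    """Argmax over variables gives the heuristic's choice."""
    return max(range(len(system_features)),
               key=lambda v: score(system_features[v]))

# Hypothetical features for three variables of some polynomial system.
feats = [(2, 3, 4), (3, 3, 2), (1, 2, 5)]
print(choose_variable(feats))  # variable 1: the highest max degree wins
```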
arXiv Detail & Related papers (2024-04-26T16:20:04Z)
- Symbolic Integration Algorithm Selection with Machine Learning: LSTMs vs Tree LSTMs [0.0]
We trained an LSTM and a TreeLSTM model for sub-algorithm prediction and compared them to Maple's existing approach.
Our TreeLSTM performs much better than the LSTM, highlighting the benefit of using an informed representation of mathematical expressions.
arXiv Detail & Related papers (2024-04-23T12:27:20Z)
- Lessons on Datasets and Paradigms in Machine Learning for Symbolic Computation: A Case Study on CAD [0.0]
This study reports lessons on the importance of analysing datasets prior to machine learning.
We present results for a particular case study, the selection of variable ordering for cylindrical algebraic decomposition.
We introduce an augmentation technique for polynomial systems that allows us to balance and further augment the dataset.
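One natural augmentation for this problem is to relabel the variables of each system, since a permuted system's best ordering is the permuted ordering. The toy encoding below (monomials as exponent tuples) is an assumption for illustration; the paper works with full CAD input descriptions.

```python
from itertools import permutations

def augment(system, best_order):
    """Yield every relabelled copy of a polynomial system together with
    its relabelled best variable ordering.  A system is a list of
    monomials, each an exponent tuple over variables 0..n-1."""
    n = len(system[0])
    for perm in permutations(range(n)):
        # variable i is renamed to perm[i], so the exponent at new
        # position j comes from the old variable that maps to j
        new_system = [tuple(m[perm.index(j)] for j in range(n))
                      for m in system]
        new_order = tuple(perm[v] for v in best_order)
        yield new_system, new_order

# x0^2*x1 + x2 over three variables, with a hypothetical best ordering
system = [(2, 1, 0), (0, 0, 1)]
examples = list(augment(system, best_order=(0, 1, 2)))
print(len(examples))  # 3! = 6 training instances from one system
```

Because every class of orderings appears equally often among the permuted copies, this also rebalances a dataset whose labels were skewed toward particular orderings.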
arXiv Detail & Related papers (2024-01-24T10:12:43Z)
- Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z)
- CoLA: Exploiting Compositional Structure for Automatic and Efficient Numerical Linear Algebra [62.37017125812101]
We propose a simple but general framework for large-scale linear algebra problems in machine learning, named CoLA.
By combining a linear operator abstraction with compositional dispatch rules, CoLA automatically constructs memory and runtime efficient numerical algorithms.
We showcase its efficacy across a broad range of applications, including partial differential equations, Gaussian processes, equivariant model construction, and unsupervised learning.
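The combination of a linear operator abstraction with compositional dispatch can be sketched minimally as below. This is a toy in the spirit of CoLA, not its real API: each structured operator carries its own solve routine, and composite operators recurse on their factors instead of falling back to a dense solver.

```python
class LinearOp:
    """Minimal linear-operator abstraction: operators expose matvec,
    and solve() dispatches to a structure-aware routine."""
    def matvec(self, x):
        raise NotImplementedError
    def solve(self, b):
        raise NotImplementedError

class Diagonal(LinearOp):
    def __init__(self, d):
        self.d = d
    def matvec(self, x):
        return [di * xi for di, xi in zip(self.d, x)]
    def solve(self, b):
        # O(n) elementwise division: exploits the diagonal structure
        return [bi / di for bi, di in zip(b, self.d)]

class Product(LinearOp):
    """Represents A @ B without ever forming a dense matrix."""
    def __init__(self, A, B):
        self.A, self.B = A, B
    def matvec(self, x):
        return self.A.matvec(self.B.matvec(x))
    def solve(self, b):
        # (AB)^-1 b = B^-1 (A^-1 b): recurse on the structured factors
        return self.B.solve(self.A.solve(b))

A = Diagonal([2.0, 4.0])
B = Diagonal([1.0, 0.5])
x = Product(A, B).solve([2.0, 2.0])
print(x)  # [1.0, 1.0]
```

Adding a new structure (block-diagonal, low-rank, Kronecker) then only requires defining its matvec and solve once; every composite built from it inherits the efficient routines.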
arXiv Detail & Related papers (2023-09-06T14:59:38Z)
- Explainable AI for tool wear prediction in turning [3.391256280235937]
This research aims to develop an Explainable Artificial Intelligence (XAI) framework to facilitate human-understandable solutions for tool wear prediction during turning.
A random forest algorithm was used as the supervised Machine Learning (ML) classifier for training and binary classification.
The Shapley criterion was used to explain the predictions of the trained ML classifier.
arXiv Detail & Related papers (2023-08-17T03:36:13Z)
- Statistically Meaningful Approximation: a Case Study on Approximating Turing Machines with Transformers [50.85524803885483]
This work proposes a formal definition of statistically meaningful (SM) approximation which requires the approximating network to exhibit good statistical learnability.
We study SM approximation for two function classes: circuits and Turing machines.
arXiv Detail & Related papers (2021-07-28T04:28:55Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale Machine Learning aims to learn patterns from big data efficiently, with performance comparable to that of sophisticated approaches.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
- A machine learning based software pipeline to pick the variable ordering for algorithms with polynomial inputs [1.2891210250935146]
We refer to choices which have no effect on the mathematical correctness of the software, but do impact its performance.
In the past we experimented with one such choice: the variable ordering to use when building a Cylindrical Algebraic Decomposition (CAD).
arXiv Detail & Related papers (2020-05-22T16:00:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.