Automated Learning of Interpretable Models with Quantified Uncertainty
- URL: http://arxiv.org/abs/2205.01626v1
- Date: Tue, 12 Apr 2022 19:56:42 GMT
- Title: Automated Learning of Interpretable Models with Quantified Uncertainty
- Authors: G.F. Bomarito and P.E. Leser and N.C.M. Strauss and K.M. Garbrecht and J.D. Hochhalter
- Abstract summary: A new Bayesian framework for genetic-programming-based symbolic regression (GPSR) is introduced.
The framework uses model evidence to formulate replacement probability during the selection phase of evolution.
It is shown to increase interpretability, improve robustness to noise, and reduce overfitting when compared to a conventional GPSR implementation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpretability and uncertainty quantification in machine learning can
provide justification for decisions, promote scientific discovery and lead to a
better understanding of model behavior. Symbolic regression provides inherently
interpretable machine learning, but relatively little work has focused on the
use of symbolic regression on noisy data and the accompanying necessity to
quantify uncertainty. A new Bayesian framework for genetic-programming-based
symbolic regression (GPSR) is introduced that uses model evidence (i.e.,
marginal likelihood) to formulate replacement probability during the selection
phase of evolution. Model parameter uncertainty is automatically quantified,
enabling probabilistic predictions with each equation produced by the GPSR
algorithm. Model evidence is also quantified in this process, and its use is
shown to increase interpretability, improve robustness to noise, and reduce
overfitting when compared to a conventional GPSR implementation on both
numerical and physical experiments.
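
To make the selection mechanism concrete, below is a minimal Python sketch of one way an evidence-based replacement probability could look. The abstract states only that model evidence (marginal likelihood) is used to formulate the replacement probability; the specific form here (a sigmoid of the log Bayes factor, i.e. K / (1 + K)), the function name `replacement_probability`, and the numeric log-evidence values are all illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def replacement_probability(log_evidence_child: float,
                            log_evidence_parent: float) -> float:
    """Hypothetical evidence-based replacement rule (not the paper's exact
    formula): turn the Bayes factor K = p(D|child) / p(D|parent) into a
    probability that the child replaces the parent."""
    # Work in log space for numerical stability.
    log_k = log_evidence_child - log_evidence_parent
    # K / (1 + K) is equivalent to sigmoid(log K): a child with higher
    # marginal likelihood is favored, but replacement stays stochastic.
    return 1.0 / (1.0 + np.exp(-log_k))

# Toy usage with hypothetical log-evidence values for two equations.
rng = np.random.default_rng(0)
log_ev_parent, log_ev_child = -42.7, -40.1
p = replacement_probability(log_ev_child, log_ev_parent)
child_replaces_parent = rng.random() < p
print(f"p(replace) = {p:.3f}, replaced: {child_replaces_parent}")
```

Because the rule depends on marginal likelihood rather than raw fit, it carries a built-in Occam penalty: an overly complex equation that fits noise gains little evidence, which is consistent with the abstract's reported reduction in overfitting.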