Symbolic Metamodels for Interpreting Black-boxes Using Primitive
Functions
- URL: http://arxiv.org/abs/2302.04791v1
- Date: Thu, 9 Feb 2023 17:30:43 GMT
- Title: Symbolic Metamodels for Interpreting Black-boxes Using Primitive
Functions
- Authors: Mahed Abroshan, Saumitra Mishra, Mohammad Mahdi Khalili
- Abstract summary: One approach for interpreting black-box machine learning models is to find a global approximation of the model using simple interpretable functions.
In this work, we propose a new method for finding interpretable metamodels.
- Score: 15.727276506140878
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One approach for interpreting black-box machine learning models is to find a
global approximation of the model using simple interpretable functions, which
is called a metamodel (a model of the model). Approximating the black-box with
a metamodel can be used to 1) estimate instance-wise feature importance; 2)
understand the functional form of the model; 3) analyze feature interactions.
In this work, we propose a new method for finding interpretable metamodels. Our
approach utilizes the Kolmogorov superposition theorem, which expresses
multivariate functions as compositions of univariate functions (our parameterized
primitive functions). Such a composition can be represented in the form of a
tree. Inspired by symbolic regression, we use a modified form of genetic
programming to search over different tree configurations. Gradient descent (GD)
is used to optimize the parameters of a given configuration. Our method is a
novel memetic algorithm that uses GD not only to train numerical constants
but also to train the building blocks themselves. In several experiments, we
show that our method outperforms recent metamodeling approaches proposed for
interpreting black-boxes.
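To make the pipeline concrete, below is a minimal, hypothetical sketch of the memetic idea: a depth-two, KST-style tree y_hat = outer(sum_j inner_j(x_j)) whose univariate primitives are chosen by random single-node mutation (a simplified stand-in for the paper's genetic programming search) and whose parameters are fit by gradient descent against the black-box's outputs. The primitive set, the Node and Metamodel classes, the toy black box, and all hyperparameters are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch (not the authors' code) of the memetic idea: a small
# KST-style tree y_hat = outer( sum_j inner_j(x_j) ) whose univariate
# primitives are picked by random mutation (a stand-in for genetic
# programming) and whose parameters are trained by gradient descent.
import copy
import torch

PRIMITIVES = {                      # parameterized univariate building blocks (assumed set)
    "linear":  lambda z: z,
    "sin":     torch.sin,
    "sigmoid": torch.sigmoid,
    "exp":     lambda z: torch.exp(torch.clamp(z, max=5.0)),
}

class Node:
    """One univariate primitive g(z) = a * f(b * z + c) with trainable a, b, c."""
    def __init__(self, name):
        self.name = name
        self.params = torch.nn.Parameter(torch.randn(3) * 0.1)

    def __call__(self, z):
        a, b, c = self.params
        return a * PRIMITIVES[self.name](b * z + c)

class Metamodel(torch.nn.Module):
    """Depth-2 KST-style tree: one inner primitive per feature, one outer primitive."""
    def __init__(self, n_features):
        super().__init__()
        names = list(PRIMITIVES)
        self.inner = [Node(names[i % len(names)]) for i in range(n_features)]
        self.outer = Node("linear")
        for i, node in enumerate(self.inner + [self.outer]):
            self.register_parameter(f"p{i}", node.params)

    def forward(self, X):
        z = sum(node(X[:, j]) for j, node in enumerate(self.inner))
        return self.outer(z)

def fit_params(model, X, y, steps=300, lr=0.05):
    """Inner loop: gradient descent on the primitives' parameters."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((model(X) - y) ** 2)
        loss.backward()
        opt.step()
    return loss.item()

def evolve(black_box, X, generations=10):
    """Outer loop: mutate which primitive sits at a node, keep improvements."""
    y = black_box(X)
    best = Metamodel(X.shape[1])
    best_loss = fit_params(best, X, y)
    for _ in range(generations):
        cand = copy.deepcopy(best)
        node = cand.inner[torch.randint(len(cand.inner), (1,)).item()]
        node.name = list(PRIMITIVES)[torch.randint(len(PRIMITIVES), (1,)).item()]
        cand_loss = fit_params(cand, X, y)
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss

if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.rand(256, 3)
    black_box = lambda X: torch.sin(3 * X[:, 0]) + 0.5 * X[:, 1]  # toy "black box" to explain
    model, mse = evolve(black_box, X)
    print("metamodel approximation MSE:", mse)
```
In the paper, the genetic programming operators act on richer tree configurations and primitive choices; the sketch only illustrates the division of labor between structural search and gradient-based parameter fitting.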
Related papers
- Explaining Datasets in Words: Statistical Models with Natural Language Parameters [66.69456696878842]
We introduce a family of statistical models -- including clustering, time series, and classification models -- parameterized by natural language predicates.
We apply our framework to a wide range of problems: taxonomizing user chat dialogues, characterizing how they evolve over time, and finding categories where one language model outperforms another.
arXiv Detail & Related papers (2024-09-13T01:40:20Z)
- Interpretability in Symbolic Regression: a benchmark of Explanatory Methods using the Feynman data set [0.0]
The interpretability of machine learning models plays a role as important as model accuracy.
This paper proposes a benchmark scheme to evaluate explanatory methods to explain regression models.
Results have shown that Symbolic Regression models can be an interesting alternative to white-box and black-box models.
arXiv Detail & Related papers (2024-04-08T23:46:59Z)
- Learning Green's Function Efficiently Using Low-Rank Approximations [44.46178415547532]
A practical limitation of using deep learning for the Green's function is the repeated, computationally expensive Monte Carlo approximation of integrals.
We propose to learn the Green's function by low-rank decomposition, which results in a novel architecture to remove redundant computations.
arXiv Detail & Related papers (2023-08-01T07:43:46Z)
- Can Explanations Be Useful for Calibrating Black Box Models? [31.473798197405948]
We study how to improve a black box model's performance on a new domain given examples from the new domain.
Our approach first extracts a set of features combining human intuition about the task with model attributions.
We show that the calibration features transfer to some extent between tasks and shed light on how to effectively use them.
arXiv Detail & Related papers (2021-10-14T17:48:16Z)
- Differentiable Spline Approximations [48.10988598845873]
Differentiable programming has significantly enhanced the scope of machine learning.
Standard differentiable programming methods (such as autodiff) typically require that the machine learning models be differentiable.
We show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications.
arXiv Detail & Related papers (2021-10-04T16:04:46Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and which parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- High-dimensional Functional Graphical Model Structure Learning via Neighborhood Selection Approach [15.334392442475115]
We propose a neighborhood selection approach to estimate the structure of functional graphical models.
We thus circumvent the need for a well-defined precision operator that may not exist when the functions are infinite dimensional.
arXiv Detail & Related papers (2021-05-06T07:38:50Z)
- Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z)
- Interpretable Machine Learning with an Ensemble of Gradient Boosting Machines [5.482532589225552]
The method is based on an ensemble of gradient boosting machines (GBMs).
Numerical experiments on synthetic and real datasets demonstrate the efficiency of the proposed method and its properties for local and global interpretation.
arXiv Detail & Related papers (2020-10-14T20:18:40Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions that evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- The data-driven physical-based equations discovery using evolutionary approach [77.34726150561087]
We describe the algorithm for the mathematical equations discovery from the given observations data.
The algorithm combines genetic programming with sparse regression (a simplified sketch of the sparse-regression step follows this list).
It could be used for governing analytical equation discovery as well as for partial differential equations (PDE) discovery.
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
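As a complement to that last entry, here is a minimal, self-contained illustration of the sparse-regression half of the idea: a library of candidate terms is pruned down to a compact equation by Lasso. The enumerated monomial library, the toy data, and the penalty value are assumptions for illustration; in the paper, genetic programming proposes and evolves the candidate terms rather than enumerating them.
```python
# Illustrative sketch, not the paper's algorithm: couple a library of candidate
# terms with sparse (Lasso) regression. Here the library simply enumerates all
# monomials up to degree 2; in the paper, genetic programming proposes and
# evolves the candidate terms (including derivative terms for PDE discovery).
from itertools import product

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Toy observations generated from a hidden "governing equation" plus noise.
x = rng.uniform(-2, 2, size=(400, 2))
y = 1.5 * x[:, 0] ** 2 - 0.8 * x[:, 0] * x[:, 1] + 0.1 * rng.normal(size=400)

# Candidate library: every monomial x0^a * x1^b with a, b in {0, 1, 2}.
powers = list(product(range(3), repeat=x.shape[1]))
library = np.column_stack([np.prod(x ** np.array(p), axis=1) for p in powers])

# Sparse regression prunes the library down to a compact equation.
coef = Lasso(alpha=0.05, max_iter=50_000).fit(library, y).coef_
for p, c in zip(powers, coef):
    if abs(c) > 1e-2:  # keep only terms that survive the sparsity penalty
        term = "*".join(f"x{i}^{e}" for i, e in enumerate(p) if e > 0) or "1"
        print(f"{c:+.2f} * {term}")
```
Swapping the enumerated library for evolved candidate expressions is the part the evolutionary component contributes in the cited work.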