Applying Genetic Programming to Improve Interpretability in Machine
Learning Models
- URL: http://arxiv.org/abs/2005.09512v1
- Date: Mon, 18 May 2020 16:09:49 GMT
- Title: Applying Genetic Programming to Improve Interpretability in Machine
Learning Models
- Authors: Leonardo Augusto Ferreira, Frederico Gadelha Guimarães and
Rodrigo Silva
- Abstract summary: We propose a Genetic Programming (GP) based approach, named Genetic Programming Explainer (GPX).
The method generates a noise set located in the neighborhood of the point of interest, whose prediction should be explained, and fits a local explanation model for the analyzed sample.
Our results indicate that GPX is able to produce a more accurate understanding of complex models than the state of the art.
- Score: 0.3908287552267639
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable Artificial Intelligence (or xAI) has become an important research
topic in the fields of Machine Learning and Deep Learning. In this paper, we
propose a Genetic Programming (GP) based approach, named Genetic Programming
Explainer (GPX), to the problem of explaining decisions computed by AI systems.
The method generates a noise set located in the neighborhood of the point of
interest, whose prediction should be explained, and fits a local explanation
model for the analyzed sample. The tree structure generated by GPX provides a
comprehensible analytical, possibly non-linear, symbolic expression which
reflects the local behavior of the complex model. We considered three machine
learning techniques that can be recognized as complex black-box models: Random
Forest, Deep Neural Network and Support Vector Machine, on twenty data sets for
regression and classification problems. Our results indicate that GPX is
able to produce a more accurate understanding of complex models than the state of
the art. The results validate the proposed approach as a novel way to deploy GP
to improve interpretability.
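The neighborhood-sampling idea behind GPX can be sketched in a few lines. In this hypothetical sketch, a quadratic least-squares surrogate stands in for the symbolic GP tree that GPX actually evolves, and `black_box` is a placeholder for the complex model being explained:

```python
import numpy as np

def black_box(X):
    # stand-in for a complex model's prediction function (hypothetical)
    return np.sin(X[:, 0]) + X[:, 1] ** 2

def local_explain(f, x0, n_samples=500, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # 1. generate a noise set in the neighborhood of the point of interest
    X = x0 + rng.normal(0.0, scale, size=(n_samples, x0.size))
    y = f(X)
    # 2. fit a local surrogate to the black-box outputs; GPX evolves a
    #    symbolic GP expression here, a quadratic fit stands in for that step
    Z = np.column_stack([np.ones(n_samples), X, X ** 2])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return coef  # [bias, linear terms, quadratic terms]

x0 = np.array([0.5, -1.0])
coef = local_explain(black_box, x0)
```

Because the surrogate is fit only on samples near `x0`, its coefficients describe the local behavior of the black box rather than a global approximation.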
Related papers
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
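For the special case of a linear embedding, the second-order decomposition that BiLRP computes has a closed form, which the following hypothetical sketch illustrates (the embedding `E` and the two inputs are random placeholders; real BiLRP propagates relevance through deep network branches):

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(4, 6))          # placeholder linear embedding
x, y = rng.normal(size=6), rng.normal(size=6)

# similarity of the two inputs in embedding space: <Ex, Ey> = x^T (E^T E) y
sim = (E @ x) @ (E @ y)

# second-order attribution: R[i, j] is the contribution of the
# feature interaction (x_i, y_j) to the similarity score
R = np.outer(x, y) * (E.T @ E)
```

The decomposition is conservative: summing `R` over all feature pairs recovers the similarity score exactly.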
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- Explaining Genetic Programming Trees using Large Language Models [2.909922147268382]
Genetic programming (GP) has the potential to generate explainable results, especially when used for dimensionality reduction.
In this research, we investigate the potential of leveraging eXplainable AI (XAI) and large language models (LLMs) to improve the interpretability of GP-based non-linear dimensionality reduction.
arXiv Detail & Related papers (2024-03-06T01:38:42Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
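The reduction that such a solver builds on can be illustrated on a scalar ODE: discretizing dy/dt = a·y turns the equation into linear constraints on the trajectory, which a (relaxed) linear program can satisfy. In this sketch a least-squares solve stands in for the LP step, which is an assumption for brevity:

```python
import numpy as np

# Discretize dy/dt = a*y with y(0) = 1 on [0, 1] by forward differences,
# giving one linear constraint per grid step in the unknowns y_0..y_n.
n, a = 100, -2.0
h = 1.0 / n
A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
A[0, 0] = 1.0
b[0] = 1.0                          # initial condition y(0) = 1
for i in range(n):
    # (y[i+1] - y[i]) / h = a * y[i]  ->  y[i+1] - (1 + a*h) * y[i] = 0
    A[i + 1, i] = -(1.0 + a * h)
    A[i + 1, i + 1] = 1.0
y = np.linalg.lstsq(A, b, rcond=None)[0]
```

The recovered trajectory tracks the analytic solution exp(a·t), showing that solving the ODE has been reduced to solving one linear system of constraints.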
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Topological structure of complex predictions [15.207535648404765]
Complex prediction models, such as deep neural networks, are produced by fitting machine learning or AI models to a set of training data.
We use topological data analysis to transform these complex prediction models into pictures representing a topological view.
The methods scale up to large datasets across different domains and enable us to detect labeling errors in training data, understand generalization in image classification, and inspect predictions of likely pathogenic mutations in the BRCA1 gene.
arXiv Detail & Related papers (2022-07-28T19:28:05Z)
- Gradient-based explanations for Gaussian Process regression and
classification models [0.0]
Gaussian Processes (GPs) have proven themselves as a reliable and effective method in probabilistic Machine Learning.
Thanks to recent and current advances, modeling complex data with GPs is becoming more and more feasible.
We see an increasing interest in so-called explainable approaches - methods that aim to make a Machine Learning model's decision process transparent to humans.
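For GP regression with an RBF kernel, the posterior-mean gradient is available in closed form; this is the kind of quantity gradient-based explanation methods attribute to input features. A minimal sketch, with toy data and a unit lengthscale as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))        # toy training inputs
y = np.sin(X[:, 0])                 # toy targets
ell, noise = 1.0, 1e-2              # assumed hyperparameters

def rbf(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

# posterior mean is m(x) = k(x, X) @ alpha with alpha = (K + noise*I)^-1 y
alpha = np.linalg.solve(rbf(X, X) + noise * np.eye(len(X)), y)

def mean_grad(x):
    k = rbf(x[None, :], X)[0]                    # k(x, x_i) for each x_i
    # d k(x, x_i) / dx = -(x - x_i) / ell^2 * k(x, x_i)
    dk = -(x[None, :] - X) / ell ** 2 * k[:, None]
    return dk.T @ alpha                          # gradient, shape (3,)

g = mean_grad(np.zeros(3))
```

The gradient's components rank the input features by local influence on the GP's prediction, here around the origin.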
arXiv Detail & Related papers (2022-05-25T14:11:00Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Less is More: A Call to Focus on Simpler Models in Genetic Programming
for Interpretable Machine Learning [1.0323063834827415]
Interpretability can be critical for the safe and responsible use of machine learning models in high-stakes applications.
We argue that research in GP for IML needs to focus on searching in the space of low-complexity models.
arXiv Detail & Related papers (2022-04-05T08:28:07Z)
- Tree-based local explanations of machine learning model predictions,
AraucanaXAI [2.9660372210786563]
A tradeoff between performance and intelligibility is often to be faced, especially in high-stakes applications like medicine.
We propose a novel methodological approach for generating explanations of the predictions of a generic ML model.
arXiv Detail & Related papers (2021-10-15T17:39:19Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging the random feature-based approximation to perform online prediction and model update with scalability, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
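The data-adaptive weighting can be sketched with a Hedge-style multiplicative update; the three toy experts below are illustrative stand-ins for GP learners with different kernels, and the learning rate and squared-error loss are assumptions, not the paper's exact update:

```python
import numpy as np

rng = np.random.default_rng(0)
experts = [lambda x: x, lambda x: x ** 2, lambda x: np.sin(x)]  # toy "experts"
w = np.ones(len(experts)) / len(experts)   # ensemble weights
eta = 2.0                                  # assumed learning rate

for t in range(500):
    x = rng.uniform(-1.0, 1.0)
    y = np.sin(x)                          # stream of (x, y) observations
    preds = np.array([f(x) for f in experts])
    y_hat = w @ preds                      # meta-learner's synthesized prediction
    # multiplicative update: experts with larger squared error lose weight
    w *= np.exp(-eta * (preds - y) ** 2)
    w /= w.sum()
```

Weight mass concentrates on the expert whose predictions match the data stream, mirroring how the meta-learner reweights its per-expert predictions online.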
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
- Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z)
- The data-driven physical-based equations discovery using evolutionary
approach [77.34726150561087]
We describe the algorithm for the mathematical equations discovery from the given observations data.
The algorithm combines genetic programming with sparse regression.
It can be used for the discovery of governing analytical equations as well as partial differential equations (PDEs).
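The sparse-regression half of such a pipeline can be sketched as sequentially thresholded least squares over a library of candidate terms. In this sketch the library is written by hand, whereas in the paper genetic programming proposes the candidate structures; the data and threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=200)
dx = 0.5 * x - 1.5 * x ** 3            # "observed" derivative dx/dt

# candidate terms the regression chooses among
library = np.column_stack([np.ones_like(x), x, x ** 2, x ** 3])
names = ["1", "x", "x^2", "x^3"]

coef = np.linalg.lstsq(library, dx, rcond=None)[0]
for _ in range(5):
    # zero out small coefficients, then refit on the surviving terms
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    big = ~small
    coef[big] = np.linalg.lstsq(library[:, big], dx, rcond=None)[0]
```

On this noiseless toy stream the fit recovers the governing equation dx/dt = 0.5·x − 1.5·x³ exactly, with the spurious library terms driven to zero.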
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.