Multi-modal multi-objective model-based genetic programming to find
multiple diverse high-quality models
- URL: http://arxiv.org/abs/2203.13347v1
- Date: Thu, 24 Mar 2022 21:35:07 GMT
- Title: Multi-modal multi-objective model-based genetic programming to find
multiple diverse high-quality models
- Authors: E.M.C. Sijben, T. Alderliesten and P.A.N. Bosman
- Abstract summary: Genetic programming (GP) is often cited as being uniquely well-suited to contribute to explainable artificial intelligence (XAI). In this paper, we achieve exactly this with a novel multi-modal multi-tree multi-objective GP approach that extends a modern model-based GP algorithm known as GP-GOMEA.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable artificial intelligence (XAI) is an important and rapidly
expanding research topic. The goal of XAI is to gain trust in a machine
learning (ML) model through clear insights into how the model arrives at its
predictions. Genetic programming (GP) is often cited as being uniquely
well-suited to contribute to XAI because of its capacity to learn (small)
symbolic models that have the potential to be interpreted. Nevertheless, like
many ML algorithms, GP typically results in a single best model. However, in
practice, the best model in terms of training error may well not be the most
suitable one as judged by a domain expert for various reasons, including
overfitting, multiple different models existing that have similar accuracy, and
unwanted errors on particular data points due to typical accuracy measures like
mean squared error. Hence, to increase chances that domain experts deem a
resulting model plausible, it becomes important to be able to explicitly search
for multiple, diverse, high-quality models that trade-off different meanings of
accuracy. In this paper, we achieve exactly this with a novel multi-modal
multi-tree multi-objective GP approach that extends a modern model-based GP
algorithm known as GP-GOMEA that is already effective at searching for small
expressions.
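The core idea of the abstract — explicitly searching for multiple diverse models that trade off different notions of accuracy — rests on multi-objective selection. As a minimal illustration (not the GP-GOMEA algorithm itself; the candidate expressions and scores below are hypothetical), here is how a Pareto front of symbolic models trading off error against expression size can be extracted:

```python
# Minimal sketch of Pareto-front extraction over candidate symbolic models,
# trading off training error against expression size (both to be minimized).
# This illustrates the multi-objective selection idea only; it is NOT the
# paper's GP-GOMEA-based method, and all candidates/scores are made up.

def pareto_front(candidates):
    """Return the candidates not dominated in (error, size)."""
    front = []
    for c in candidates:
        dominated = any(
            o["error"] <= c["error"] and o["size"] <= c["size"]
            and (o["error"] < c["error"] or o["size"] < c["size"])
            for o in candidates
        )
        if not dominated:
            front.append(c)
    return front

models = [
    {"expr": "x0 + x1",         "error": 0.40, "size": 3},
    {"expr": "x0*x1 + x2",      "error": 0.25, "size": 5},
    {"expr": "sin(x0) + x1*x2", "error": 0.24, "size": 7},
    {"expr": "x0",              "error": 0.90, "size": 1},
    {"expr": "x0 + x1 + 0",     "error": 0.40, "size": 5},  # dominated
]

front = pareto_front(models)
print([m["expr"] for m in front])
# ['x0 + x1', 'x0*x1 + x2', 'sin(x0) + x1*x2', 'x0']
```

Each surviving model is a distinct trade-off a domain expert could inspect, which is the motivation the abstract gives for searching for multiple diverse high-quality models rather than a single best one.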
Related papers
- On Arbitrary Predictions from Equally Valid Models [49.56463611078044]
Model multiplicity refers to multiple machine learning models that admit conflicting predictions for the same patient. We show that even small ensembles can mitigate or eliminate predictive multiplicity in practice.
arXiv Detail & Related papers (2025-07-25T16:15:59Z) - Domain-specific or Uncertainty-aware models: Does it really make a difference for biomedical text classification? [4.741884506444161]
We discuss how domain specificity and uncertainty awareness can be combined to produce reasonable estimates of a model's own uncertainty.
We find that domain specificity and uncertainty awareness can often be successfully combined, but the exact task at hand weighs in much more strongly.
arXiv Detail & Related papers (2024-07-17T14:52:46Z) - On Least Square Estimation in Softmax Gating Mixture of Experts [78.3687645289918]
We investigate the performance of the least squares estimators (LSE) under a deterministic MoE model.
We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions.
Our findings have important practical implications for expert selection.
arXiv Detail & Related papers (2024-02-05T12:31:18Z) - Multifamily Malware Models [5.414308305392762]
We conduct experiments based on byte $n$-gram features to quantify the relationship between the generality of the training dataset and the accuracy of the corresponding machine learning models.
We find that neighborhood-based algorithms generalize surprisingly well, far outperforming the other machine learning techniques considered.
arXiv Detail & Related papers (2022-06-27T13:06:31Z) - DIME: Fine-grained Interpretations of Multimodal Models via Disentangled
Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z) - Tree-based local explanations of machine learning model predictions,
AraucanaXAI [2.9660372210786563]
A tradeoff between performance and intelligibility is often to be faced, especially in high-stakes applications like medicine.
We propose a novel methodological approach for generating explanations of the predictions of a generic ML model.
arXiv Detail & Related papers (2021-10-15T17:39:19Z) - Model-agnostic multi-objective approach for the evolutionary discovery
of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z) - Plausible Counterfactuals: Auditing Deep Learning Classifiers with
Realistic Adversarial Examples [84.8370546614042]
The black-box nature of deep learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z) - Learning Gaussian Graphical Models via Multiplicative Weights [54.252053139374205]
We adapt an algorithm of Klivans and Meka based on the method of multiplicative weight updates.
The algorithm enjoys a sample complexity bound that is qualitatively similar to others in the literature.
It has a low runtime $O(mp^2)$ in the case of $m$ samples and $p$ nodes, and can trivially be implemented in an online manner.
arXiv Detail & Related papers (2020-02-20T10:50:58Z)
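The last entry adapts the method of multiplicative weight updates. As a minimal sketch of that generic primitive — the classic prediction-with-expert-advice form, not the Klivans–Meka graphical-model learner itself (the losses and learning rate below are illustrative) — each expert's weight is multiplied by a factor exponentially decaying in its loss each round:

```python
# Minimal sketch of the multiplicative weight updates (MWU) method in its
# classic prediction-with-expert-advice form. This is the generic primitive,
# NOT the Gaussian graphical model learner; losses and eta are illustrative.
import math

def mwu(losses_per_round, eta=0.5):
    """Multiply each expert's weight by exp(-eta * loss) every round,
    then normalize to a distribution over experts."""
    n = len(losses_per_round[0])
    w = [1.0] * n
    for losses in losses_per_round:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    total = sum(w)
    return [wi / total for wi in w]

# Expert 0 incurs consistently lower loss, so it accumulates most weight.
rounds = [[0.0, 1.0], [0.0, 1.0], [0.1, 0.9]]
dist = mwu(rounds)
print(dist[0] > dist[1])  # True
```

Because each round only reweights and normalizes, the update naturally runs in an online manner, matching the entry's remark above.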
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.