Diverse Explanations from Data-driven and Domain-driven Perspectives for
Machine Learning Models
- URL: http://arxiv.org/abs/2402.00347v1
- Date: Thu, 1 Feb 2024 05:28:28 GMT
- Authors: Sichao Li and Amanda Barnard
- Abstract summary: Explanations of machine learning models are important, especially in scientific areas such as chemistry, biology, and physics.
This paper calls attention to inconsistencies due to accurate yet misleading machine learning models and various stakeholders with specific needs, wants, or aims.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Explanations of machine learning models are important, especially in
scientific areas such as chemistry, biology, and physics, where they guide
future laboratory experiments and resource requirements. These explanations can
be derived from well-trained machine learning models (data-driven perspective)
or specific domain knowledge (domain-driven perspective). However, there exist
inconsistencies between these perspectives due to accurate yet misleading
machine learning models and various stakeholders with specific needs, wants, or
aims. This paper calls attention to these inconsistencies and suggests a way to
find an accurate model with expected explanations that reinforce physical laws
and meet stakeholders' requirements from a set of equally-good models, also
known as Rashomon sets. Our goal is to foster a comprehensive understanding of
these inconsistencies and ultimately contribute to the integration of
eXplainable Artificial Intelligence (XAI) into scientific domains.
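The idea of selecting among equally-good models can be made concrete. Below is a minimal sketch, not the paper's algorithm, with synthetic data and all thresholds assumed: build a small "Rashomon set" of near-optimal ridge regressors, then use domain knowledge to pick the member whose explanation (here, raw coefficients) matches a physical expectation.

```python
import numpy as np

# Synthetic data: y depends on features 0 and 1; feature 2 is a decoy
# correlated with feature 0, so models can trade weight between them.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=200)
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=200)

def ridge(alpha):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

# Equally-good candidates obtained by varying the regularization strength.
candidates = [ridge(a) for a in (0.01, 0.1, 1.0, 10.0, 100.0)]
losses = [mse(w) for w in candidates]

# Rashomon set: every model within a small tolerance of the best loss.
eps = 0.05
best = min(losses)
rashomon = [w for w, l in zip(candidates, losses) if l <= (1 + eps) * best]

# Domain-driven selection: suppose domain knowledge says the decoy feature
# should carry no weight; prefer the member that respects that constraint.
chosen = min(rashomon, key=lambda w: abs(w[2]))
print(np.round(chosen, 2))
```

All members of the set fit the data about equally well, so the choice among them is driven entirely by the domain-side criterion rather than by accuracy.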
Related papers
- Physics-Inspired Interpretability Of Machine Learning Models [0.0]
The ability to explain decisions made by machine learning models remains one of the most significant hurdles towards widespread adoption of AI.
We propose a novel approach to identify relevant features of the input data, inspired by methods from the energy landscapes field.
arXiv Detail & Related papers (2023-04-05T11:35:17Z)
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
You are instead given access to a set of expert models and their predictions, alongside some limited information about the dataset used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
- Interpretable and Explainable Machine Learning for Materials Science and Chemistry [2.2175470459999636]
We summarize applications of interpretability and explainability techniques for materials science and chemistry.
We discuss various challenges for interpretable machine learning in materials science and, more broadly, in scientific settings.
We showcase a number of exciting developments in other fields that could benefit interpretability in material science and chemistry problems.
arXiv Detail & Related papers (2021-11-01T15:40:36Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and to determine which parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
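The selection step behind multi-objective model discovery can be illustrated with a short sketch. The numbers below are assumed, not taken from the paper: candidate models trade off prediction error against complexity, and only the Pareto-optimal ones survive.

```python
import numpy as np

# Each row: (prediction error, model complexity) for a candidate model.
# Both objectives are minimized.
objectives = np.array([
    [0.10, 8.0],   # accurate but complex
    [0.30, 3.0],   # simple but less accurate
    [0.12, 9.0],   # dominated by the first candidate
    [0.20, 5.0],   # a useful trade-off
    [0.35, 6.0],   # dominated by the fourth candidate
])

def pareto_front(obj):
    """Indices of candidates no other candidate dominates in both objectives."""
    keep = []
    for i, a in enumerate(obj):
        dominated = any(
            np.all(b <= a) and np.any(b < a)
            for j, b in enumerate(obj) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

print(pareto_front(objectives))   # -> [0, 1, 3]
```

An evolutionary loop would repeatedly mutate candidates and apply this dominance filter; the sketch shows only the filtering step.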
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Importance measures derived from random forests: characterisation and extension [0.2741266294612776]
This thesis aims at improving the interpretability of models built by a specific family of machine learning algorithms.
Several mechanisms have been proposed to interpret these models, and throughout this thesis we aim to improve their understanding.
arXiv Detail & Related papers (2021-06-17T13:23:57Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them because their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial amount of methods for providing interpretable explanations to machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
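A toy sketch of the underlying idea follows; it is not the paper's method (the model, data, and hyperparameters are all assumed, and the diversity term is a simple repulsion stand-in for a learned diversity-enforcing loss). It generates several counterfactual perturbations for a linear classifier by gradient ascent on the class score while pushing the perturbations apart so each suggests a different change.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.array([1.0, -2.0])            # assumed model: positive iff w @ x > 0
x = np.array([-0.5, 0.5])            # score = -1.5, so classified negative

K, steps, lr = 3, 400, 0.05
mu, lam = 0.5, 0.5                   # norm penalty vs. diversity strength
deltas = 0.1 * rng.normal(size=(K, 2))

for _ in range(steps):
    for i in range(K):
        g = w - 2.0 * mu * deltas[i]                  # raise score, stay small
        for j in range(K):
            if j != i:
                r = deltas[i] - deltas[j]
                g += 2.0 * lam * np.exp(-r @ r) * r   # repel similar deltas
        deltas[i] = deltas[i] + lr * g

# Each perturbation should flip the prediction, in its own direction.
flipped = [float(w @ (x + d)) > 0 for d in deltas]
```

The repulsion decays with distance, so the perturbations settle into distinct, stable positions instead of collapsing onto one counterfactual.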
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Differential Replication in Machine Learning [0.90238471756546]
We propose a solution based on reusing the knowledge acquired by already-deployed machine learning models.
This is the idea behind differential replication of machine learning models.
arXiv Detail & Related papers (2020-07-15T20:26:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.