Designing Explainable Predictive Machine Learning Artifacts: Methodology and Practical Demonstration
- URL: http://arxiv.org/abs/2306.11771v1
- Date: Tue, 20 Jun 2023 15:11:26 GMT
- Title: Designing Explainable Predictive Machine Learning Artifacts: Methodology and Practical Demonstration
- Authors: Giacomo Welsch, Peter Kowalczyk
- Abstract summary: Decision-makers from companies across various industries are still largely reluctant to employ applications based on modern machine learning algorithms.
We ascribe this issue to the widely held view of advanced machine learning algorithms as "black boxes".
We develop a methodology which unifies methodological knowledge from design science research and predictive analytics with state-of-the-art approaches to explainable artificial intelligence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prediction-oriented machine learning is becoming increasingly valuable to
organizations, as it may drive applications in crucial business areas. However,
decision-makers from companies across various industries are still largely
reluctant to employ applications based on modern machine learning algorithms.
We ascribe this issue to the widely held view of advanced machine learning
algorithms as "black boxes" whose complexity does not allow for uncovering the
factors that drive the output of a corresponding system. To help overcome this
adoption barrier, we argue that research in information systems
should devote more attention to the design of prototypical prediction-oriented
machine learning applications (i.e., artifacts) whose predictions can be
explained to human decision-makers. However, despite the recent emergence of a
variety of tools that facilitate the development of such artifacts, there has
so far been little research on their development. We attribute this research
gap to the lack of methodological guidance to support the creation of these
artifacts. For this reason, we develop a methodology which unifies
methodological knowledge from design science research and predictive analytics
with state-of-the-art approaches to explainable artificial intelligence.
Moreover, we showcase the methodology using the example of price prediction in
the sharing economy (i.e., on Airbnb).
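The abstract itself ships no code; the sketch below is a hedged illustration of the kind of explainable prediction artifact it describes: a gradient-boosted price model over made-up Airbnb-style listing features, explained post hoc with the open-source `shap` library. All feature names and data are invented for illustration.

```python
# Hedged sketch: a gradient-boosted price model over made-up Airbnb-style
# features, explained post hoc with SHAP. Nothing here is from the paper.
import numpy as np
import pandas as pd
import shap  # open-source explainable-AI library
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
listings = pd.DataFrame({
    "accommodates": rng.integers(1, 9, n),
    "bedrooms": rng.integers(0, 5, n),
    "dist_to_center_km": rng.uniform(0.5, 15.0, n),
    "review_score": rng.uniform(3.0, 5.0, n),
})
# Synthetic nightly price driven by the features above plus noise.
price = (30 + 25 * listings["accommodates"] + 15 * listings["bedrooms"]
         - 4 * listings["dist_to_center_km"] + 10 * listings["review_score"]
         + rng.normal(0, 10, n))

X_train, X_test, y_train, y_test = train_test_split(listings, price,
                                                    random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Per-prediction feature attributions: the part surfaced to decision-makers.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X_test)
print("predicted price:", model.predict(X_test.iloc[[0]])[0])
print("attributions:", dict(zip(X_test.columns, attributions[0])))
```

In an artifact of this kind, the per-feature attributions would be shown to a human decision-maker alongside the predicted price, addressing the "black box" concern the abstract raises.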
Related papers
- Ontology Embedding: A Survey of Methods, Applications and Resources [54.3453925775069]
Ontologies are widely used for representing domain knowledge and meta data.
One straightforward solution is to integrate statistical analysis and machine learning.
Numerous papers have been published on embedding, but a lack of systematic reviews hinders researchers from gaining a comprehensive understanding of this field.
arXiv Detail & Related papers (2024-06-16T14:49:19Z)
- Interpretable and Explainable Machine Learning Methods for Predictive Process Monitoring: A Systematic Literature Review [1.3812010983144802]
This paper presents a systematic review on the explainability and interpretability of machine learning (ML) models within the context of predictive process mining.
We provide a comprehensive overview of the current methodologies and their applications across various application domains.
Our findings aim to equip researchers and practitioners with a deeper understanding of how to develop and implement more trustworthy, transparent, and effective intelligent systems for process analytics.
arXiv Detail & Related papers (2023-12-29T12:43:43Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
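As a rough, hedged illustration of the core mechanism (not of the specific models surveyed in the paper), the sketch below runs predictive-coding-style inference for a single linear generative layer: the latent estimate is nudged iteratively to reduce the prediction error on the observed signal. All sizes and constants are arbitrary.

```python
# Minimal sketch of predictive-coding inference for one linear generative
# layer (y ~ W @ x); sizes and constants are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))   # generative weights: latent -> observation
y = W @ rng.normal(size=4)    # observed signal produced by a true latent

x = np.zeros(4)               # latent estimate, refined iteratively
lr = 0.02
for _ in range(500):
    eps = y - W @ x                    # prediction error at the observation
    x += lr * (W.T @ eps - 0.01 * x)   # descend on error; weak prior on x

print("reconstruction error:", np.linalg.norm(y - W @ x))
```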
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Interpretable Machine Learning for Discovery: Statistical Challenges & Opportunities [1.2891210250935146]
We discuss and review the field of interpretable machine learning.
We outline the types of discoveries that can be made using interpretable machine learning.
We focus on the grand challenge of how to validate these discoveries in a data-driven manner.
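One simple data-driven check of such a discovery (an illustrative sketch, not a method from the paper) is whether the features an interpretable model ranks highest remain stable across bootstrap resamples:

```python
# Illustrative check (not the paper's method): do the top-ranked features of
# an interpretable model stay stable across bootstrap resamples?
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=10, n_informative=3,
                       random_state=0)
rng = np.random.default_rng(0)
top_counts = np.zeros(X.shape[1])
for _ in range(20):
    idx = rng.integers(0, len(X), len(X))   # bootstrap resample
    rf = RandomForestRegressor(n_estimators=50, random_state=0)
    rf.fit(X[idx], y[idx])
    top_counts[np.argsort(rf.feature_importances_)[-3:]] += 1  # top-3 features

print("selection frequency per feature:", top_counts / 20)
```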
arXiv Detail & Related papers (2023-08-02T23:57:31Z)
- Machine Unlearning: A Survey [56.79152190680552]
A special need has arisen where, due to privacy, usability, and/or the right to be forgotten, information about specific samples must be removed from a trained model; this is called machine unlearning.
This emerging technology has drawn significant interest from both academics and industry due to its innovation and practicality.
No study has analyzed this complex topic or compared the feasibility of existing unlearning solutions in different kinds of scenarios.
The survey concludes by highlighting some of the outstanding issues with unlearning techniques, along with some feasible directions for new research opportunities.
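For orientation, the sketch below shows the naive exact-unlearning baseline that the surveyed approximate methods try to beat: retraining from scratch on the dataset minus the forgotten samples. Data and indices are illustrative.

```python
# Sketch of the naive exact-unlearning baseline: retrain from scratch on the
# dataset minus the forgotten samples. Data and indices are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

forget = np.array([3, 17, 42])                   # samples to be removed
keep = np.setdiff1d(np.arange(len(X)), forget)
unlearned = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

# The retrained model provably contains no trace of the forgotten samples;
# approximate unlearning aims for this state without the full retrain.
print("weights changed:", not np.allclose(model.coef_, unlearned.coef_))
```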
arXiv Detail & Related papers (2023-06-06T10:18:36Z)
- Explainable AI Enabled Inspection of Business Process Prediction Models [2.5229940062544496]
We present an approach that uses model explanations to investigate the reasoning applied by machine-learned prediction models.
A novel contribution of our approach is the proposal of model inspection that leverages both the explanations generated by interpretable machine learning mechanisms and the contextual or domain knowledge extracted from event logs that record historical process execution.
arXiv Detail & Related papers (2021-07-16T06:51:18Z)
- Importance measures derived from random forests: characterisation and extension [0.2741266294612776]
This thesis aims to improve the interpretability of models built by a specific family of machine learning algorithms.
Several mechanisms have been proposed to interpret these models, and throughout this thesis we aim to improve their understanding.
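Two of the most common random-forest importance measures can be contrasted in a few lines of scikit-learn; this is a generic illustration, not code from the thesis:

```python
# Generic illustration (not code from the thesis) contrasting two common
# random-forest importance measures on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=400, n_features=8, n_informative=3,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Mean decrease in impurity (MDI): a by-product of training itself.
print("MDI importances:        ", rf.feature_importances_)
# Permutation importance: performance drop when one feature is shuffled.
perm = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
print("Permutation importances:", perm.importances_mean)
```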
arXiv Detail & Related papers (2021-06-17T13:23:57Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
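One widely used family of individual-explanation methods fits a local surrogate around the instance to be explained (LIME-style); the sketch below is a from-scratch illustration, not any specific surveyed tool:

```python
# From-scratch sketch of a LIME-style local surrogate: fit a proximity-
# weighted linear model on perturbations around the instance to explain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                               # instance to explain
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=0.5, size=(500, X.shape[1]))  # local perturbations
p = black_box.predict_proba(Z)[:, 1]                    # black-box outputs
w = np.exp(-np.linalg.norm(Z - x0, axis=1) ** 2)        # proximity weights

surrogate = Ridge().fit(Z, p, sample_weight=w)
print("local feature effects:", surrogate.coef_)  # the individual explanation
```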
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Counterfactual Explanations for Machine Learning: A Review [5.908471365011942]
We review and categorize research on counterfactual explanations in machine learning.
Modern approaches to counterfactual explainability in machine learning draw connections to the established legal doctrine in many countries.
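For a linear model the minimal counterfactual has a closed form, which makes for a compact illustration; general counterfactual methods optimize a similar distance-plus-validity objective for arbitrary models. The data here is synthetic.

```python
# Illustrative counterfactual for a linear model: the smallest L2 change that
# flips the prediction moves the point along the weight vector just across
# the decision boundary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

x = X[clf.predict(X) == 0][0]   # an instance currently predicted as class 0
margin = 0.1                    # push slightly past the boundary
cf = x + ((margin - (w @ x + b)) / (w @ w)) * w

print("original prediction:     ", clf.predict([x])[0])    # 0
print("counterfactual prediction:", clf.predict([cf])[0])  # 1
print("minimal change:", cf - x)
```

The vector `cf - x` is the counterfactual explanation: the smallest change to the instance's features that would have produced the other outcome.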
arXiv Detail & Related papers (2020-10-20T20:08:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.