Learning Classifier Systems for Self-Explaining Socio-Technical-Systems
- URL: http://arxiv.org/abs/2207.02300v1
- Date: Fri, 1 Jul 2022 08:13:34 GMT
- Title: Learning Classifier Systems for Self-Explaining Socio-Technical-Systems
- Authors: Michael Heider, Helena Stegherr, Richard Nordsieck, Jörg Hähner
- Abstract summary: In socio-technical settings, operators are increasingly assisted by decision support systems.
To be accepted by and engage efficiently with operators, decision support systems need to be able to provide explanations regarding the reasoning behind specific decisions.
We present a template of seven questions to assess application-specific explainability needs and demonstrate their usage in an interview-based case study for a manufacturing scenario.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In socio-technical settings, operators are increasingly assisted by decision support systems. These are expected to further improve important properties of socio-technical systems such as self-adaptation and self-optimization. To be accepted by and engage efficiently with operators, decision support systems need to be able to provide explanations regarding the reasoning behind specific decisions. In this paper, we propose the use of Learning Classifier Systems (LCS), a family of rule-based machine learning methods, to facilitate transparent decision making, and we highlight some techniques to improve transparency further. We then present a template of seven questions to assess application-specific explainability needs and demonstrate their use in an interview-based case study for a manufacturing scenario. We find that the answers we received yielded useful insights for designing a well-fitted LCS model, as well as requirements for getting stakeholders to actively engage with an intelligent agent.
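To make the transparency argument concrete, the following is a minimal, hypothetical sketch of the kind of explainability a rule-based learner affords: each classifier is a plain IF-THEN rule, and a prediction can be explained by listing the rules that matched the input. The interval conditions and fitness-weighted mixing follow common LCS variants such as XCSF, and all names (`Classifier`, `predict_with_explanation`) and rule values are invented for illustration; rule discovery and fitness updates of a full LCS are omitted, so this is not the paper's implementation.

```python
# Minimal sketch of how a Learning Classifier System (LCS) can expose its
# reasoning: each classifier is a human-readable IF-THEN rule, and a
# prediction is explained by listing the rules that matched the input.
# Illustration only -- rule discovery (GA) and fitness updates are omitted;
# interval conditions and fitness-weighted mixing follow common LCS
# variants (e.g. XCSF), not the system described in the paper.

from dataclasses import dataclass


@dataclass
class Classifier:
    """One IF-THEN rule with per-dimension interval conditions."""
    lowers: list[float]   # lower bounds, one per input dimension
    uppers: list[float]   # upper bounds, one per input dimension
    prediction: float     # local model: here simply a constant output
    fitness: float        # weight of this rule in the mixed prediction

    def matches(self, x: list[float]) -> bool:
        # The rule applies iff every input lies inside its interval.
        return all(l <= xi <= u for l, xi, u in zip(self.lowers, x, self.uppers))

    def __str__(self) -> str:
        cond = " AND ".join(
            f"{l:.1f} <= x{i} <= {u:.1f}"
            for i, (l, u) in enumerate(zip(self.lowers, self.uppers))
        )
        return f"IF {cond} THEN predict {self.prediction:.2f} (fitness {self.fitness:.2f})"


def predict_with_explanation(population: list[Classifier], x: list[float]):
    """Fitness-weighted prediction plus the matching rules as explanation."""
    match_set = [cl for cl in population if cl.matches(x)]
    if not match_set:
        return None, []
    total = sum(cl.fitness for cl in match_set)
    prediction = sum(cl.fitness * cl.prediction for cl in match_set) / total
    return prediction, match_set


if __name__ == "__main__":
    # Hypothetical rules over a two-dimensional manufacturing parameter space.
    population = [
        Classifier([0.0, 0.0], [0.5, 1.0], prediction=0.2, fitness=0.9),
        Classifier([0.4, 0.2], [1.0, 0.8], prediction=0.7, fitness=0.6),
    ]
    y, matched = predict_with_explanation(population, [0.45, 0.5])
    print(f"prediction: {y:.2f}, because:")
    for cl in matched:
        print(" ", cl)
```

Because the match set is an explicit list of readable rules rather than an opaque weight vector, an operator can inspect exactly which conditions fired for a given decision, which is the property the paper's explainability questions are meant to elicit requirements for.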
Related papers
- K-ESConv: Knowledge Injection for Emotional Support Dialogue Systems via Prompt Learning (2023-12-16)
  We propose K-ESConv, a novel prompt-learning-based knowledge injection method for emotional support dialogue systems. We evaluate our model on the emotional support dataset ESConv, where the model retrieves and incorporates knowledge from an external professional emotional Q&A forum.
- Personalized Decision Supports based on Theory of Mind Modeling and Explainable Reinforcement Learning (2023-12-13)
  We propose a novel personalized decision support system that combines Theory of Mind (ToM) modeling and explainable Reinforcement Learning (XRL). Our proposed system generates accurate and personalized interventions that are easily interpretable by end-users.
- Recommender Systems in the Era of Large Language Models (LLMs) (2023-07-05)
  Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI). We conduct a comprehensive review of LLM-empowered recommender systems from various aspects, including pre-training, fine-tuning, and prompting.
- Assisting Human Decisions in Document Matching (2023-02-16)
  We devise a proxy matching task that allows us to evaluate which kinds of assistive information improve decision makers' performance. We find that providing black-box model explanations reduces users' accuracy on the matching task. On the other hand, custom methods designed to closely attend to task-specific desiderata are found to be effective in improving user performance.
- An Interactive Explanatory AI System for Industrial Quality Control (2022-03-17)
  We aim to extend the defect detection task towards an interactive human-in-the-loop approach. We propose an interactive support system for classifications in an industrial quality control setting.
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies (2022-03-14)
  We show how to develop interpretable representations of how agents make decisions. By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of this online learning problem. We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them. Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
- Explainable Predictive Process Monitoring: A User Evaluation (2022-02-15)
  Explainability is motivated by the lack of transparency of black-box machine learning approaches. We carry out a user evaluation of explanation approaches for Predictive Process Monitoring.
- How to choose an Explainability Method? Towards a Methodical Implementation of XAI in Practice (2021-07-09)
  We argue there is a need for a methodology to bridge the gap between stakeholder needs and explanation methods. We present our ongoing work on creating this methodology to help data scientists in the process of providing explainability to stakeholders.
- Design principles for a hybrid intelligence decision support system for business model validation (2021-05-07)
  This paper develops design principles for a Hybrid Intelligence decision support system (HI-DSS). We follow a design science research approach to design a prototype artifact and a set of design principles. Our study provides prescriptive knowledge for HI-DSS and contributes to previous work on decision support for business models, the application of the complementary strengths of humans and machines for making decisions, and support systems for extremely uncertain decision-making problems.
- Decision Rule Elicitation for Domain Adaptation (2021-02-23)
  Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts. In this work, we allow experts to additionally produce decision rules describing their decision-making. We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate experts' knowledge to the AI model.
- A systematic review and taxonomy of explanations in decision support and recommender systems (2020-06-15)
  We systematically review the literature on explanations in advice-giving systems. We derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities.