A Concept and Argumentation based Interpretable Model in High Risk
Domains
- URL: http://arxiv.org/abs/2208.08149v1
- Date: Wed, 17 Aug 2022 08:29:02 GMT
- Title: A Concept and Argumentation based Interpretable Model in High Risk
Domains
- Authors: Haixiao Chi, Dawei Wang, Gaojie Cui, Feng Mao, Beishui Liao
- Abstract summary: Interpretability has become an essential topic for artificial intelligence in high-risk domains such as healthcare, banking, and security.
We propose a concept and argumentation based model (CAM) that includes a novel concept mining method to obtain human-understandable concepts.
CAM provides decisions based on human-level knowledge, and its reasoning process is intrinsically interpretable.
- Score: 9.209499864585688
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Interpretability has become an essential topic for artificial
intelligence in high-risk domains such as healthcare, banking, and security. For
commonly used tabular data, traditional methods trained end-to-end machine
learning models on numerical and categorical data only and did not leverage
human-understandable knowledge such as data descriptions. Yet mining human-level
knowledge from tabular data and using it for prediction remains a challenge.
Therefore, we propose a concept and argumentation based model (CAM) with two
components: a novel concept mining method that obtains human-understandable
concepts and their relations from both feature descriptions and the underlying
data, and a quantitative argumentation-based method for knowledge representation
and reasoning. As a result, CAM makes decisions based on human-level knowledge,
and its reasoning process is intrinsically interpretable. Finally, to visualize
the proposed interpretable model, we provide a dialogical explanation that
contains the dominant reasoning path within CAM. Experimental results on both an
open-source benchmark dataset and a real-world business dataset show that (1)
CAM is transparent and interpretable, and the knowledge inside CAM is coherent
with human understanding; (2) our interpretable approach achieves results
competitive with other state-of-the-art models.
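
As an illustration of CAM's second component, the quantitative argumentation-based reasoning, below is a minimal Python sketch of a quantitative bipolar argumentation graph over mined concepts. It is not the authors' implementation: the argument names, base scores, and the DF-QuAD-style aggregation are assumptions made purely for this example.

from dataclasses import dataclass, field
from typing import List

# Illustrative only: a tiny acyclic quantitative bipolar argumentation graph.
# The exact semantics used by CAM is not given in the abstract; the aggregation
# below follows a common DF-QuAD-style recipe and is an assumption.

@dataclass
class Argument:
    name: str
    base_score: float                                  # prior plausibility in [0, 1]
    attackers: List["Argument"] = field(default_factory=list)
    supporters: List["Argument"] = field(default_factory=list)

def aggregate(scores: List[float]) -> float:
    # Combine several attack (or support) strengths into a single value in [0, 1].
    out = 1.0
    for s in scores:
        out *= 1.0 - s
    return 1.0 - out

def strength(arg: Argument) -> float:
    # Recursively evaluate an argument's final strength (acyclic graphs only).
    va = aggregate([strength(a) for a in arg.attackers])
    vs = aggregate([strength(s) for s in arg.supporters])
    if vs >= va:
        return arg.base_score + (1.0 - arg.base_score) * (vs - va)
    return arg.base_score * (1.0 - (va - vs))

# Hypothetical concepts mined from a credit-risk table and its feature descriptions.
stable_income = Argument("stable income", base_score=0.8)
high_debt_ratio = Argument("high debt ratio", base_score=0.7)
approve_loan = Argument("approve loan", base_score=0.5,
                        attackers=[high_debt_ratio], supporters=[stable_income])

print(f"final strength of 'approve loan': {strength(approve_loan):.3f}")

A decision can then be read off by thresholding the strength of the decision argument, and the strongest chain of supporters and attackers can be reported as a dialogical explanation, in the spirit of the dominant reasoning path described in the abstract.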
Related papers
- Concept Induction using LLMs: a user experiment for assessment [1.1982127665424676]
This study explores the potential of a Large Language Model (LLM) to generate high-level concepts that are meaningful as explanations for humans.
We compare the concepts generated by the LLM with two other methods: concepts generated by humans and the ECII concept induction system.
Our findings indicate that while human-generated explanations remain superior, concepts derived from GPT-4 are more comprehensible to humans than those generated by ECII.
arXiv Detail & Related papers (2024-04-18T03:22:02Z) - Learning Interpretable Concepts: Unifying Causal Representation Learning
and Foundation Models [51.43538150982291]
We study how to learn human-interpretable concepts from data.
Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
arXiv Detail & Related papers (2024-02-14T15:23:59Z) - Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans (a minimal illustrative sketch of the concept-bottleneck idea appears after this list).
arXiv Detail & Related papers (2023-11-08T20:41:18Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - A Study of Situational Reasoning for Traffic Understanding [63.45021731775964]
We devise three novel text-based tasks for situational reasoning in the traffic domain.
We adopt four knowledge-enhanced methods that have shown generalization capability across language reasoning tasks in prior work.
We provide in-depth analyses of model performance on data partitions and examine model predictions categorically.
arXiv Detail & Related papers (2023-06-05T01:01:12Z) - Concept-Based Explanations for Tabular Data [0.0]
We propose a concept-based explainability method for Deep Neural Networks (DNNs).
We show the validity of our method in generating interpretability results that match human-level intuitions.
We also propose a notion of fairness based on TCAV that quantifies which layer of the DNN has learned representations that lead to biased predictions.
arXiv Detail & Related papers (2022-09-13T02:19:29Z) - Algebraic Learning: Towards Interpretable Information Modeling [0.0]
This thesis addresses the issue of interpretability in general information modeling and endeavors to ease the problem from two scopes.
Firstly, a problem-oriented perspective is applied to incorporate knowledge into modeling practice, where interesting mathematical properties emerge naturally.
Secondly, given a trained model, various methods could be applied to extract further insights about the underlying system.
arXiv Detail & Related papers (2022-03-13T15:53:39Z) - A Comparative Approach to Explainable Artificial Intelligence Methods in
Application to High-Dimensional Electronic Health Records: Examining the
Usability of XAI [0.0]
XAI aims to produce a demonstrative factor of trust, which for human subjects is achieved through communicative means.
The ideology behind trusting a machine to tend towards the livelihood of a human poses an ethical conundrum.
XAI methods produce visualizations of feature contributions towards a given model's output at both a local and a global level.
arXiv Detail & Related papers (2021-03-08T18:15:52Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
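
Referring back to the concept-bottleneck entry above (Interpreting Pretrained Language Models via Concept Bottlenecks), the following is a minimal, generic sketch of the concept-bottleneck idea in Python with scikit-learn. It is not the cited paper's PLM pipeline: the toy tabular data, the two hypothetical concepts, and the two-stage logistic-regression setup are assumptions made purely for illustration.

# Illustrative only: a generic concept-bottleneck model on toy tabular data.
# Inputs are first mapped to human-named concepts, and the label is predicted
# from those concepts alone, so predictions are explainable via the concepts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 200 samples, 5 raw features, 2 hypothetical concept annotations.
X = rng.normal(size=(200, 5))
concepts = (X[:, :2] > 0).astype(int)          # e.g. "stable income", "low debt"
y = (concepts.sum(axis=1) >= 1).astype(int)    # label depends only on the concepts

# Stage 1: predict each concept from the raw features.
concept_models = [LogisticRegression().fit(X, concepts[:, j]) for j in range(2)]
C_hat = np.column_stack([m.predict(X) for m in concept_models])

# Stage 2: predict the label from the predicted concepts (the "bottleneck").
label_model = LogisticRegression().fit(C_hat, y)

print("label accuracy:", label_model.score(C_hat, y))
print("concept weights in the label model:", label_model.coef_)

Because the label is predicted from the concept layer alone, each prediction can be explained in terms of the named concepts and their learned weights, which is the property that makes the bottleneck interpretable.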