Discovering the Rationale of Decisions: Experiments on Aligning Learning and Reasoning
- URL: http://arxiv.org/abs/2105.06758v1
- Date: Fri, 14 May 2021 10:37:03 GMT
- Title: Discovering the Rationale of Decisions: Experiments on Aligning Learning and Reasoning
- Authors: Cor Steging, Silja Renooij, Bart Verheij
- Abstract summary: We introduce a knowledge-driven method for model-agnostic rationale evaluation using dedicated test cases.
We show that our method allows us to analyze the rationale of black-box machine learning systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In AI and law, systems that are designed for decision support should be
explainable when pursuing justice. In order for these systems to be fair and
responsible, they should make correct decisions and make them using a sound and
transparent rationale. In this paper, we introduce a knowledge-driven method
for model-agnostic rationale evaluation using dedicated test cases, similar to
unit-testing in professional software development. We apply this new method in
a set of machine learning experiments aimed at extracting known knowledge
structures from artificial datasets from fictional and non-fictional legal
settings. We show that our method allows us to analyze the rationale of
black-box machine learning systems by assessing which rationale elements are
learned or not. Furthermore, we show that the rationale can be adjusted using
tailor-made training data based on the results of the rationale evaluation.
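The unit-testing analogy can be made concrete with a minimal sketch. Everything below — the eligibility domain, the rationale elements, and the `black_box` stand-in — is a hypothetical illustration of the idea, not the authors' actual experimental setup:

```python
# Sketch: rationale evaluation via dedicated test cases, in the spirit of
# unit-testing. The known rationale here is "eligible iff age >= 65 and
# resident"; each test case probes one element of that rationale.

def black_box(case):
    # Stand-in for a trained black-box classifier. This one has learned
    # the age element of the rationale but ignores residency.
    return case["age"] >= 65

def rationale_test_cases():
    # Dedicated test cases: (input, expected output, rationale element probed).
    return [
        ({"age": 70, "resident": True},  True,  "age sufficient, resident"),
        ({"age": 60, "resident": True},  False, "age element"),
        ({"age": 70, "resident": False}, False, "residency element"),
    ]

def evaluate_rationale(model):
    # Report which rationale elements the model has (or has not) learned.
    return {element: model(case) == expected
            for case, expected, element in rationale_test_cases()}

report = evaluate_rationale(black_box)
# The residency element is flagged as not learned, which could then guide
# tailor-made training data targeting exactly that element.
```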
Related papers
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic approach to explanation, aimed at better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Modelling Assessment Rubrics through Bayesian Networks: a Pragmatic Approach [59.77710485234197]
This paper presents an approach to deriving a learner model directly from an assessment rubric.
We illustrate how the approach can be applied to automate the human assessment of an activity developed for testing computational thinking skills.
arXiv Detail & Related papers (2022-09-07T10:09:12Z)
- Explainable Predictive Process Monitoring: A User Evaluation [62.41400549499849]
Explainability is motivated by the lack of transparency of black-box Machine Learning approaches.
We carry out a user evaluation of explanation approaches for Predictive Process Monitoring.
arXiv Detail & Related papers (2022-02-15T22:24:21Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in deep learning-based medical image analysis systems.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate expert knowledge to the AI model.
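A rough sketch of what eliciting decision rules alongside labels might look like — the rule format, the feature names, and the pseudo-labelling scheme below are assumptions for illustration, not the paper's actual formalism:

```python
# Sketch: experts supply explicit decision rules in addition to labels.
# Rules are represented (hypothetically) as predicates over feature dicts,
# and used to propagate expert knowledge to unlabelled target-domain cases.

from dataclasses import dataclass
from typing import Callable, Dict, Optional

@dataclass
class ExpertRule:
    description: str
    condition: Callable[[Dict], bool]
    label: int

rules = [
    ExpertRule("high risk if debt ratio > 0.6", lambda x: x["debt_ratio"] > 0.6, 1),
    ExpertRule("low risk if savings present",   lambda x: x["savings"] > 0,      0),
]

def pseudo_label(case: Dict, rules) -> Optional[int]:
    # The first matching rule labels the case; unmatched cases are left
    # for the learner (None).
    for rule in rules:
        if rule.condition(case):
            return rule.label
    return None

target_domain = [{"debt_ratio": 0.8, "savings": 0},
                 {"debt_ratio": 0.2, "savings": 5}]
labels = [pseudo_label(c, rules) for c in target_domain]  # [1, 0]
```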
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- Detecting discriminatory risk through data annotation based on Bayesian inferences [5.017973966200985]
We propose a data annotation method that aims to warn about the risk of discriminatory results for a given dataset.
We empirically test our system on three datasets commonly used by the machine learning community.
arXiv Detail & Related papers (2021-01-27T12:43:42Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
- How fair can we go in machine learning? Assessing the boundaries of fairness in decision trees [0.12891210250935145]
We present the first methodology that allows us to explore the statistical limits of bias mitigation interventions.
We focus our study on decision tree classifiers since they are widely accepted in machine learning.
We conclude experimentally that our method can optimize decision tree models, making them fairer at a small cost in classification error.
arXiv Detail & Related papers (2020-06-22T16:28:26Z)
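One common way to quantify the kind of fairness/accuracy trade-off studied above is demographic parity, sketched here for a toy decision tree; the tree, the data, and the group attribute are hypothetical illustrations, not taken from any of the papers listed:

```python
# Sketch: measuring the demographic parity gap of a (hand-coded) decision
# tree, i.e. the difference in positive-prediction rates between groups.

def tree_predict(x):
    # A small decision tree over (score, group). The group-dependent
    # branch is what creates the disparity measured below.
    if x["score"] > 0.5:
        return 1
    return 1 if x["group"] == "A" and x["score"] > 0.3 else 0

data = [
    {"score": 0.6, "group": "A"}, {"score": 0.4, "group": "A"},
    {"score": 0.6, "group": "B"}, {"score": 0.4, "group": "B"},
]

def positive_rate(group):
    cases = [x for x in data if x["group"] == group]
    return sum(tree_predict(x) for x in cases) / len(cases)

# A bias mitigation intervention would aim to shrink this gap, possibly
# at some cost in classification error.
parity_gap = abs(positive_rate("A") - positive_rate("B"))  # 0.5
```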
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.