Human-centred explanation of rule-based decision-making systems in the
legal domain
- URL: http://arxiv.org/abs/2310.16704v1
- Date: Wed, 25 Oct 2023 15:20:05 GMT
- Title: Human-centred explanation of rule-based decision-making systems in the
legal domain
- Authors: Suzan Zuurmond, AnneMarie Borg, Matthijs van Kempen and Remi Wieten
- Abstract summary: We propose a human-centred explanation method for rule-based automated decision-making systems in the legal domain.
Firstly, we establish a conceptual framework for developing explanation methods.
Secondly, we propose an explanation method that uses a graph database to enable question-driven explanations.
- Score: 0.3686808512438362
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a human-centred explanation method for rule-based automated
decision-making systems in the legal domain. Firstly, we establish a conceptual
framework for developing explanation methods, representing its key internal
components (content, communication and adaptation) and external dependencies
(decision-making system, human recipient and domain). Secondly, we propose an
explanation method that uses a graph database to enable question-driven
explanations and multimedia display. This way, we can tailor the explanation to
the user. Finally, we show how our conceptual framework is applicable to a
real-world scenario at the Dutch Tax and Customs Administration and implement
our explanation method for this scenario.
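The abstract's core idea is to store the decision-making system's rules and intermediate conclusions in a graph database, so that a user's question ("Why was this decided?") selects a support subgraph to present. A minimal sketch of that idea, using a plain in-memory adjacency map instead of a real graph database; the node names (a hypothetical commuting-cost deduction) and the `explain_why` helper are illustrative assumptions, not the authors' implementation:

```python
# Toy support graph: each conclusion node points to the rules and facts
# that support it, mimicking edges that would live in a graph database.
SUPPORT_GRAPH = {
    "decision:deduction_granted": ["rule:art_6_1", "fact:commuting_costs"],
    "rule:art_6_1": ["fact:distance_over_10km", "fact:public_transport_used"],
}

def explain_why(node, graph, depth=0):
    """Answer a 'Why?' question by recursively listing a node's support."""
    lines = ["  " * depth + node]
    for parent in graph.get(node, []):
        lines.extend(explain_why(parent, graph, depth + 1))
    return lines

print("\n".join(explain_why("decision:deduction_granted", SUPPORT_GRAPH)))
# decision:deduction_granted
#   rule:art_6_1
#     fact:distance_over_10km
#     fact:public_transport_used
#   fact:commuting_costs
```

A real graph database would let the same traversal be expressed as a query, and different user questions ("Why?", "Why not?", "What if?") would select different subgraphs.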
Related papers
- Explainability and justification of automatic decision-making: A conceptual framework and a practical application [0.0]
The article argues that a crucial condition for the acceptability of algorithmic decision-making systems is that decisions must be justified in the eyes of their recipients.
We make a clear distinction between explanation and justification.
We propose a conceptual framework of explanations and justifications, based on Habermas's theory of communicative action and Perelman's New Rhetoric theory of law.
arXiv Detail & Related papers (2026-03-02T17:00:12Z)
- Exploring Plan Space through Conversation: An Agentic Framework for LLM-Mediated Explanations in Planning [10.679298682391817]
We present a multi-agent Large Language Model architecture that is agnostic to the explanation framework and enables user- and context-dependent interactive explanations.
We also describe an instantiation of this framework for goal-conflict explanations, which we use to conduct a user study comparing the LLM-powered interaction with a baseline template-based explanation interface.
arXiv Detail & Related papers (2026-03-02T16:58:18Z)
- Explainability-Driven Quality Assessment for Rule-Based Systems [0.7303392100830282]
This paper introduces an explanation framework designed to enhance the quality of rules in knowledge-based reasoning systems.
It generates explanations of rule inferences and leverages human interpretation to refine rules.
Its practicality is demonstrated through a use case in finance.
arXiv Detail & Related papers (2025-02-03T11:26:09Z)
- A Note on an Inferentialist Approach to Resource Semantics [48.65926948745294]
'Inferentialism' is the view that meaning is given in terms of inferential behaviour.
This paper shows how 'inferentialism' enables a versatile and expressive framework for resource semantics.
arXiv Detail & Related papers (2024-05-10T14:13:21Z)
- Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performances.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention.
arXiv Detail & Related papers (2024-05-02T17:59:01Z)
- Combining Embedding-Based and Semantic-Based Models for Post-hoc Explanations in Recommender Systems [0.0]
This paper presents an approach that combines embedding-based and semantic-based models to generate post-hoc explanations in recommender systems.
The framework we defined aims at producing meaningful and easy-to-understand explanations, enhancing user trust and satisfaction, and potentially promoting the adoption of recommender systems across the e-commerce sector.
arXiv Detail & Related papers (2024-01-09T10:24:46Z)
- A Taxonomy of Decentralized Identifier Methods for Practitioners [50.76687001060655]
A core part of the new identity management paradigm of Self-Sovereign Identity (SSI) is the W3C Decentralized Identifiers (DIDs) standard.
We propose a taxonomy of DID methods with the goal to empower practitioners to make informed decisions when selecting DID methods.
arXiv Detail & Related papers (2023-10-18T13:01:40Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- A Methodology and Software Architecture to Support Explainability-by-Design [0.0]
This paper describes Explainability-by-Design, a holistic methodology characterised by proactive measures to include explanation capability in the design of decision-making systems.
The methodology consists of three phases: (A) Explanation Requirement Analysis, (B) Explanation Technical Design, and (C) Explanation Validation.
It was shown that the approach is tractable in terms of development time, which can be as low as two hours per sentence.
arXiv Detail & Related papers (2022-06-13T15:34:29Z)
- Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems [1.7403133838762448]
This paper introduces reviewability as a framework for improving the accountability of automated and algorithmic decision-making (ADM).
We draw on an understanding of ADM as a socio-technical process involving both human and technical elements, beginning before a decision is made and extending beyond the decision itself.
We argue that a reviewability framework, drawing on administrative law's approach to reviewing human decision-making, offers a practical way forward towards a more holistic and legally relevant form of accountability for ADM.
arXiv Detail & Related papers (2021-01-26T18:15:34Z)
- Sequential Explanations with Mental Model-Based Policies [20.64968620536829]
We apply a reinforcement learning framework to provide explanations based on the explainee's mental model.
We conduct novel online human experiments where explanations are selected and presented to participants.
Our results suggest that mental model-based policies may increase interpretability over multiple sequential explanations.
arXiv Detail & Related papers (2020-07-17T14:43:46Z)
- Explainable Deep Classification Models for Domain Generalization [94.43131722655617]
Explanations are defined as regions of visual evidence upon which a deep classification network makes a decision.
Our training strategy enforces a periodic saliency-based feedback to encourage the model to focus on the image regions that directly correspond to the ground-truth object.
arXiv Detail & Related papers (2020-03-13T22:22:15Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Algorithmic Recourse: from Counterfactual Explanations to Interventions [16.9979815165902]
We argue that counterfactual explanations inform an individual where they need to get to, but not how to get there.
Instead, we propose a shift of paradigm from recourse via nearest counterfactual explanations to recourse through minimal interventions.
arXiv Detail & Related papers (2020-02-14T22:49:42Z)
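The shift this last entry describes, from nearest counterfactuals ("where to get to") to minimal interventions ("how to get there"), can be illustrated with a toy linear model in which only some features are actionable. The model, its weights, and the `minimal_intervention` helper below are illustrative assumptions, not the paper's method:

```python
# Toy linear credit model: score >= 0 means "approved".
# Features: (income, debt, age); weights are illustrative.
WEIGHTS = (0.5, -0.8, 0.1)
BIAS = -2.0
ACTIONABLE = {0, 1}  # the applicant can change income and debt, not age

def score(x):
    return sum(w * v for w, v in zip(WEIGHTS, x)) + BIAS

def minimal_intervention(x, step=0.1, max_steps=1000):
    """Greedily nudge only actionable features until the decision flips.

    A nearest counterfactual might change any feature (even age); recourse
    restricts the search to interventions the individual can actually perform.
    """
    x = list(x)
    for _ in range(max_steps):
        if score(x) >= 0:
            return x
        # Nudge the actionable feature with the largest weight magnitude.
        i = max(ACTIONABLE, key=lambda j: abs(WEIGHTS[j]))
        x[i] += step if WEIGHTS[i] > 0 else -step
    return None  # no feasible recourse found within the step budget

applicant = (2.0, 3.0, 30.0)  # currently rejected: score(applicant) < 0
print(minimal_intervention(applicant))
```

The non-actionable feature (age) is left untouched, which is precisely what distinguishes the recourse returned here from an unconstrained nearest counterfactual.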
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.