Explainability and justification of automatic-decision making: A conceptual framework and a practical application
- URL: http://arxiv.org/abs/2603.02073v1
- Date: Mon, 02 Mar 2026 17:00:12 GMT
- Title: Explainability and justification of automatic-decision making: A conceptual framework and a practical application
- Authors: Sarra Tajouri, Yves Meinard, Alexis Tsoukiàs, Thierry Kirat
- Abstract summary: The article argues that a crucial condition for the acceptability of algorithmic decision-making systems is that decisions must be justified in the eyes of their recipients. We make a clear distinction between explanation and justification. We propose a conceptual framework of explanations and justifications, based on Habermas's theory of communicative action and Perelman's New Rhetoric theory of law.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Explainability of algorithmic decision-making systems is both a regulatory objective and an area of intense research. The article argues that a crucial condition for the acceptability of algorithmic decision-making systems is that decisions must be justified in the eyes of their recipients. We make a clear distinction between explanation and justification. Explanations describe how a decision was made, while justifications give reasons that aim to make the decision acceptable. We propose a conceptual framework of explanations and justifications, based on Habermas's theory of communicative action and Perelman's New Rhetoric theory of law. This framework helps to analyze how different forms of explanation can support or fail to support justification. We illustrate our approach with a case study on university admissions in France.
Related papers
- Explaining Non-monotonic Normative Reasoning using Argumentation Theory with Deontic Logic [7.162465547358201]
This paper explores how to provide designers with effective explanations for their legally relevant design decisions.
We extend the previous system for providing explanations by specifying norms and the key legal or ethical principles for justifying actions in normative contexts.
Considering that first-order logic has strong expressive power, in the current paper we adopt a first-order deontic logic system with deontic operators and preferences.
arXiv Detail & Related papers (2024-09-18T08:03:29Z) - Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z) - Human-centred explanation of rule-based decision-making systems in the legal domain [0.3686808512438362]
We propose a human-centred explanation method for rule-based automated decision-making systems in the legal domain.
Firstly, we establish a conceptual framework for developing explanation methods.
Secondly, we propose an explanation method that uses a graph database to enable question-driven explanations.
arXiv Detail & Related papers (2023-10-25T15:20:05Z) - A Unifying Framework for Learning Argumentation Semantics [47.84663434179473]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, thus opening up new future research directions in the area of formal argumentation and human-machine dialogues.
arXiv Detail & Related papers (2023-10-18T20:18:05Z) - Disentangling Reasoning Capabilities from Language Models with Compositional Reasoning Transformers [72.04044221898059]
ReasonFormer is a unified reasoning framework for mirroring the modular and compositional reasoning process of humans.
The representation module (automatic thinking) and reasoning modules (controlled thinking) are disentangled to capture different levels of cognition.
The unified reasoning framework solves multiple tasks with a single model, and is trained and inferred in an end-to-end manner.
arXiv Detail & Related papers (2022-10-20T13:39:55Z) - What is Legitimate Decision Support? [0.0]
Two concepts have structured the literature devoted to analysing this aspect of decision support: validity and legitimacy.
Despite its importance, this concept has not received the attention it deserves in the literature in decision support.
We propose a general theory of legitimacy, adapted to decision support contexts.
arXiv Detail & Related papers (2022-01-28T12:20:18Z) - Explainable Decision Making with Lean and Argumentative Explanations [11.644036228274176]
We consider two variants of decision making, where "good" decisions amount to alternatives (i) meeting "most" goals, and (ii) meeting "most preferred" goals.
We then define, for each variant and notion of "goodness," explanations in two formats, for justifying the selection of an alternative to audiences with differing needs and competences.
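The first variant above can be made concrete with a small sketch. This is not the paper's formalism; the goals, alternatives, and the "lean explanation" format are illustrative assumptions: a "good" decision is the alternative meeting the most goals, and its explanation lists the goals it meets and misses.

```python
# Toy goal set and alternatives (hypothetical names, for illustration only).
goals = {"low_cost", "fast", "reliable"}
alternatives = {
    "A": {"low_cost", "fast"},
    "B": {"reliable"},
    "C": {"low_cost", "fast", "reliable"},
}

def best_alternative(alts, goals):
    """Variant (i): pick the alternative meeting the most goals."""
    return max(alts, key=lambda a: len(alts[a] & goals))

def lean_explanation(alt, alts, goals):
    """A minimal, audience-facing justification of the selection."""
    met, missed = alts[alt] & goals, goals - alts[alt]
    return f"{alt} selected: meets {sorted(met)}, misses {sorted(missed)}"

choice = best_alternative(alternatives, goals)
print(lean_explanation(choice, alternatives, goals))
```

Variant (ii) would replace the set-cardinality criterion with a preference ordering over goals; the explanation format stays the same.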
arXiv Detail & Related papers (2022-01-18T01:29:02Z) - Thinking About Causation: A Causal Language with Epistemic Operators [58.720142291102135]
We extend the notion of a causal model with a representation of the state of an agent.
On the side of the object language, we add operators to express knowledge and the act of observing new information.
We provide a sound and complete axiomatization of the logic, and discuss the relation of this framework to causal team semantics.
arXiv Detail & Related papers (2020-10-30T12:16:45Z) - Towards Interpretable Reasoning over Paragraph Effects in Situation [126.65672196760345]
We focus on the task of reasoning over paragraph effects in situation, which requires a model to understand the cause and effect.
We propose a sequential approach for this task which explicitly models each step of the reasoning process with neural network modules.
In particular, five reasoning modules are designed and learned in an end-to-end manner, which leads to a more interpretable model.
arXiv Detail & Related papers (2020-10-03T04:03:52Z) - Aligning Faithful Interpretations with their Social Attribution [58.13152510843004]
We find that the requirement of model interpretations to be faithful is vague and incomplete.
We identify the problem as a misalignment between the causal chain of decisions (causal attribution) and the attribution of human behavior to the interpretation (social attribution).
arXiv Detail & Related papers (2020-06-01T16:45:38Z) - Algorithmic Recourse: from Counterfactual Explanations to Interventions [16.9979815165902]
We argue that counterfactual explanations inform an individual where they need to get to, but not how to get there.
Instead, we propose a shift of paradigm from recourse via nearest counterfactual explanations to recourse through minimal interventions.
arXiv Detail & Related papers (2020-02-14T22:49:42Z)
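The counterfactual-versus-intervention distinction can be sketched in a few lines. The classifier, feature names, and numbers below are toy assumptions, not from the paper: a nearest counterfactual tells an individual where they need to get to in feature space, while recourse proposes a minimal intervention on an actionable feature that gets them there.

```python
def approved(x):
    # Toy linear classifier over (income, debt): approve if score > 10.
    return 2.0 * x["income"] - 1.0 * x["debt"] > 10.0

applicant = {"income": 4.0, "debt": 2.0}       # score = 6.0 -> denied

# Nearest counterfactual: a target point where the decision flips.
counterfactual = {"income": 6.5, "debt": 2.0}  # score = 11.0 -> approved

def intervene(x, feature, delta):
    """Recourse: a minimal intervention on one actionable feature."""
    y = dict(x)
    y[feature] += delta
    return y

recourse = intervene(applicant, "income", 2.5)  # act on income only
print(approved(applicant), approved(counterfactual), approved(recourse))
```

Here the two coincide numerically, but the framing differs: the counterfactual is a state description, whereas the intervention names the action (raise income by 2.5) under a causal model of which features are actionable.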
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.