Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives
- URL: http://arxiv.org/abs/2510.11079v1
- Date: Mon, 13 Oct 2025 07:19:15 GMT
- Title: Argumentation-Based Explainability for Legal AI: Comparative and Regulatory Perspectives
- Authors: Andrada Iulia Prajescu, Roberto Confalonieri
- Abstract summary: Artificial Intelligence (AI) systems are increasingly deployed in legal contexts. The so-called ``black box problem'' undermines the legitimacy of automated decision-making. XAI has proposed a variety of methods to enhance transparency.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) systems are increasingly deployed in legal contexts, where their opacity raises significant challenges for fairness, accountability, and trust. The so-called ``black box problem'' undermines the legitimacy of automated decision-making, as affected individuals often lack access to meaningful explanations. In response, the field of Explainable AI (XAI) has proposed a variety of methods to enhance transparency, ranging from example-based and rule-based techniques to hybrid and argumentation-based approaches. This paper promotes computational models of arguments and their role in providing legally relevant explanations, with particular attention to their alignment with emerging regulatory frameworks such as the EU General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AIA). We analyze the strengths and limitations of different explanation strategies, evaluate their applicability to legal reasoning, and highlight how argumentation frameworks -- by capturing the defeasible, contestable, and value-sensitive nature of law -- offer a particularly robust foundation for explainable legal AI. Finally, we identify open challenges and research directions, including bias mitigation, empirical validation in judicial settings, and compliance with evolving ethical and legal standards, arguing that computational argumentation is best positioned to meet both technical and normative requirements of transparency in the law domain.
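To make concrete what "computational models of arguments" refers to, the following is a minimal sketch (not code from the paper; the argument names are hypothetical) of a Dung-style abstract argumentation framework, computing its grounded extension by iterating the characteristic function until a fixed point. This is the kind of formalism the abstract credits with capturing the defeasible and contestable structure of legal reasoning.

```python
# Minimal sketch of a Dung-style abstract argumentation framework (AF).
# Illustrative only -- not code from the paper; argument names are hypothetical.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension of AF = (arguments, attacks) by
    iterating the characteristic function F(S) = {a | S defends a}."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}
    extension = set()
    while True:
        # An argument is acceptable w.r.t. `extension` if every one of its
        # attackers is itself attacked by some member of `extension`.
        acceptable = {
            a for a in arguments
            if all(any((d, b) in attacks for d in extension)
                   for b in attackers[a])
        }
        if acceptable == extension:  # fixed point reached
            return extension
        extension = acceptable

# Example: a defeasible rule (A) is attacked by an exception (B),
# which is in turn rebutted by stronger evidence (C).
args = {"A", "B", "C"}
atts = {("B", "A"), ("C", "B")}
print(sorted(grounded_extension(args, atts)))  # ['A', 'C']
```

Here C defends A by defeating its only attacker B, so the grounded (most skeptical) extension accepts A and C while rejecting B; a legal explanation can then cite exactly this attack-and-defence chain as the justification.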
Related papers
- LegalOne: A Family of Foundation Models for Reliable Legal Reasoning [54.57434222018289]
We present LegalOne, a family of foundational models specifically tailored for the Chinese legal domain. LegalOne is developed through a comprehensive three-phase pipeline designed to master legal reasoning. We publicly release the LegalOne weights and the LegalKit evaluation framework to advance the field of Legal AI.
arXiv Detail & Related papers (2026-01-31T10:18:32Z) - LRAS: Advanced Legal Reasoning with Agentic Search [48.281150948187786]
Legal Reasoning with Agentic Search (LRAS) is a framework designed to transition legal LLMs from static and parametric "closed-loop thinking" to dynamic and interactive "Active Inquiry". By integrating Introspective Learning and Difficulty-aware Reinforcement Learning, LRAS enables LRMs to identify knowledge boundaries and handle legal reasoning. Empirical results demonstrate that LRAS outperforms state-of-the-art baselines by 8.2-32%.
arXiv Detail & Related papers (2026-01-12T08:07:35Z) - Universal Legal Article Prediction via Tight Collaboration between Supervised Classification Model and LLM [42.11889345473452]
Legal Article Prediction (LAP) is a critical task in legal text classification. We propose Uni-LAP, a universal framework for legal article prediction.
arXiv Detail & Related papers (2025-09-26T09:42:20Z) - On Verifiable Legal Reasoning: A Multi-Agent Framework with Formalized Knowledge Representations [0.0]
This paper introduces a modular multi-agent framework that decomposes legal reasoning into distinct knowledge acquisition and application stages. In the first stage, specialized agents extract legal concepts and formalize rules to create verifiable intermediate representations of statutes. The second stage applies this knowledge to specific cases through three steps: analyzing queries to map case facts onto the schema, performing symbolic inference to derive logically entailed conclusions, and generating final answers.
arXiv Detail & Related papers (2025-08-31T06:03:00Z) - Judicial Requirements for Generative AI in Legal Reasoning [0.0]
Large Language Models (LLMs) are being integrated into professional domains, yet their limitations in high-stakes fields like law remain poorly understood. This paper defines the core capabilities that an AI system must possess to function as a reliable reasoning tool in judicial decision-making.
arXiv Detail & Related papers (2025-08-26T09:56:26Z) - GLARE: Agentic Reasoning for Legal Judgment Prediction [60.13483016810707]
Legal judgment prediction (LJP) has become increasingly important in the legal field. Existing large language models (LLMs) have significant problems of insufficient reasoning due to a lack of legal knowledge. We introduce GLARE, an agentic legal reasoning framework that dynamically acquires key legal knowledge by invoking different modules.
arXiv Detail & Related papers (2025-08-22T13:38:12Z) - Foundations for Risk Assessment of AI in Protecting Fundamental Rights [0.5093073566064981]
This chapter introduces a conceptual framework for qualitative risk assessment of AI. It addresses the complexities of legal compliance and fundamental rights protection by integrating definitional balancing and defeasible reasoning.
arXiv Detail & Related papers (2025-07-24T10:52:22Z) - Regulating Ai In Financial Services: Legal Frameworks And Compliance Challenges [0.0]
This article examines the evolving landscape of artificial intelligence (AI) regulation in financial services. It highlights how AI-driven processes, from fraud detection to algorithmic trading, offer efficiency gains yet introduce significant risks. The study compares regulatory approaches across major jurisdictions such as the European Union, United States, and United Kingdom.
arXiv Detail & Related papers (2025-03-17T14:29:09Z) - A Law Reasoning Benchmark for LLM with Tree-Organized Structures including Factum Probandum, Evidence and Experiences [76.73731245899454]
We propose a transparent law reasoning schema enriched with hierarchical factum probandum, evidence, and implicit experience. Inspired by this schema, we introduce a challenging task that takes a textual case description and outputs a hierarchical structure justifying the final decision. This benchmark paves the way for transparent and accountable AI-assisted law reasoning in the ``Intelligent Court''.
arXiv Detail & Related papers (2025-03-02T10:26:54Z) - The explanation dialogues: an expert focus study to understand requirements towards explanations within the GDPR [47.06917254695738]
We present the Explanation Dialogues, an expert focus study to uncover the expectations, reasoning, and understanding of legal experts and practitioners towards XAI. The study consists of an online questionnaire and follow-up interviews, and is centered around a use-case in the credit domain. We find that the presented explanations are hard to understand and lack information, and discuss issues that can arise from the different interests of the data controller and subject.
arXiv Detail & Related papers (2025-01-09T15:50:02Z) - Legal Evalutions and Challenges of Large Language Models [42.51294752406578]
We use the OpenAI o1 model as a case study to evaluate the performance of large models in applying legal provisions.
We compare current state-of-the-art LLMs, including open-source, closed-source, and legal-specific models trained specifically for the legal domain.
arXiv Detail & Related papers (2024-11-15T12:23:12Z) - LawLLM: Law Large Language Model for the US Legal System [43.13850456765944]
We introduce the Law Large Language Model (LawLLM), a multi-task model specifically designed for the US legal domain.
LawLLM excels at Similar Case Retrieval (SCR), Precedent Case Recommendation (PCR), and Legal Judgment Prediction (LJP).
We propose customized data preprocessing techniques for each task that transform raw legal data into a trainable format.
arXiv Detail & Related papers (2024-07-27T21:51:30Z) - Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act? [0.8287206589886881]
The European Union has introduced detailed transparency requirements for AI systems.
There is a fundamental difference between XAI and the Act regarding what transparency is.
By comparing the disparate views of XAI and regulation, we arrive at four axes where practical work could bridge the transparency gap.
arXiv Detail & Related papers (2023-02-21T16:06:48Z) - On the Need and Applicability of Causality for Fairness: A Unified Framework for AI Auditing and Legal Analysis [0.0]
This article explores the significance of causal reasoning in addressing algorithmic discrimination. By reviewing landmark cases and regulatory frameworks, we illustrate the challenges inherent in proving causal claims.
arXiv Detail & Related papers (2022-07-08T10:37:22Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.