The Verification-Value Paradox: A Normative Critique of Gen AI in Legal Practice
- URL: http://arxiv.org/abs/2510.20109v1
- Date: Thu, 23 Oct 2025 01:26:37 GMT
- Title: The Verification-Value Paradox: A Normative Critique of Gen AI in Legal Practice
- Authors: Joshua Yuvaraj
- Abstract summary: It is often claimed that machine learning-based generative AI products will drastically streamline and reduce the cost of legal practice. This paper argues that a new paradigm is needed to evaluate AI use in practice. Cases in Australia and elsewhere in which lawyers have been reprimanded for submitting inaccurate AI-generated content to courts suggest this paradigm must be revisited.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: It is often claimed that machine learning-based generative AI products will drastically streamline and reduce the cost of legal practice. This enthusiasm assumes lawyers can effectively manage AI's risks. Cases in Australia and elsewhere in which lawyers have been reprimanded for submitting inaccurate AI-generated content to courts suggest this paradigm must be revisited. This paper argues that a new paradigm is needed to evaluate AI use in practice, given (a) AI's disconnection from reality and its lack of transparency, and (b) lawyers' paramount duties, such as honesty, integrity, and the duty not to mislead the court. It presents an alternative model of AI use in practice that more holistically reflects these features (the verification-value paradox). That paradox suggests increases in efficiency from AI use in legal practice will be met by a correspondingly greater imperative to manually verify any outputs of that use, rendering the net value of AI use often negligible to lawyers. The paper then sets out the paradox's implications for legal practice and legal education, including for AI use but also the values that the paradox suggests should undergird legal practice: fidelity to the truth and civic responsibility.
Related papers
- LegalOne: A Family of Foundation Models for Reliable Legal Reasoning [54.57434222018289]
We present LegalOne, a family of foundational models specifically tailored for the Chinese legal domain. LegalOne is developed through a comprehensive three-phase pipeline designed to master legal reasoning. We publicly release the LegalOne weights and the LegalKit evaluation framework to advance the field of Legal AI.
arXiv Detail & Related papers (2026-01-31T10:18:32Z) - Ethical Challenges of Using Artificial Intelligence in Judiciary [0.0]
AI has the potential to revolutionize the functioning of the judiciary and the dispensation of justice. Courts around the world have begun embracing AI technology as a means to enhance the administration of justice. However, the use of AI in the judiciary poses a range of ethical challenges.
arXiv Detail & Related papers (2025-04-27T15:51:56Z) - Artificial Intelligence in Election Campaigns: Perceptions, Penalties, and Implications [44.99833362998488]
We identify three categories of AI use -- campaign operations, voter outreach, and deception. While people generally dislike AI in campaigns, they are especially critical of deceptive uses, which they perceive as norm violations. Deceptive AI use increases public support for stricter AI regulation, including calls for an outright ban on AI development.
arXiv Detail & Related papers (2024-08-08T12:58:20Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Promises and pitfalls of artificial intelligence for legal applications [19.8511844390731]
We argue that this claim is not supported by the current evidence.
We dive into AI's increasingly prevalent roles in three types of legal tasks.
We make recommendations for better evaluation and deployment of AI in legal contexts.
arXiv Detail & Related papers (2024-01-10T19:50:37Z) - The Ethics of Automating Legal Actors [58.81546227716182]
We argue that automating the role of the judge raises difficult ethical challenges, in particular for common law legal systems.
Our argument follows from the social role of the judge in actively shaping the law, rather than merely applying it.
Even in the case the models could achieve human-level capabilities, there would still be remaining ethical concerns inherent in the automation of the legal process.
arXiv Detail & Related papers (2023-12-01T13:48:46Z) - Where's the Liability in Harmful AI Speech? [42.97651263209725]
Machine learning practitioners regularly "red team" models to identify problematic speech.
We examine three liability regimes, tying them to common examples of red-teamed model behaviors.
We argue that AI should not be categorically immune from liability in these scenarios.
arXiv Detail & Related papers (2023-08-09T00:13:00Z) - Training Is Everything: Artificial Intelligence, Copyright, and Fair Training [9.653656920225858]
Companies that use such content to train their AI engine often believe such usage should be considered "fair use". Copyright owners, as well as their supporters, consider the incorporation of copyrighted works into training sets for AI to constitute misappropriation of owners' intellectual property.
We identify both strong and spurious arguments on both sides of this debate.
arXiv Detail & Related papers (2023-05-04T04:01:00Z) - Law Informs Code: A Legal Informatics Approach to Aligning Artificial Intelligence with Humans [0.0]
Law-making and legal interpretation form a computational engine that converts opaque human values into legible directives.
"Law Informs Code" is the research agenda capturing complex computational legal processes, and embedding them in AI.
arXiv Detail & Related papers (2022-09-14T00:49:09Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - AI and Legal Argumentation: Aligning the Autonomous Levels of AI Legal Reasoning [0.0]
Legal argumentation is a vital cornerstone of justice, underpinning an adversarial form of law.
Extensive research has attempted to augment or undertake legal argumentation via the use of computer-based automation, including Artificial Intelligence (AI).
An innovative meta-approach is proposed to apply the Levels of Autonomy (LoA) of AI Legal Reasoning to the maturation of AI and Legal Argumentation (AILA).
arXiv Detail & Related papers (2020-09-11T22:05:40Z) - How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence [81.04070052740596]
Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain.
This paper introduces the history, the current state, and the future directions of research in LegalAI.
arXiv Detail & Related papers (2020-04-25T14:45:15Z)