"Make It Sound Like a Lawyer Wrote It": Scenarios of Potential Impacts of Generative AI for Legal Conflict Resolution
- URL: http://arxiv.org/abs/2602.24130v1
- Date: Fri, 27 Feb 2026 16:07:39 GMT
- Title: "Make It Sound Like a Lawyer Wrote It": Scenarios of Potential Impacts of Generative AI for Legal Conflict Resolution
- Authors: Kimon Kieslich, Natali Helberger, Nicholas Diakopoulos
- Abstract summary: We surveyed participants in the EU and US about the potential impact of generative AI on legal conflict resolution. We analysed the prevalence of risk and benefit themes, as well as the types of anticipated legal tasks. We describe the emerging trade-offs that will affect decision-makers in the legal sector.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative AI (GenAI) tools are transforming critical societal domains, including the legal sector. While these tools create opportunities such as increased efficiency and potential improvements in access to justice, they also present new challenges, such as the risk of inaccurate legal advice and questions about the legitimacy of legal decisions. However, the full impact remains to be seen and ultimately depends on the way GenAI tools are implemented and used by both legal professionals and citizens. This makes anticipating and managing the positive and negative impacts of GenAI use in the legal domain challenging, but also important for guiding the digital transformation of the legal sector in a societally desirable direction. In this paper, we set out to explore the spectrum of possible impacts of GenAI in the legal domain, examining how this technology is anticipated to be used and the potential implications this might have for the legal sector and society. Using a scenario writing method, we surveyed participants in the EU and US, including both citizens and legal professionals, about the potential impact of generative AI on legal conflict resolution. Respondents were tasked with writing a narrative, drawing on their experience or expertise, about a future in which AI is used throughout the legal process. We qualitatively analysed the prevalence of risk and benefit themes, as well as the types of anticipated legal tasks. We then compared these findings based on expertise status (legal experts versus citizens) and regional regulatory background (the EU with the EU AI Act versus the US with an industry self-regulatory approach). Finally, we describe the emerging trade-offs that will affect decision-makers in the legal sector.
Related papers
- Trade-Offs in Deploying Legal AI: Insights from a Public Opinion Study to Guide AI Risk Management [3.7782691747398913]
Generative AI tools are increasingly used for legal tasks. The EU mandates risk assessment and audits before market introduction for some use cases. Other use cases do not fall under the AI Act's high-risk classifications.
arXiv Detail & Related papers (2026-02-10T10:32:40Z)
- LegalOne: A Family of Foundation Models for Reliable Legal Reasoning [54.57434222018289]
We present LegalOne, a family of foundational models specifically tailored for the Chinese legal domain. LegalOne is developed through a comprehensive three-phase pipeline designed to master legal reasoning. We publicly release the LegalOne weights and the LegalKit evaluation framework to advance the field of Legal AI.
arXiv Detail & Related papers (2026-01-31T10:18:32Z)
- Ethical Challenges of Using Artificial Intelligence in Judiciary [0.0]
AI has the potential to revolutionize the functioning of the judiciary and the dispensation of justice. Courts around the world have begun embracing AI technology as a means to enhance the administration of justice. However, the use of AI in the judiciary poses a range of ethical challenges.
arXiv Detail & Related papers (2025-04-27T15:51:56Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Unsettled Law: Time to Generate New Approaches? [1.3651236252124068]
We identify several important and unsettled legal questions with profound ethical and societal implications arising from generative artificial intelligence (GenAI).
Our key contribution is formally identifying the issues that are unique to GenAI so scholars, practitioners, and others can conduct more useful investigations and discussions.
We argue that GenAI's unique attributes, including its general-purpose nature, reliance on massive datasets, and potential for both pervasive societal benefits and harms, necessitate a re-evaluation of existing legal paradigms.
arXiv Detail & Related papers (2024-07-02T05:51:41Z)
- Securing the Future of GenAI: Policy and Technology [50.586585729683776]
Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety.
A workshop co-organized by Google, the University of Wisconsin-Madison, and Stanford University aimed to bridge this gap between GenAI policy and technology.
This paper summarizes the discussions during the workshop, which addressed questions such as: How can regulation be designed without hindering technological progress?
arXiv Detail & Related papers (2024-05-21T20:30:01Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Lawformer: A Pre-trained Language Model for Chinese Legal Long Documents [56.40163943394202]
We release a Longformer-based pre-trained language model, named Lawformer, for understanding long Chinese legal documents.
We evaluate Lawformer on a variety of LegalAI tasks, including judgment prediction, similar case retrieval, legal reading comprehension, and legal question answering.
arXiv Detail & Related papers (2021-05-09T09:39:25Z)
- AI and Legal Argumentation: Aligning the Autonomous Levels of AI Legal Reasoning [0.0]
Legal argumentation is a vital cornerstone of justice, underpinning an adversarial form of law.
Extensive research has attempted to augment or undertake legal argumentation via the use of computer-based automation, including Artificial Intelligence (AI).
An innovative meta-approach is proposed to apply the Levels of Autonomy (LoA) of AI Legal Reasoning to the maturation of AI and Legal Argumentation (AILA).
arXiv Detail & Related papers (2020-09-11T22:05:40Z)
- How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence [81.04070052740596]
Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain.
This paper introduces the history, the current state, and the future directions of research in LegalAI.
arXiv Detail & Related papers (2020-04-25T14:45:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.