Trade-Offs in Deploying Legal AI: Insights from a Public Opinion Study to Guide AI Risk Management
- URL: http://arxiv.org/abs/2602.09636v1
- Date: Tue, 10 Feb 2026 10:32:40 GMT
- Title: Trade-Offs in Deploying Legal AI: Insights from a Public Opinion Study to Guide AI Risk Management
- Authors: Kimon Kieslich, Sophie Morosoli, Nicholas Diakopoulos, Natali Helberger
- Abstract summary: Generative AI tools are increasingly used for legal tasks. The EU mandates risk assessment and audits before market introduction for some use cases. Other use cases do not fall under the AI Act's high-risk classifications.
- Score: 3.7782691747398913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative AI tools are increasingly used for legal tasks, including legal research, drafting documents, and even legal decision-making. As with other applications, the use of GenAI in the legal domain comes with various risks and benefits that need to be properly managed to ensure implementation in a way that serves public values and protects human rights. While the EU mandates risk assessment and audits before market introduction for some use cases (e.g., use by judges for the administration of justice), other use cases do not fall under the AI Act's high-risk classifications (e.g., use by citizens for legal consultation or drafting documents). Further, current risk management practices prioritize expert judgment in identifying and prioritizing risk factors, without a corresponding legal requirement to consult affected communities. Given the societal importance of the legal sector and the potentially transformative impact of GenAI within it, the acceptability and legitimacy of GenAI solutions also depend on public perceptions and on a better understanding of the risks and benefits citizens associate with the use of AI in the legal sector. In response, this paper presents data from a representative sample of German citizens (n=488) outlining citizens' perspectives on the use of GenAI for two legal tasks: legal consultation and legal mediation. Concretely, we i) systematically map risk and benefit factors for both legal tasks, ii) describe predictors that influence risk acceptance of the use of GenAI for those tasks, and iii) highlight emerging trade-off themes that citizens engage in when weighing up risk acceptability. Our results provide an empirical overview of citizens' concerns regarding risk management of GenAI in the legal domain, foregrounding critical themes that complement current risk assessment procedures.
Related papers
- "Make It Sound Like a Lawyer Wrote It": Scenarios of Potential Impacts of Generative AI for Legal Conflict Resolution [3.4902614817528157]
We surveyed participants in the EU and US about the potential impact of generative AI on legal conflict resolution. We analysed the prevalence of risk and benefit themes, as well as the types of anticipated legal tasks. We describe the emerging trade-offs that will affect decision-makers in the legal sector.
arXiv Detail & Related papers (2026-02-27T16:07:39Z) - Large Language Models' Complicit Responses to Illicit Instructions across Socio-Legal Contexts [54.15982476754607]
Large language models (LLMs) are now deployed at unprecedented scale, assisting millions of users in daily tasks. This study defines complicit facilitation as the provision of guidance or support that enables illicit user instructions. Using real-world legal cases and established legal frameworks, we construct an evaluation benchmark spanning 269 illicit scenarios and 50 illicit intents.
arXiv Detail & Related papers (2025-11-25T16:01:31Z) - An Overview of the Risk-based Model of AI Governance [0.0]
The 'Analysis' section proposes several criticisms of the risk-based approach to AI governance. It argues that the notion of risk is problematic, as its inherent normativity reproduces dominant and harmful narratives about whose interests matter. This paper concludes with the suggestion that existing risk governance scholarship can provide valuable insights toward improving risk-based AI governance.
arXiv Detail & Related papers (2025-07-21T06:56:04Z) - Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z) - Unsettled Law: Time to Generate New Approaches? [1.3651236252124068]
We identify several important and unsettled legal questions with profound ethical and societal implications arising from generative artificial intelligence (GenAI).
Our key contribution is formally identifying the issues that are unique to GenAI so scholars, practitioners, and others can conduct more useful investigations and discussions.
We argue that GenAI's unique attributes, including its general-purpose nature, reliance on massive datasets, and potential for both pervasive societal benefits and harms, necessitate a re-evaluation of existing legal paradigms.
arXiv Detail & Related papers (2024-07-02T05:51:41Z) - Evaluating AI for Law: Bridging the Gap with Open-Source Solutions [32.550204238857724]
This study evaluates the performance of general-purpose AI, like ChatGPT, in legal question-answering tasks.
It suggests leveraging foundational models enhanced by domain-specific knowledge to overcome these issues.
arXiv Detail & Related papers (2024-04-18T17:26:01Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - Statutory Professions in AI governance and their consequences for explainable AI [2.363388546004777]
Intentional and accidental harms arising from the use of AI have impacted the health, safety and rights of individuals.
We propose that a statutory profession framework be introduced as a necessary part of the AI regulatory framework.
arXiv Detail & Related papers (2023-06-15T08:51:28Z) - Normative Challenges of Risk Regulation of Artificial Intelligence and Automated Decision-Making [0.0]
Recent proposals aim at regulating artificial intelligence (AI) and automated decision-making (ADM).
The most salient example is the Artificial Intelligence Act (AIA) proposed by the European Commission.
This article addresses challenges for adequate risk regulation that arise primarily from the specific type of risks involved.
arXiv Detail & Related papers (2022-11-11T13:57:38Z) - How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence [81.04070052740596]
Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain.
This paper introduces the history, the current state, and the future directions of research in LegalAI.
arXiv Detail & Related papers (2020-04-25T14:45:15Z)