INACIA: Integrating Large Language Models in Brazilian Audit Courts:
Opportunities and Challenges
- URL: http://arxiv.org/abs/2401.05273v3
- Date: Mon, 26 Feb 2024 17:22:21 GMT
- Title: INACIA: Integrating Large Language Models in Brazilian Audit Courts:
Opportunities and Challenges
- Authors: Jayr Pereira, Andre Assumpcao, Julio Trecenti, Luiz Airosa, Caio
Lente, Jhonatan Cléto, Guilherme Dobins, Rodrigo Nogueira, Luis Mitchell,
Roberto Lotufo
- Abstract summary: INACIA is a groundbreaking system designed to integrate Large Language Models (LLMs) into the operational framework of the Brazilian Federal Court of Accounts (TCU).
We demonstrate INACIA's potential in extracting relevant information from case documents, evaluating its legal plausibility, and formulating propositions for judicial decision-making.
- Score: 7.366861473623427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces INACIA (Instrução Assistida com Inteligência
Artificial), a groundbreaking system designed to integrate Large Language
Models (LLMs) into the operational framework of the Brazilian Federal Court of
Accounts (TCU). The system automates various stages of case analysis, including
basic information extraction, admissibility examination, Periculum in mora and
Fumus boni iuris analyses, and recommendation generation. Through a series of
experiments, we demonstrate INACIA's potential in extracting relevant
information from case documents, evaluating its legal plausibility, and
formulating propositions for judicial decision-making. Utilizing a validation
dataset alongside LLMs, our evaluation methodology presents a novel approach to
assessing system performance, correlating highly with human judgment. These
results underscore INACIA's potential for handling complex legal tasks while also
acknowledging its current limitations. This study discusses possible
improvements and the broader implications of applying AI in legal contexts,
suggesting that INACIA represents a significant step towards integrating AI in
legal systems globally, albeit with cautious optimism grounded in the empirical
findings.
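
The abstract describes a staged pipeline (information extraction, admissibility examination, periculum in mora and fumus boni iuris analyses, recommendation generation) whose outputs are scored against a validation dataset with LLM assistance. The sketch below illustrates, in general terms, what one extraction stage and an LLM-based scoring step could look like; the prompts, field names, and the `call_llm` helper are illustrative assumptions, not code or prompts from the paper.

```python
# Hypothetical sketch (not the authors' implementation): one LLM-driven
# pipeline stage for basic case-information extraction, plus an LLM-as-judge
# style comparison against a validation-set reference answer.
import json


def call_llm(prompt: str) -> str:
    """Placeholder: wire this to any chat-completion backend of your choice."""
    raise NotImplementedError


EXTRACTION_PROMPT = """You are assisting a court-of-accounts analyst.
From the case document below, return JSON with the fields:
"parties", "alleged_irregularity", "requested_measure".

Document:
{document}
"""

JUDGE_PROMPT = """Compare the extracted information with the reference answer.
Reply with a single number from 0 (no overlap) to 1 (fully correct).

Extracted: {extracted}
Reference: {reference}
"""


def extract_basic_info(document: str) -> dict:
    """Stage 1 of a case-analysis pipeline: structured information extraction."""
    raw = call_llm(EXTRACTION_PROMPT.format(document=document))
    return json.loads(raw)


def judge_extraction(extracted: dict, reference: dict) -> float:
    """LLM-based scoring of an extraction against a validation reference."""
    raw = call_llm(JUDGE_PROMPT.format(
        extracted=json.dumps(extracted, ensure_ascii=False),
        reference=json.dumps(reference, ensure_ascii=False),
    ))
    return float(raw.strip())
```

Scores produced this way can then be correlated with human ratings, which is the kind of validation the abstract reports.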
Related papers
- Augmenting Legal Decision Support Systems with LLM-based NLI for Analyzing Social Media Evidence [0.0]
This paper presents our system description and error analysis of our entry for the NLLP 2024 shared task on Legal Natural Language Inference (L-NLI).
The task required classifying relationships as entailed, contradicted, or neutral, indicating any association between the review and the complaint.
Our system emerged as the winning submission, outperforming other entries by a substantial margin and demonstrating the effectiveness of our approach in legal text analysis.
arXiv Detail & Related papers (2024-10-21T13:20:15Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - COMPL-AI Framework: A Technical Interpretation and LLM Benchmarking Suite for the EU Artificial Intelligence Act [40.233017376716305]
The EU's Artificial Intelligence Act (AI Act) is a significant step towards responsible AI development.
It lacks a clear technical interpretation, making it difficult to assess models' compliance.
This work presents COMPL-AI, a comprehensive framework consisting of the first technical interpretation of the Act.
arXiv Detail & Related papers (2024-10-10T14:23:51Z) - An evidence-based methodology for human rights impact assessment (HRIA) in the development of AI data-intensive systems [49.1574468325115]
We show that human rights already underpin the decisions in the field of data use.
This work presents a methodology and a model for a Human Rights Impact Assessment (HRIA).
The proposed methodology is tested in concrete case studies to demonstrate its feasibility and effectiveness.
arXiv Detail & Related papers (2024-07-30T16:27:52Z) - Harnessing AI for efficient analysis of complex policy documents: a case study of Executive Order 14110 [44.99833362998488]
Policy documents, such as legislation, regulations, and executive orders, are crucial in shaping society.
This study aims to evaluate the potential of AI in streamlining policy analysis and to identify the strengths and limitations of current AI approaches.
arXiv Detail & Related papers (2024-06-10T11:19:28Z) - ROAST: Review-level Opinion Aspect Sentiment Target Joint Detection for ABSA [50.90538760832107]
This research presents a novel task, Review-Level Opinion Aspect Sentiment Target (ROAST).
ROAST seeks to close the gap between sentence-level and text-level ABSA by identifying every ABSA constituent at the review level.
We extend the available datasets to enable ROAST, addressing the drawbacks noted in previous research.
arXiv Detail & Related papers (2024-05-30T17:29:15Z) - Empowering Prior to Court Legal Analysis: A Transparent and Accessible Dataset for Defensive Statement Classification and Interpretation [5.646219481667151]
This paper introduces a novel dataset tailored for classification of statements made during police interviews, prior to court proceedings.
We introduce a fine-tuned DistilBERT model that achieves state-of-the-art performance in distinguishing truthful from deceptive statements.
We also present an XAI interface that empowers both legal professionals and non-specialists to interact with and benefit from our system.
arXiv Detail & Related papers (2024-05-17T11:22:27Z) - Rethinking Legal Compliance Automation: Opportunities with Large Language Models [2.9088208525097365]
We argue that the examination of (textual) legal artifacts should, first, employ broader context than sentences.
We present a compliance analysis approach designed to address these limitations.
arXiv Detail & Related papers (2024-04-22T17:10:27Z) - Advancing Legal Reasoning: The Integration of AI to Navigate
Complexities and Biases in Global Jurisprudence with Semi-Automated
Arbitration Processes (SAAPs) [0.0]
This study focuses on the analysis of court judgments spanning five countries, including the United States, the United Kingdom, Rwanda, Sweden and Hong Kong.
By incorporating Advanced Language Models (ALMs) and a newly introduced human-AI collaborative framework, this paper seeks to analyze a Grounded Theory-based research design with AI.
arXiv Detail & Related papers (2024-02-06T16:47:34Z) - A Comprehensive Evaluation of Large Language Models on Legal Judgment
Prediction [60.70089334782383]
Large language models (LLMs) have demonstrated great potential for domain-specific applications.
Recent disputes over GPT-4's law evaluation raise questions concerning LLMs' performance in real-world legal tasks.
We design practical baseline solutions based on LLMs and test on the task of legal judgment prediction.
arXiv Detail & Related papers (2023-10-18T07:38:04Z) - An Uncommon Task: Participatory Design in Legal AI [64.54460979588075]
We examine a notable yet understudied AI design process in the legal domain that took place over a decade ago.
We show how an interactive simulation methodology allowed computer scientists and lawyers to become co-designers.
arXiv Detail & Related papers (2022-03-08T15:46:52Z)