AppealCase: A Dataset and Benchmark for Civil Case Appeal Scenarios
- URL: http://arxiv.org/abs/2505.16514v2
- Date: Sun, 25 May 2025 11:30:02 GMT
- Title: AppealCase: A Dataset and Benchmark for Civil Case Appeal Scenarios
- Authors: Yuting Huang, Meitong Guo, Yiquan Wu, Ang Li, Xiaozhong Liu, Keting Yin, Changlong Sun, Fei Wu, Kun Kuang
- Abstract summary: We present the AppealCase dataset, consisting of 10,000 pairs of real-world, matched first-instance and second-instance documents across 91 categories of civil cases. The dataset also includes detailed annotations along five dimensions central to appellate review: judgment reversals, reversal reasons, cited legal provisions, claim-level decisions, and whether there is new information in the second instance. Experimental results reveal that all current models achieve less than 50% F1 scores on the judgment reversal prediction task, highlighting the complexity and challenge of the appeal scenario.
- Score: 47.83822985839837
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent advances in LegalAI have primarily focused on individual case judgment analysis, often overlooking the critical appellate process within the judicial system. Appeals serve as a core mechanism for error correction and ensuring fair trials, making them highly significant both in practice and in research. To address this gap, we present the AppealCase dataset, consisting of 10,000 pairs of real-world, matched first-instance and second-instance documents across 91 categories of civil cases. The dataset also includes detailed annotations along five dimensions central to appellate review: judgment reversals, reversal reasons, cited legal provisions, claim-level decisions, and whether there is new information in the second instance. Based on these annotations, we propose five novel LegalAI tasks and conduct a comprehensive evaluation across 20 mainstream models. Experimental results reveal that all current models achieve less than 50% F1 scores on the judgment reversal prediction task, highlighting the complexity and challenge of the appeal scenario. We hope that the AppealCase dataset will spur further research in LegalAI for appellate case analysis and contribute to improving consistency in judicial decision-making.
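To make the paired-document structure and the headline metric concrete, here is a minimal Python sketch of how a record with the five annotation dimensions might be represented, together with the binary F1 computation used for the judgment-reversal prediction task. All field names and example values are illustrative assumptions, not the released AppealCase schema.

```python
# Minimal sketch of a hypothetical AppealCase-style record and the
# judgment-reversal F1 evaluation described in the abstract.
# Field names and example values are illustrative assumptions only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AppealCaseRecord:
    case_category: str                 # one of the 91 civil case categories
    first_instance_doc: str            # text of the first-instance judgment
    second_instance_doc: str           # text of the second-instance judgment
    judgment_reversed: bool            # dim. 1: was the judgment reversed?
    reversal_reasons: List[str] = field(default_factory=list)   # dim. 2
    cited_provisions: List[str] = field(default_factory=list)   # dim. 3
    claim_decisions: List[str] = field(default_factory=list)    # dim. 4: per-claim outcomes
    has_new_information: bool = False  # dim. 5: new information in the second instance

def reversal_f1(gold: List[bool], pred: List[bool]) -> float:
    """Binary F1 for judgment-reversal prediction (reversed = positive class)."""
    tp = sum(g and p for g, p in zip(gold, pred))
    fp = sum((not g) and p for g, p in zip(gold, pred))
    fn = sum(g and (not p) for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Example record (contents abridged) and a toy evaluation over three cases.
example = AppealCaseRecord(
    case_category="private lending dispute",   # hypothetical category label
    first_instance_doc="...",
    second_instance_doc="...",
    judgment_reversed=True,
    reversal_reasons=["error in fact finding"],
)
gold = [True, False, True]
pred = [True, False, False]
print(f"Judgment-reversal F1: {reversal_f1(gold, pred):.2f}")  # 0.67
```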
Related papers
- ASP2LJ: An Adversarial Self-Play Lawyer Augmented Legal Judgment Framework [21.003203706712643]
Legal Judgment Prediction (LJP) aims to predict judicial outcomes, including relevant legal charges, terms, and fines. Current datasets, derived from authentic cases, suffer from high human annotation costs and imbalanced distributions. We propose an Adversarial Self-Play Lawyer Augmented Legal Judgment Framework, called ASP2LJ. Our framework enables a judge to reference evolved lawyers' arguments, improving the objectivity, fairness, and rationality of judicial decisions.
arXiv Detail & Related papers (2025-06-11T06:55:40Z) - LegalSearchLM: Rethinking Legal Case Retrieval as Legal Elements Generation [5.243460995467895]
We present LEGAR BENCH, the first large-scale Korean Legal Case Retrieval benchmark, covering 411 diverse crime types in queries over 1.2M legal cases. We also present LegalSearchLM, a retrieval model that performs legal element reasoning over the query case and directly generates content grounded in the target cases.
arXiv Detail & Related papers (2025-05-28T09:02:41Z) - AnnoCaseLaw: A Richly-Annotated Dataset For Benchmarking Explainable Legal Judgment Prediction [56.797874973414636]
AnnoCaseLaw is a first-of-its-kind dataset of 471 meticulously annotated U.S. Appeals Court negligence cases. Our dataset lays the groundwork for more human-aligned, explainable Legal Judgment Prediction models. Results demonstrate that LJP remains a formidable task, with application of legal precedent proving particularly difficult.
arXiv Detail & Related papers (2025-02-28T19:14:48Z) - JudgeRank: Leveraging Large Language Models for Reasoning-Intensive Reranking [81.88787401178378]
We introduce JudgeRank, a novel agentic reranker that emulates human cognitive processes when assessing document relevance.
We evaluate JudgeRank on the reasoning-intensive BRIGHT benchmark, demonstrating substantial performance improvements over first-stage retrieval methods.
In addition, JudgeRank performs on par with fine-tuned state-of-the-art rerankers on the popular BEIR benchmark, validating its zero-shot generalization capability.
arXiv Detail & Related papers (2024-10-31T18:43:12Z) - AgentsCourt: Building Judicial Decision-Making Agents with Court Debate Simulation and Legal Knowledge Augmentation [19.733007669738008]
We propose a novel multi-agent framework, AgentsCourt, for judicial decision-making.
Our framework follows the classic court trial process, consisting of court debate simulation, legal resources retrieval and decision-making refinement.
To support this task, we construct a large-scale legal knowledge base, Legal-KB, with multi-resource legal knowledge.
arXiv Detail & Related papers (2024-03-05T13:30:02Z) - PILOT: Legal Case Outcome Prediction with Case Law [43.680862577060765]
We identify two unique challenges in making legal case outcome predictions with case law.
First, it is crucial to identify relevant precedent cases that serve as fundamental evidence for judges during decision-making.
Second, it is necessary to consider the evolution of legal principles over time, as early cases may adhere to different legal contexts.
arXiv Detail & Related papers (2024-01-28T21:18:05Z) - Multi-Defendant Legal Judgment Prediction via Hierarchical Reasoning [49.23103067844278]
We propose the task of multi-defendant LJP, which aims to automatically predict the judgment results for each defendant of multi-defendant cases.
Two challenges arise with the task of multi-defendant LJP: (1) indistinguishable judgment results among various defendants; and (2) the lack of a real-world dataset for training and evaluation.
arXiv Detail & Related papers (2023-12-10T04:46:30Z) - Fact-based Court Judgment Prediction [0.5439020425819]
This extended abstract focuses on fact-based judgment prediction within the context of Indian legal documents.
We introduce two distinct problem variations: one based solely on facts, and another combining facts with rulings from lower courts (RLC).
Our research aims to enhance early-phase case outcome prediction, offering significant benefits to legal professionals and the general public.
arXiv Detail & Related papers (2023-11-22T12:39:28Z) - MUSER: A Multi-View Similar Case Retrieval Dataset [65.36779942237357]
Similar case retrieval (SCR) is a representative legal AI application that plays a pivotal role in promoting judicial fairness.
Existing SCR datasets only focus on the fact description section when judging the similarity between cases.
We present MUSER, a similar case retrieval dataset based on multi-view similarity measurement and comprehensive legal elements, with sentence-level legal element annotations.
arXiv Detail & Related papers (2023-10-24T08:17:11Z) - Automated Refugee Case Analysis: An NLP Pipeline for Supporting Legal Practitioners [0.0]
We introduce an end-to-end pipeline for retrieving, processing, and extracting targeted information from legal cases.
We investigate an under-studied legal domain with a case study on refugee law in Canada.
arXiv Detail & Related papers (2023-05-24T19:37:23Z) - Sequential Multi-task Learning with Task Dependency for Appeal Judgment Prediction [28.505366852202794]
Legal Judgment Prediction (LJP) aims to automatically predict judgment results, such as charges, relevant law articles, and the term of penalty.
This paper concerns a worthwhile but not well-studied LJP task, Appeal Judgment Prediction (AJP), which predicts the judgment of an appellate court on an appeal case.
We propose a Sequential Multi-task Learning Framework with Task Dependency for Appeal Judgement Prediction (SMAJudge) to address these challenges.
arXiv Detail & Related papers (2022-03-09T08:51:13Z) - Legal Judgment Prediction with Multi-Stage Case Representation Learning in the Real Court Setting [25.53133777558123]
We introduce a novel dataset from real courtrooms to predict the legal judgment in a reasonably encyclopedic manner.
An extensive set of experiments with a large civil trial data set shows that the proposed model can more accurately characterize the interactions among claims, fact and debate for legal judgment prediction.
arXiv Detail & Related papers (2021-07-12T04:27:14Z) - Equality before the Law: Legal Judgment Consistency Analysis for Fairness [55.91612739713396]
In this paper, we propose an evaluation metric for judgment inconsistency, the Legal Inconsistency Coefficient (LInCo).
We simulate judges from different groups with legal judgment prediction (LJP) models and measure the judicial inconsistency with the disagreement of the judgment results given by LJP models trained on different groups.
We employ LInCo to explore the inconsistency in real cases and come to the following observations: (1) Both regional and gender inconsistency exist in the legal system, but gender inconsistency is much less than regional inconsistency.
arXiv Detail & Related papers (2021-03-25T14:28:00Z)
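For intuition about the inconsistency measure described in the last entry, the sketch below approximates a LInCo-style score as the average pairwise disagreement between predictions of LJP models trained on different groups. The exact coefficient is defined in the cited paper, so treat this as an assumed simplification rather than the authors' formula; all names here are hypothetical.

```python
# Assumed simplification of a LInCo-style inconsistency measure:
# average pairwise disagreement between judgments produced by LJP models
# trained on different groups (e.g. regions), evaluated on the same cases.
from itertools import combinations
from typing import Dict, List

def pairwise_inconsistency(predictions: Dict[str, List[int]]) -> float:
    """predictions maps a group name to that group's model outputs on shared cases."""
    groups = list(predictions)
    pairs = list(combinations(groups, 2))
    if not pairs:
        return 0.0
    disagreement = 0.0
    for a, b in pairs:
        pa, pb = predictions[a], predictions[b]
        disagreement += sum(x != y for x, y in zip(pa, pb)) / len(pa)
    return disagreement / len(pairs)

# Toy example: three region-specific models judging the same five cases.
preds = {
    "region_A": [1, 0, 1, 1, 0],
    "region_B": [1, 0, 0, 1, 0],
    "region_C": [1, 1, 1, 1, 0],
}
print(f"Mean pairwise disagreement: {pairwise_inconsistency(preds):.2f}")  # 0.27
```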