Deconfounding Legal Judgment Prediction for European Court of Human
Rights Cases Towards Better Alignment with Experts
- URL: http://arxiv.org/abs/2210.13836v1
- Date: Tue, 25 Oct 2022 08:37:25 GMT
- Title: Deconfounding Legal Judgment Prediction for European Court of Human
Rights Cases Towards Better Alignment with Experts
- Authors: T.Y.S.S Santosh, Shanshan Xu, Oana Ichim and Matthias Grabmair
- Abstract summary: This work demonstrates that Legal Judgement Prediction systems without expert-informed adjustments can be vulnerable to shallow, distracting surface signals.
To mitigate this, we use domain expertise to strategically identify statistically predictive but legally irrelevant information.
- Score: 1.252149409594807
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work demonstrates that Legal Judgement Prediction systems without
expert-informed adjustments can be vulnerable to shallow, distracting surface
signals that arise from corpus construction, case distribution, and confounding
factors. To mitigate this, we use domain expertise to strategically identify
statistically predictive but legally irrelevant information. We adopt
adversarial training to prevent the system from relying on it. We evaluate our
deconfounded models by employing interpretability techniques and comparing to
expert annotations. Quantitative experiments and qualitative analysis show that
our deconfounded model consistently aligns better with expert rationales than
baselines trained for prediction only. We further contribute a set of reference
expert annotations to the validation and testing partitions of an existing
benchmark dataset of European Court of Human Rights cases.
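The abstract describes adversarial training against statistically predictive but legally irrelevant signals, without giving code. Below is a minimal sketch of that idea using a gradient reversal layer in PyTorch: an auxiliary head tries to predict the distracting attribute, and the reversed gradient pushes the shared encoder to discard it. The layer sizes, head names, and lambda_adv are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class DeconfoundedClassifier(nn.Module):
    def __init__(self, in_dim=768, n_outcomes=2, n_confounder=5, lambda_adv=1.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.judgment_head = nn.Linear(256, n_outcomes)      # main task: violation vs. no violation
        self.confounder_head = nn.Linear(256, n_confounder)  # adversary: predicts the irrelevant signal
        self.lambda_adv = lambda_adv

    def forward(self, x):
        h = self.encoder(x)
        reversed_h = GradReverse.apply(h, self.lambda_adv)
        return self.judgment_head(h), self.confounder_head(reversed_h)

# Both heads are trained jointly; the reversed gradient makes the encoder
# *remove* the information the confounder head would need to succeed.
model = DeconfoundedClassifier()
ce = nn.CrossEntropyLoss()
x = torch.randn(8, 768)                      # toy document features
y_task = torch.randint(0, 2, (8,))           # judgment labels
y_conf = torch.randint(0, 5, (8,))           # legally irrelevant attribute labels
logits_task, logits_conf = model(x)
loss = ce(logits_task, y_task) + ce(logits_conf, y_conf)
loss.backward()
```

The same adversarial-debiasing pattern underlies the first related paper listed below.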
Related papers
- Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework, which leverages adversarial debiasing, is evaluated on four case studies and shows a significant reduction in the contribution of biased variables to the predicted value.
arXiv Detail & Related papers (2024-10-03T15:56:03Z)
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Enabling Discriminative Reasoning in LLMs for Legal Judgment Prediction [23.046342240176575]
We introduce the Ask-Discriminate-Predict (ADAPT) reasoning framework inspired by human reasoning.
ADAPT involves decomposing case facts, discriminating among potential charges, and predicting the final judgment.
Experiments conducted on two widely-used datasets demonstrate the superior performance of our framework in legal judgment prediction.
arXiv Detail & Related papers (2024-07-02T05:43:15Z)
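The ADAPT entry above outlines a three-stage pipeline (decompose, discriminate, predict). As a rough sketch of that shape, the snippet below chains three prompts through a generic chat-completion helper; the llm stub and the prompt wording are assumptions, not the paper's prompts.

```python
def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (wire up an API or local model here)."""
    raise NotImplementedError

def adapt_predict(case_facts: str, candidate_charges: list[str]) -> str:
    # 1. Ask: decompose the raw case facts into salient legal aspects.
    aspects = llm(f"Decompose the following case facts into key legal aspects:\n{case_facts}")
    # 2. Discriminate: narrow the candidate charges using the decomposed aspects.
    shortlist = llm(
        f"Given these aspects:\n{aspects}\n"
        f"Which of the following charges plausibly apply? {', '.join(candidate_charges)}"
    )
    # 3. Predict: commit to a final judgment conditioned on the shortlist.
    return llm(f"Aspects:\n{aspects}\nPlausible charges: {shortlist}\nState the final judgment.")
```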
- Legal Judgment Reimagined: PredEx and the Rise of Intelligent AI Interpretation in Indian Courts [6.339932924789635]
Prediction with Explanation (PredEx) is the largest expert-annotated dataset for legal judgment prediction and explanation in the Indian context.
This corpus significantly enhances the training and evaluation of AI models in legal analysis.
arXiv Detail & Related papers (2024-06-06T14:57:48Z)
- Backdoor-based Explainable AI Benchmark for High Fidelity Evaluation of Attribution Methods [49.62131719441252]
Attribution methods compute importance scores for input features to explain the output predictions of deep models.
In this work, we first identify a set of fidelity criteria that reliable benchmarks for attribution methods are expected to fulfill.
We then introduce a Backdoor-based eXplainable AI benchmark (BackX) that adheres to the desired fidelity criteria.
arXiv Detail & Related papers (2024-05-02T13:48:37Z)
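BackX above benchmarks attribution methods rather than proposing one. For readers unfamiliar with what is being benchmarked, here is a bare-bones gradient-times-input attribution in PyTorch; the toy model is an assumption, and this is one simple attribution method, not part of BackX itself.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))  # toy classifier
x = torch.randn(1, 4, requires_grad=True)
score = model(x)[0, 1]               # logit of the class being explained
score.backward()                     # gradients of that logit w.r.t. the input
attribution = (x.grad * x).detach()  # gradient x input: per-feature importance scores
```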
- Towards Explainability and Fairness in Swiss Judgement Prediction: Benchmarking on a Multilingual Dataset [2.7463268699570134]
This study examines explainability and fairness in Legal Judgement Prediction (LJP) models.
We evaluate the explainability performance of state-of-the-art monolingual and multilingual BERT-based LJP models.
We introduce a novel evaluation framework, Lower Court Insertion (LCI), which allows us to quantify the influence of lower court information on model predictions.
arXiv Detail & Related papers (2024-02-26T20:42:40Z)
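The LCI framework above is, as described, a counterfactual probe: insert lower-court information into the input and measure the shift in the model's prediction. A minimal sketch, assuming a predict_proba callable that maps text to a single probability and an insertion template of my own choosing:

```python
def lci_influence(predict_proba, facts: str, lower_courts: list[str]) -> dict[str, float]:
    """Quantify how much inserting each lower court shifts the predicted probability."""
    base = predict_proba(facts)  # probability from the facts alone
    shifts = {}
    for court in lower_courts:
        perturbed = predict_proba(f"Lower court: {court}. {facts}")
        shifts[court] = perturbed - base  # positive = the court name pushes the prediction up
    return shifts
```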
- VECHR: A Dataset for Explainable and Robust Classification of Vulnerability Type in the European Court of Human Rights [2.028075209232085]
We present VECHR, a novel expert-annotated multi-label dataset of vulnerability type classification and explanation rationale.
We benchmark the performance of state-of-the-art models on VECHR from both prediction and explainability perspectives.
Our dataset poses unique challenges, offering significant room for improvement in performance, explainability, and robustness.
arXiv Detail & Related papers (2023-10-17T16:05:52Z)
- Debiasing Recommendation by Learning Identifiable Latent Confounders [49.16119112336605]
Confounding bias arises due to the presence of unmeasured variables that can affect both a user's exposure and feedback.
Existing methods either (1) make untenable assumptions about these unmeasured variables or (2) directly infer latent confounders from users' exposure.
We propose a novel method, i.e., identifiable deconfounder (iDCF), which leverages a set of proxy variables to resolve the aforementioned non-identification issue.
arXiv Detail & Related papers (2023-02-10T05:10:26Z)
- Leveraging task dependency and contrastive learning for Legal Judgement Prediction on the European Court of Human Rights [1.252149409594807]
We report on an experiment in legal judgement prediction on European Court of Human Rights cases.
Our models produce a small but consistent improvement in prediction performance over single-task and joint models without contrastive loss.
arXiv Detail & Related papers (2023-02-01T21:38:47Z)
- Excess risk analysis for epistemic uncertainty with application to variational inference [110.4676591819618]
We present a novel epistemic uncertainty (EU) analysis in the frequentist setting, where data is generated from an unknown distribution.
We show a relation between generalization ability and widely used EU measurements, such as the variance and entropy of the predictive distribution.
We propose a new variational inference method that directly controls prediction and EU evaluation performance, based on PAC-Bayesian theory.
arXiv Detail & Related papers (2022-06-02T12:12:24Z)
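The entry above relates generalization to EU measurements such as the variance and entropy of the predictive distribution. A small NumPy sketch of those two quantities, assuming class-probability samples from repeated stochastic passes (e.g., MC dropout or an ensemble) are produced elsewhere:

```python
import numpy as np

def predictive_uncertainty(mc_probs: np.ndarray):
    """mc_probs: (n_samples, n_classes) class probabilities from repeated stochastic passes."""
    mean_p = mc_probs.mean(axis=0)                      # the predictive distribution
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum()  # entropy of the predictive distribution
    variance = mc_probs.var(axis=0).sum()               # total variance across the samples
    return entropy, variance

# Example: three stochastic passes over a binary classifier's output.
ent, var = predictive_uncertainty(np.array([[0.7, 0.3], [0.5, 0.5], [0.8, 0.2]]))
```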
- Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective debiasing technique in an unsupervised manner.
We perform clustering in the feature embedding space and identify pseudo-attributes from the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representations.
arXiv Detail & Related papers (2021-08-06T05:20:46Z)
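The last entry above describes clustering the feature embedding space into pseudo-attributes and reweighting accordingly. A schematic version with scikit-learn follows; the cluster count and the inverse-cluster-size weighting rule are assumptions standing in for the paper's scheme.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_reweight(embeddings: np.ndarray, n_clusters: int = 10) -> np.ndarray:
    """Assign pseudo-attributes by clustering, then upweight rare clusters."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    counts = np.bincount(labels, minlength=n_clusters)
    weights = 1.0 / counts[labels]   # small (likely bias-conflicting) clusters get larger weight
    return weights / weights.mean()  # normalize so the average weight is 1

# Per-example weights to plug into a weighted training loss.
w = cluster_reweight(np.random.randn(100, 16))
```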
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.