Equality before the Law: Legal Judgment Consistency Analysis for
Fairness
- URL: http://arxiv.org/abs/2103.13868v1
- Date: Thu, 25 Mar 2021 14:28:00 GMT
- Title: Equality before the Law: Legal Judgment Consistency Analysis for
Fairness
- Authors: Yuzhong Wang, Chaojun Xiao, Shirong Ma, Haoxi Zhong, Cunchao Tu,
Tianyang Zhang, Zhiyuan Liu, Maosong Sun
- Abstract summary: In this paper, we propose an evaluation metric for judgment inconsistency, the Legal Inconsistency Coefficient (LInCo).
We simulate judges from different groups with legal judgment prediction (LJP) models and measure the judicial inconsistency with the disagreement of the judgment results given by LJP models trained on different groups.
We employ LInCo to explore the inconsistency in real cases and come to the following observations: (1) Both regional and gender inconsistency exist in the legal system, but gender inconsistency is much less than regional inconsistency.
- Score: 55.91612739713396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a legal system, judgment consistency is regarded as one of the most
important manifestations of fairness. However, due to the complexity of factual
elements that impact sentencing in real-world scenarios, few works have been
done on quantitatively measuring judgment consistency towards real-world data.
In this paper, we propose an evaluation metric for judgment inconsistency,
Legal Inconsistency Coefficient (LInCo), which aims to evaluate inconsistency
between data groups divided by specific features (e.g., gender, region, race).
We propose to simulate judges from different groups with legal judgment
prediction (LJP) models and measure the judicial inconsistency with the
disagreement of the judgment results given by LJP models trained on different
groups. Experimental results on the synthetic data verify the effectiveness of
LInCo. We further employ LInCo to explore the inconsistency in real cases and
come to the following observations: (1) Both regional and gender inconsistency
exist in the legal system, but gender inconsistency is much less than regional
inconsistency; (2) The level of regional inconsistency varies little across
different time periods; (3) In general, judicial inconsistency is negatively
correlated with the severity of the criminal charges. Besides, we use LInCo to
evaluate the performance of several de-bias methods, such as adversarial
learning, and find that these mechanisms can effectively help LJP models
avoid data bias.
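The paper measures inconsistency as the disagreement between judgment predictions produced by LJP models trained on different groups. The exact LInCo formula is not reproduced in this summary; the sketch below is a hypothetical illustration of a mean-pairwise-disagreement coefficient in that spirit, not the paper's definition:

```python
import itertools

def linco(predictions):
    """Hypothetical LInCo-style coefficient: mean pairwise disagreement
    rate between judgment predictions from models trained on different
    groups. 0 means all group models agree on every case."""
    pairs = list(itertools.combinations(predictions, 2))
    if not pairs:
        return 0.0
    disagreement = 0.0
    for a, b in pairs:
        # fraction of cases on which the two group models disagree
        disagreement += sum(x != y for x, y in zip(a, b)) / len(a)
    return disagreement / len(pairs)

# Simulated binary judgments on the same 6 cases by models
# trained on data from three different regions (illustrative).
region_a = [1, 0, 1, 1, 0, 1]
region_b = [1, 0, 0, 1, 0, 1]
region_c = [1, 1, 1, 1, 0, 1]
print(round(linco([region_a, region_b, region_c]), 4))  # → 0.2222
```

Under this reading, a higher coefficient means the simulated "judges" from different groups would decide the same cases differently more often.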
Related papers
- Multi-Defendant Legal Judgment Prediction via Hierarchical Reasoning [49.23103067844278]
We propose the task of multi-defendant LJP, which aims to automatically predict the judgment results for each defendant of multi-defendant cases.
Two challenges arise with the task of multi-defendant LJP: (1) indistinguishable judgment results among various defendants; and (2) the lack of a real-world dataset for training and evaluation.
arXiv Detail & Related papers (2023-12-10T04:46:30Z)
- MUSER: A Multi-View Similar Case Retrieval Dataset [65.36779942237357]
Similar case retrieval (SCR) is a representative legal AI application that plays a pivotal role in promoting judicial fairness.
Existing SCR datasets only focus on the fact description section when judging the similarity between cases.
We present MUSER, a similar case retrieval dataset based on multi-view similarity measurement, with comprehensive sentence-level legal element annotations.
arXiv Detail & Related papers (2023-10-24T08:17:11Z)
- Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity & Conditional Demographic Disparity [3.5607241839298878]
Empirical evidence suggests that algorithmic decisions driven by Machine Learning (ML) techniques threaten to discriminate against legally protected groups or create new sources of unfairness.
This work aims at assessing up to what point we can assure legal fairness through fairness metrics and under fairness constraints.
Our experiments and analysis suggest that AI-assisted decision-making can be fair from a legal perspective depending on the case at hand and the legal justification.
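Demographic parity, one of the two metrics this entry examines, can be checked directly from model outputs: it compares the positive-decision rate across protected groups. A minimal sketch (the predictions and group labels below are illustrative, not from the paper):

```python
def demographic_parity_diff(y_pred, group):
    """Demographic parity difference: the gap between the highest and
    lowest positive-outcome rates across groups. 0 means every group
    receives positive decisions at the same rate."""
    rates = {}
    for g in set(group):
        preds = [p for p, gi in zip(y_pred, group) if gi == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# 0 = unfavorable decision, 1 = favorable; two protected groups
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_diff(y_pred, group))  # → 0.5
```

Here group 0 receives favorable decisions 75% of the time versus 25% for group 1, a demographic parity gap of 0.5; conditional demographic disparity additionally conditions this comparison on legitimate explanatory variables.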
arXiv Detail & Related papers (2023-06-14T09:38:05Z)
- Beyond Incompatibility: Trade-offs between Mutually Exclusive Fairness Criteria in Machine Learning and Law [2.959308758321417]
We present a novel algorithm (FAir Interpolation Method: FAIM) for continuously interpolating between three fairness criteria.
We demonstrate the effectiveness of our algorithm when applied to synthetic data, the COMPAS data set, and a new, real-world data set from the e-commerce sector.
arXiv Detail & Related papers (2022-12-01T12:47:54Z)
- Exploiting Contrastive Learning and Numerical Evidence for Confusing Legal Judgment Prediction [46.71918729837462]
Given the fact description text of a legal case, legal judgment prediction aims to predict the case's charge, law article and penalty term.
Previous studies fail to distinguish different classification errors with a standard cross-entropy classification loss.
We propose a MoCo-based supervised contrastive learning approach to learn distinguishable representations.
We further enhance the representation of the fact description with extracted crime amounts which are encoded by a pre-trained numeracy model.
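The entry describes a MoCo-based supervised contrastive objective; the sketch below shows a simplified supervised contrastive loss (same-label cases pulled together, different-label cases pushed apart) that omits the momentum encoder and queue MoCo adds, so it is an illustration of the family of losses rather than the paper's method:

```python
import numpy as np

def sup_con_loss(embeddings, labels, temperature=0.1):
    """Simplified supervised contrastive loss: for each anchor case,
    treat other cases with the same charge label as positives and
    maximize their similarity relative to all other cases."""
    # L2-normalize so the dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        others = [j for j in range(n) if j != i]
        denom = np.sum(np.exp(sim[i, others]))
        for j in positives:
            loss += -np.log(np.exp(sim[i, j]) / denom)
            count += 1
    return loss / count

# Toy fact-description embeddings for 6 cases over 3 charge labels
rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 8))
labels = [0, 0, 1, 1, 2, 2]
print(round(sup_con_loss(emb, labels), 4))
```

Minimizing this loss encourages cases with the same charge to cluster in embedding space, which is what makes easily confused charges more separable.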
arXiv Detail & Related papers (2022-11-15T15:53:56Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Addressing Fairness, Bias and Class Imbalance in Machine Learning: the FBI-loss [11.291571222801027]
We propose a unified loss correction to address issues related to Fairness, Biases and Imbalances (FBI-loss).
The correction capabilities of the proposed approach are assessed on three real-world benchmarks.
arXiv Detail & Related papers (2021-05-13T15:01:14Z)
- Algorithmic Decision Making with Conditional Fairness [48.76267073341723]
We define conditional fairness as a more sound fairness metric by conditioning on the fairness variables.
We propose a Derivable Conditional Fairness Regularizer (DCFR) to track the trade-off between precision and fairness of algorithmic decision making.
arXiv Detail & Related papers (2020-06-18T12:56:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.