Predicting Court Decisions for Alimony: Avoiding Extra-legal Factors in Decision made by Judges and Not Understandable AI Models
- URL: http://arxiv.org/abs/2007.04824v1
- Date: Thu, 9 Jul 2020 14:14:20 GMT
- Title: Predicting Court Decisions for Alimony: Avoiding Extra-legal Factors in Decision made by Judges and Not Understandable AI Models
- Authors: Fabrice Muhlenbach, Long Nguyen Phuoc and Isabelle Sayn
- Abstract summary: We present an explainable AI model designed for this purpose, combining a random forest classifier with a regression model.
Using a large number of court decisions in matters of divorce produced by French jurisdictions, we seek to identify whether extra-legal factors may be present in the decisions taken by the judges.
- Score: 0.02578242050187029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advent of machine learning techniques has made it possible to obtain
predictive systems that have overturned traditional legal practices. However,
rather than producing systems that seek to replace humans, searching for the
determinants of a court decision can yield a better understanding of the
decision mechanisms applied by the judge. By using a large number of court
decisions in matters of divorce produced by French jurisdictions, and by
examining the variables that determine whether alimony is awarded and, if so,
its amount, we seek to identify whether extra-legal factors may be present in
the decisions taken by the judges. From this perspective, we present an
explainable AI model designed for this purpose, combining a random forest
classifier with a regression model, as a complementary tool to existing
decision-making scales or guidelines created by practitioners.
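The two-stage design described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data is synthetic, and the feature names and stages are assumptions based only on the abstract's description of combining a random forest classifier with a regression model.

```python
# Hedged sketch of a two-stage alimony model: classify whether alimony is
# awarded, then regress the amount on the awarded cases only.
# All data here is synthetic; column semantics are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for case features (e.g., incomes, marriage duration).
X = rng.normal(size=(500, 4))
awarded = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0
amount = np.where(awarded, 100 * np.abs(X[:, 0]) + 50, 0.0)

# Stage 1: classify whether alimony is awarded at all.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, awarded)

# Stage 2: regress the amount, trained only on cases where it was awarded.
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(
    X[awarded], amount[awarded]
)

def predict_alimony(x):
    """Predict 0 if no award is expected, otherwise the regressed amount."""
    if clf.predict(x.reshape(1, -1))[0]:
        return float(reg.predict(x.reshape(1, -1))[0])
    return 0.0

# Feature importances give one coarse explainability signal: a legally
# irrelevant feature with high importance would hint at extra-legal factors.
print(clf.feature_importances_)
```

Inspecting which features drive the two stages is one simple way such a model can serve as a complement to practitioner guidelines rather than a replacement for the judge.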
Related papers
- Using LLMs to Discover Legal Factors [0.6249768559720122]
We use large language models to discover factors that effectively represent a legal domain.
Our method takes as input raw court opinions and produces a set of factors and associated definitions.
We demonstrate that a semi-automated approach, incorporating minimal human involvement, produces factor representations that can predict case outcomes with moderate success.
arXiv Detail & Related papers (2024-10-10T00:42:10Z)
- (Un)certainty of (Un)fairness: Preference-Based Selection of Certainly Fair Decision-Makers [0.0]
Fairness metrics are used to assess discrimination and bias in decision-making processes across various domains.
We quantify the uncertainty of the disparity to enhance discrimination assessments.
We define preferences over decision-makers and use brute-force search to choose the optimal one.
arXiv Detail & Related papers (2024-09-19T11:44:03Z)
- Towards Explainability in Legal Outcome Prediction Models [64.00172507827499]
We argue that precedent is a natural way of facilitating explainability for legal NLP models.
By developing a taxonomy of legal precedent, we are able to compare human judges and neural models.
We find that while the models learn to predict outcomes reasonably well, their use of precedent is unlike that of human judges.
arXiv Detail & Related papers (2024-03-25T15:15:41Z)
- The Ethics of Automating Legal Actors [58.81546227716182]
We argue that automating the role of the judge raises difficult ethical challenges, in particular for common law legal systems.
Our argument follows from the social role of the judge in actively shaping the law, rather than merely applying it.
Even if the models could achieve human-level capabilities, ethical concerns inherent in the automation of the legal process would remain.
arXiv Detail & Related papers (2023-12-01T13:48:46Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- A Justice-Based Framework for the Analysis of Algorithmic Fairness-Utility Trade-Offs [0.0]
In prediction-based decision-making systems, different perspectives can be at odds.
The short-term business goals of the decision makers are often in conflict with the decision subjects' wish to be treated fairly.
We propose a framework to make these value-laden choices clearly visible.
arXiv Detail & Related papers (2022-06-06T20:31:55Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Legal perspective on possible fairness measures - A legal discussion using the example of hiring decisions (preprint) [0.0]
We explain the different kinds of fairness concepts that might be applicable for the specific application of hiring decisions.
We analyze their pros and cons with regard to the respective fairness interpretation and evaluate them from a legal perspective.
arXiv Detail & Related papers (2021-08-16T06:41:39Z)
- ILDC for CJPE: Indian Legal Documents Corpus for Court Judgment Prediction and Explanation [3.285073688021526]
We propose the task of Court Judgment Prediction and Explanation (CJPE).
CJPE requires an automated system to predict an explainable outcome of a case.
Our best prediction model has an accuracy of 78% versus 94% for human legal experts.
arXiv Detail & Related papers (2021-05-28T03:07:32Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves the algorithm's domain adaptation and helps propagate the experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
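The decision-rule elicitation idea in the entry above can be sketched minimally: alongside labels, an expert supplies simple if-then rules, which are then applied to unlabeled data to generate additional training examples. The rule format, feature names, and thresholds below are illustrative assumptions, not the paper's actual method.

```python
# Hedged sketch: expert rules as callables that return a label or None
# (abstain); applying them to unlabeled data yields pseudo-labeled examples.

def elicit_pseudo_labels(rules, unlabeled):
    """Apply expert if-then rules to unlabeled feature dicts."""
    pseudo = []
    for x in unlabeled:
        for rule in rules:
            y = rule(x)
            if y is not None:
                pseudo.append((x, y))
                break  # first matching rule wins
    return pseudo

# Hypothetical expert rules for an income-based decision.
rules = [
    lambda x: 1 if x["income"] < 1000 else None,
    lambda x: 0 if x["income"] > 5000 else None,
]

unlabeled = [{"income": 800}, {"income": 6000}, {"income": 3000}]
print(elicit_pseudo_labels(rules, unlabeled))
# → [({'income': 800}, 1), ({'income': 6000}, 0)]
# The middle case matches no rule and stays unlabeled.
```

Letting rules abstain is the key design choice in this sketch: the model still learns from data where no expert rule applies, while rule-covered regions carry the experts' knowledge into the target domain.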
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.