Generative Interpretation
- URL: http://arxiv.org/abs/2308.06907v1
- Date: Mon, 14 Aug 2023 02:59:27 GMT
- Title: Generative Interpretation
- Authors: Yonathan A. Arbel and David Hoffman
- Abstract summary: We introduce generative interpretation, a new approach to estimating contractual meaning using large language models.
We show that AI models can help factfinders ascertain ordinary meaning in context, quantify ambiguity, and fill gaps in parties' agreements.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce generative interpretation, a new approach to estimating
contractual meaning using large language models. As AI triumphalism is the
order of the day, we proceed by way of grounded case studies, each illustrating
the capabilities of these novel tools in distinct ways. Taking well-known
contracts opinions, and sourcing the actual agreements that they adjudicated,
we show that AI models can help factfinders ascertain ordinary meaning in
context, quantify ambiguity, and fill gaps in parties' agreements. We also
illustrate how models can calculate the probative value of individual pieces of
extrinsic evidence. After offering best practices for the use of these models
given their limitations, we consider their implications for judicial practice
and contract theory. Using LLMs permits courts to estimate what the parties
intended cheaply and accurately, and as such generative interpretation
unsettles the current interpretative stalemate. Their use responds to
efficiency-minded textualists and justice-oriented contextualists, who argue
about whether parties will prefer cost and certainty or accuracy and fairness.
Parties--and courts--would prefer a middle path, in which adjudicators strive
to predict what the contract really meant, admitting just enough context to
approximate reality while avoiding unguided and biased assimilation of
evidence. As generative interpretation offers this possibility, we argue it can
become the new workhorse of contractual interpretation.
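One way to read the abstract's "quantify ambiguity" step can be sketched as follows: sample many candidate readings of a disputed clause from a language model, bucket them by meaning, and score disagreement as normalized entropy. This is an illustrative assumption, not the authors' actual procedure; the function name and the sample-then-bucket setup are hypothetical.

```python
from collections import Counter
import math

def ambiguity_score(readings):
    """Normalized Shannon entropy over sampled readings of a clause:
    0.0 = every sample agrees (unambiguous), 1.0 = maximal disagreement."""
    counts = Counter(readings)
    n = len(readings)
    if len(counts) <= 1:
        return 0.0
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy / math.log2(len(counts))  # normalize to [0, 1]

# Example: ten sampled paraphrases of a disputed clause, bucketed into
# two competing readings by hand (8 favor reading A, 2 favor reading B).
samples = ["reading_A"] * 8 + ["reading_B"] * 2
score = ambiguity_score(samples)
```

A score near 0 would suggest the clause has one dominant ordinary meaning; a score near 1 would suggest genuine ambiguity that might warrant admitting extrinsic evidence.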
Related papers
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance the representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- Towards Explainability in Legal Outcome Prediction Models [64.00172507827499]
We argue that precedent is a natural way of facilitating explainability for legal NLP models.
By developing a taxonomy of legal precedent, we are able to compare human judges and neural models.
We find that while the models learn to predict outcomes reasonably well, their use of precedent is unlike that of human judges.
arXiv Detail & Related papers (2024-03-25T15:15:41Z)
- Towards Non-Adversarial Algorithmic Recourse [20.819764720587646]
It has been argued that adversarial examples, as opposed to counterfactual explanations, have a unique characteristic in that they lead to a misclassification compared to the ground truth.
We introduce non-adversarial algorithmic recourse and outline why in high-stakes situations, it is imperative to obtain counterfactual explanations that do not exhibit adversarial characteristics.
arXiv Detail & Related papers (2024-03-15T14:18:21Z)
- Wigner and friends, a map is not the territory! Contextuality in multi-agent paradoxes [0.0]
Multi-agent scenarios can show contradictory results when a non-classical formalism must deal with the knowledge between agents.
Even if knowledge is treated in a relational way with the concept of trust, contradictory results can still be found in multi-agent scenarios.
arXiv Detail & Related papers (2023-05-12T22:51:13Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Discovering Explanatory Sentences in Legal Case Decisions Using Pre-trained Language Models [0.7614628596146599]
Legal texts routinely use concepts that are difficult to understand.
Lawyers elaborate on the meaning of such concepts by, among other things, carefully investigating how they have been used in the past.
Finding text snippets that mention a particular concept in a useful way is tedious, time-consuming, and, hence, expensive.
arXiv Detail & Related papers (2021-12-14T04:56:39Z)
- Detecting Logical Relation In Contract Clauses [94.85352502638081]
We develop an approach to automate the extraction of logical relations between clauses in a contract.
The resulting approach should help contract authors detect potential logical conflicts between clauses.
arXiv Detail & Related papers (2021-11-02T19:26:32Z)
- ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts [39.75232199445175]
We propose "document-level natural language inference (NLI) for contracts".
A system is given a set of hypotheses and a contract, and it is asked to classify whether each hypothesis is "entailed by", "contradicting to" or "not mentioned by" (neutral to) the contract.
We release the largest corpus to date consisting of 607 annotated contracts.
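The three-way labeling scheme above can be sketched as a minimal data model. The class and field names here are illustrative, not the dataset's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class NLILabel(Enum):
    # The three classes from the task definition above.
    ENTAILED = "entailed"
    CONTRADICTED = "contradicting"
    NOT_MENTIONED = "not_mentioned"

@dataclass
class ContractNLIExample:
    contract_text: str  # the full contract, judged as a whole document
    hypothesis: str     # a fixed statement to test against the contract
    label: NLILabel

example = ContractNLIExample(
    contract_text="The Receiving Party shall not disclose Confidential Information.",
    hypothesis="The Receiving Party may share Confidential Information with employees.",
    label=NLILabel.NOT_MENTIONED,
)
```

Unlike sentence-level NLI, the premise here is an entire contract, so a system must locate the relevant clause (if any) before classifying.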
arXiv Detail & Related papers (2021-10-05T03:22:31Z)
- Prompting Contrastive Explanations for Commonsense Reasoning Tasks [74.7346558082693]
Large pretrained language models (PLMs) can achieve near-human performance on commonsense reasoning tasks.
We show how to use these same models to generate human-interpretable evidence.
arXiv Detail & Related papers (2021-06-12T17:06:13Z)
- XAI Handbook: Towards a Unified Framework for Explainable AI [5.716475756970092]
The field of explainable AI (XAI) has quickly become a thriving and prolific community.
Each new contribution seems to rely on its own (and often intuitive) version of terms like "explanation" and "interpretation".
We propose a theoretical framework that not only provides concrete definitions for these terms but also outlines all the steps necessary to produce explanations and interpretations.
arXiv Detail & Related papers (2021-05-14T07:28:21Z)
- Are Interpretations Fairly Evaluated? A Definition Driven Pipeline for Post-Hoc Interpretability [54.85658598523915]
We propose to have a concrete definition of interpretation before we could evaluate faithfulness of an interpretation.
We find that although interpretation methods perform differently under a certain evaluation metric, such a difference may not result from interpretation quality or faithfulness.
arXiv Detail & Related papers (2020-09-16T06:38:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.