Modelling and Explaining Legal Case-based Reasoners through Classifiers
- URL: http://arxiv.org/abs/2210.11217v1
- Date: Thu, 20 Oct 2022 12:51:12 GMT
- Title: Modelling and Explaining Legal Case-based Reasoners through Classifiers
- Authors: Xinghan Liu, Emiliano Lorini, Antonino Rotolo, Giovanni Sartor
- Abstract summary: This paper brings together two lines of research: factor-based models of case-based reasoning (CBR) and the logical specification of classifiers.
- Score: 9.052343834206992
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper brings together two lines of research: factor-based models of
case-based reasoning (CBR) and the logical specification of classifiers.
Logical approaches to classifiers capture the connection between features and
outcomes in classifier systems. Factor-based reasoning is a popular approach to
reasoning by precedent in AI & Law. Horty (2011) has developed the factor-based
models of precedent into a theory of precedential constraint. In this paper we
combine the modal logic approach to classifiers and their explanations of Liu &
Lorini (2021), based on binary-input classifier logic (BCL), with Horty's
account of factor-based CBR, since both a classifier and CBR map sets of
features to decisions or classifications. We reformulate Horty's case bases in
the language of BCL and give several representation results. Furthermore, we
show how notions of CBR, e.g. reasons and preference between reasons, can be
analyzed in terms of notions of classifier systems.
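To make the factor-based setting concrete, here is a minimal Python sketch of Horty-style precedential constraint, where a case maps two sets of factors to an outcome. The encoding and the names (`Case`, `permits`) are illustrative assumptions for this page, not the paper's BCL formalization.

    from dataclasses import dataclass

    PLAINTIFF, DEFENDANT = "plaintiff", "defendant"

    @dataclass(frozen=True)
    class Case:
        # Illustrative encoding: a precedent is two sets of factors plus an outcome.
        pi_factors: frozenset     # factors favouring the plaintiff
        delta_factors: frozenset  # factors favouring the defendant
        outcome: str              # PLAINTIFF or DEFENDANT

        def winning_factors(self):
            return self.pi_factors if self.outcome == PLAINTIFF else self.delta_factors

        def losing_factors(self):
            return self.delta_factors if self.outcome == PLAINTIFF else self.pi_factors

    def permits(case_base, pi_factors, delta_factors, outcome):
        """Deciding the new situation for `outcome` is ruled out if some precedent
        went the other way and applies a fortiori: the new case has all factors of
        that precedent's winning side and no factors for `outcome` beyond those
        the precedent already overcame."""
        for c in case_base:
            if c.outcome == outcome:
                continue  # a precedent for the same side never blocks the decision
            new_for = pi_factors if c.outcome == PLAINTIFF else delta_factors
            new_against = delta_factors if c.outcome == PLAINTIFF else pi_factors
            if c.winning_factors() <= new_for and new_against <= c.losing_factors():
                return False  # the precedent forces the opposite decision
        return True

    # Toy usage: one precedent decided for the plaintiff on {f1, f2} against {f3}.
    cb = [Case(frozenset({"f1", "f2"}), frozenset({"f3"}), PLAINTIFF)]
    print(permits(cb, frozenset({"f1", "f2"}), frozenset(), DEFENDANT))  # False
    print(permits(cb, frozenset({"f1", "f2"}), frozenset(), PLAINTIFF))  # True

This corresponds roughly to the result-model reading of precedential constraint; the paper instead recasts such case bases in the modal language of BCL and proves representation results there.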
Related papers
- Rule-based Classifier Models [0.4915744683251149]
This paper presents an initial approach to incorporating sets of rules within a classifier.
We demonstrate how decisions for new cases can be inferred using this enriched rule-based framework.
arXiv Detail & Related papers (2025-05-01T11:59:16Z)
- Distributional Associations vs In-Context Reasoning: A Study of Feed-forward and Attention Layers [49.80959223722325]
We study the distinction between feed-forward and attention layers in large language models.
We find that feed-forward layers tend to learn simple distributional associations such as bigrams, while attention layers focus on in-context reasoning.
arXiv Detail & Related papers (2024-06-05T08:51:08Z)
- Algorithmic syntactic causal identification [0.8901073744693314]
Causal identification in causal Bayes nets (CBNs) is an important tool in causal inference.
Most existing formulations of causal identification using techniques such as d-separation and do-calculus are expressed within the mathematical language of classical probability theory.
We show that this restriction can be lifted by replacing the use of classical probability theory with the alternative axiomatic foundation of symmetric monoidal categories.
arXiv Detail & Related papers (2024-03-14T17:14:53Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- Machine Reading Comprehension using Case-based Reasoning [92.51061570746077]
We present an accurate and interpretable method for answer extraction in machine reading comprehension.
Our method builds upon the hypothesis that contextualized answers to similar questions share semantic similarities with each other.
arXiv Detail & Related papers (2023-05-24T07:09:56Z)
- APOLLO: A Simple Approach for Adaptive Pretraining of Language Models for Logical Reasoning [73.3035118224719]
We propose APOLLO, an adaptively pretrained language model that has improved logical reasoning abilities.
APOLLO performs comparably on ReClor and outperforms baselines on LogiQA.
arXiv Detail & Related papers (2022-12-19T07:40:02Z)
- Feature Necessity & Relevancy in ML Classifier Explanations [5.232306238197686]
Given a machine learning (ML) model and a prediction, explanations can be defined as sets of features which are sufficient for the prediction.
It is also critical to understand whether sensitive features can occur in some explanation, or whether a non-interesting feature must occur in all explanations.
arXiv Detail & Related papers (2022-10-27T12:12:45Z)
- Perturbations and Subpopulations for Testing Robustness in Token-Based Argument Unit Recognition [6.502694770864571]
Argument Unit Recognition and Classification aims at identifying argument units in text and classifying them as pro or con.
One of the design choices that needs to be made when developing systems for this task is the unit of classification: segments of tokens or full sentences.
Previous research suggests that fine-tuning language models on the token-level yields more robust results for classifying sentences compared to training on sentences directly.
We reproduce the study that originally made this claim and further investigate what exactly token-based systems learned better compared to sentence-based ones.
arXiv Detail & Related papers (2022-09-29T13:44:28Z)
- Argumentative Explanations for Pattern-Based Text Classifiers [15.81939090849456]
We focus on explanations for a specific interpretable model, namely pattern-based logistic regression (PLR) for binary text classification.
We propose AXPLR, a novel explanation method using (forms of) computational argumentation to generate explanations.
arXiv Detail & Related papers (2022-05-22T21:16:49Z)
- Knowledge Base Question Answering by Case-based Reasoning over Subgraphs [81.22050011503933]
We show that our model answers queries requiring complex reasoning patterns more effectively than existing KG completion algorithms.
The proposed model outperforms or performs competitively with state-of-the-art models on several KBQA benchmarks.
arXiv Detail & Related papers (2022-02-22T01:34:35Z)
- A Formalisation of Abstract Argumentation in Higher-Order Logic [77.34726150561087]
We present an approach for representing abstract argumentation frameworks based on an encoding into classical higher-order logic.
This provides a uniform framework for computer-assisted assessment of abstract argumentation frameworks using interactive and automated reasoning tools.
arXiv Detail & Related papers (2021-10-18T10:45:59Z)
- Pairwise Supervision Can Provably Elicit a Decision Boundary [84.58020117487898]
Similarity learning is the problem of eliciting useful representations by predicting the relationship between a pair of patterns.
We show that similarity learning is capable of solving binary classification by directly eliciting a decision boundary.
arXiv Detail & Related papers (2020-06-11T05:35:16Z)
- An ASP-Based Approach to Counterfactual Explanations for Classification [0.0]
We propose answer-set programs that specify and compute counterfactual interventions as a basis for causality-based explanations to decisions produced by classification models.
They can be applied to black-box models as well as to models that can be specified as logic programs, such as rule-based classifiers (a plain-Python sketch of the counterfactual idea follows this list).
arXiv Detail & Related papers (2020-04-28T01:36:26Z)
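The counterfactual-intervention idea from the last entry above can also be stated outside answer-set programming. Below is a brute-force Python sketch under assumed names (`counterfactuals`, a toy `clf`); it is not the paper's ASP encoding, only the underlying notion of a minimum-size feature flip that changes the decision.

    from itertools import combinations

    def counterfactuals(classify, instance, features):
        """Return all minimum-size sets of binary features whose flip changes
        the classifier's decision on `instance` (brute-force search)."""
        original = classify(instance)
        for size in range(1, len(features) + 1):
            found = [set(combo)
                     for combo in combinations(features, size)
                     if classify({f: (not v if f in combo else v)
                                  for f, v in instance.items()}) != original]
            if found:
                return found  # smallest interventions that flip the outcome
        return []

    # Toy rule-based classifier: positive iff both f1 and f2 hold.
    clf = lambda x: x["f1"] and x["f2"]
    print(counterfactuals(clf, {"f1": True, "f2": True, "f3": False},
                          ["f1", "f2", "f3"]))  # -> [{'f1'}, {'f2'}]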
This list is automatically generated from the titles and abstracts of the papers on this site.