Case-Based Reasoning with Language Models for Classification of Logical
Fallacies
- URL: http://arxiv.org/abs/2301.11879v2
- Date: Wed, 17 May 2023 20:13:59 GMT
- Title: Case-Based Reasoning with Language Models for Classification of Logical
Fallacies
- Authors: Zhivar Sourati, Filip Ilievski, Hông-Ân Sandlin, Alain Mermoud
- Abstract summary: We propose a Case-Based Reasoning method that classifies new cases of logical fallacy.
Our experiments indicate that Case-Based Reasoning improves the accuracy and generalizability of language models.
- Score: 3.511369967593153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ease and speed of spreading misinformation and propaganda on the Web
motivate the need to develop trustworthy technology for detecting fallacies in
natural language arguments. However, state-of-the-art language modeling methods
exhibit a lack of robustness on tasks like logical fallacy classification that
require complex reasoning. In this paper, we propose a Case-Based Reasoning
method that classifies new cases of logical fallacy by language-modeling-driven
retrieval and adaptation of historical cases. We design four complementary
strategies to enrich input representation for our model, based on external
information about goals, explanations, counterarguments, and argument
structure. Our experiments in in-domain and out-of-domain settings indicate
that Case-Based Reasoning improves the accuracy and generalizability of
language models. Our ablation studies suggest that representations of similar
cases have a strong impact on the model performance, that models perform well
with fewer retrieved cases, and that the size of the case database has a
negligible effect on the performance. Finally, we dive deeper into the
relationship between the properties of the retrieved cases and the model
performance.
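To make the retrieve-and-adapt loop concrete, here is a minimal sketch of case-based classification over sentence embeddings, assuming the sentence-transformers library; the encoder choice, the toy case database, and the majority-vote adaptation step are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch of case-based fallacy classification: retrieve the most
# similar labeled cases by embedding similarity, then adapt their labels.
# The encoder name and majority-vote adaptation are assumptions, not the
# authors' exact pipeline.
from collections import Counter

from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder

case_base = [  # toy case database: (argument, fallacy label)
    ("Everyone believes it, so it must be true.", "ad populum"),
    ("He is a bad person, so his argument is wrong.", "ad hominem"),
    ("If we allow this, society will collapse.", "slippery slope"),
]

def classify(argument: str, k: int = 2) -> str:
    """Label a new argument by majority vote over its k nearest cases."""
    texts = [c[0] for c in case_base]
    query_emb = encoder.encode(argument, convert_to_tensor=True)
    case_embs = encoder.encode(texts, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, case_embs)[0]
    top = scores.topk(k).indices.tolist()
    votes = Counter(case_base[i][1] for i in top)
    return votes.most_common(1)[0][0]

print(classify("Nobody disagrees with this, so it is obviously correct."))
```

The paper's enrichment strategies (goals, explanations, counterarguments, argument structure) would amount to augmenting the retrieved case texts before encoding.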
Related papers
- Reasoning Elicitation in Language Models via Counterfactual Feedback [17.908819732623716]
We derive novel metrics that balance accuracy in factual and counterfactual questions.
We propose several fine-tuning approaches that aim to elicit better reasoning mechanisms.
We evaluate the performance of the fine-tuned language models in a variety of realistic scenarios.
arXiv Detail & Related papers (2024-10-02T15:33:30Z)
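The balancing metric above is left abstract; a minimal sketch of one plausible instantiation, an illustrative harmonic mean that is not necessarily the paper's metric:

```python
# One plausible way to balance factual and counterfactual accuracy
# (an illustrative harmonic mean, not necessarily the paper's metric).
def balanced_reasoning_score(factual_acc: float, counterfactual_acc: float) -> float:
    """Harmonic mean: high only when the model answers both question types well."""
    if factual_acc + counterfactual_acc == 0:
        return 0.0
    return 2 * factual_acc * counterfactual_acc / (factual_acc + counterfactual_acc)

print(balanced_reasoning_score(0.9, 0.4))  # penalizes the weaker side: ~0.55
```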
- On the Tip of the Tongue: Analyzing Conceptual Representation in Large Language Models with Reverse-Dictionary Probe [36.65834065044746]
We use in-context learning to guide the models to generate the term for an object concept implied in a linguistic description.
Experiments suggest that conceptual inference ability, as probed by the reverse-dictionary task, predicts a model's general reasoning performance.
arXiv Detail & Related papers (2024-02-22T09:45:26Z)
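A minimal sketch of what a reverse-dictionary probe prompt might look like under in-context learning; the exemplars are illustrative, and the completion would come from whatever LLM is being probed.

```python
# Sketch of the reverse-dictionary probe: in-context examples map a
# description to its term, then the model completes the final query.
# The exemplars are illustrative, not the paper's exact prompt.
FEW_SHOT = """\
Description: a large African animal with a long trunk -> Term: elephant
Description: an instrument for measuring temperature -> Term: thermometer
Description: {description} -> Term:"""

def reverse_dictionary_prompt(description: str) -> str:
    return FEW_SHOT.format(description=description)

prompt = reverse_dictionary_prompt("a building where books are kept and lent out")
print(prompt)  # feed this to any LLM; the expected completion is "library"
```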
- Large Language Models are Few-Shot Training Example Generators: A Case Study in Fallacy Recognition [49.38757847011105]
Computational fallacy recognition faces challenges due to diverse genres, domains, and types of fallacies found in datasets.
We aim to enhance existing models for fallacy recognition by incorporating additional context and by leveraging large language models to generate synthetic data.
Our evaluation results demonstrate consistent improvements across fallacy types, datasets, and generators.
arXiv Detail & Related papers (2023-11-16T04:17:47Z)
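A minimal sketch of the synthetic-data idea: prompt an LLM with seed examples of a fallacy type and ask for new ones to augment the training set. The template is an assumption, not the paper's exact prompt.

```python
# Sketch of using an LLM as a few-shot training-example generator: seed
# examples of a fallacy type are shown, and the model is asked to produce
# new ones. The template is illustrative, not the paper's exact prompt.
GEN_TEMPLATE = """\
The following are examples of the "{fallacy}" fallacy:
{seeds}
Write {n} new, distinct examples of the same fallacy, one per line."""

def generation_prompt(fallacy: str, seed_examples: list[str], n: int = 5) -> str:
    seeds = "\n".join(f"- {s}" for s in seed_examples)
    return GEN_TEMPLATE.format(fallacy=fallacy, seeds=seeds, n=n)

print(generation_prompt(
    "false dilemma",
    ["You are either with us or against us.",
     "If you don't buy this, you must hate saving money."],
))
```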
- Enhancing Argument Structure Extraction with Efficient Leverage of Contextual Information [79.06082391992545]
We propose an Efficient Context-aware model (ECASE) that fully exploits contextual information.
We introduce a sequence-attention module and distance-weighted similarity loss to aggregate contextual information and argumentative information.
Our experiments on five datasets from various domains demonstrate that our model achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-10-08T08:47:10Z)
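The distance-weighted similarity loss is the most concrete piece here; below is a minimal PyTorch sketch of the general idea, pulling together representations of nearby sentences with weights that decay in distance. A sketch of the idea only, not the ECASE implementation.

```python
# Illustrative distance-weighted similarity loss: sentence representations
# that are close in the document are encouraged to be similar, with a
# weight that decays with their distance. Not the ECASE implementation.
import torch
import torch.nn.functional as F

def distance_weighted_similarity_loss(reprs: torch.Tensor) -> torch.Tensor:
    """reprs: (num_sentences, hidden_dim) contextual sentence vectors."""
    n = reprs.size(0)
    sim = F.cosine_similarity(reprs.unsqueeze(1), reprs.unsqueeze(0), dim=-1)
    idx = torch.arange(n)
    dist = (idx.unsqueeze(1) - idx.unsqueeze(0)).abs().float()
    weight = 1.0 / (1.0 + dist)          # nearer sentences weigh more
    mask = ~torch.eye(n, dtype=torch.bool)
    # maximize weighted similarity => minimize its negative mean
    return -(weight[mask] * sim[mask]).mean()

loss = distance_weighted_similarity_loss(torch.randn(6, 128))
print(loss.item())
```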
- Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
arXiv Detail & Related papers (2023-10-03T00:57:26Z)
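A minimal sketch of an analogical prompt: instead of hand-written chain-of-thought exemplars, the model is asked to self-generate relevant exemplars before solving. The wording is illustrative, not the paper's exact template.

```python
# Sketch of analogical prompting: the model recalls relevant problems
# itself before solving. Illustrative wording, not the paper's template.
ANALOGICAL_TEMPLATE = """\
Problem: {problem}

First, recall three relevant problems and briefly describe how each was
solved. Then, drawing on those analogies, solve the problem above step by
step and state the final answer."""

print(ANALOGICAL_TEMPLATE.format(
    problem="What is the area of a square whose vertices include (0, 0) and (3, 4)?"
))
```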
- Estimating the Causal Effects of Natural Logic Features in Neural NLI Models [2.363388546004777]
We home in on specific patterns of reasoning with enough structure and regularity to identify and quantify systematic reasoning failures in widely-used models.
We apply causal effect estimation strategies to measure the effect of context interventions.
Following related work on causal analysis of NLP models in different settings, we adapt the methodology for the NLI task to construct comparative model profiles.
arXiv Detail & Related papers (2023-05-15T12:01:09Z)
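A minimal sketch of a single context intervention, assuming an off-the-shelf MNLI model from Hugging Face transformers; the paper aggregates many such effects into comparative model profiles.

```python
# Sketch of a context intervention: change one quantifier in the premise
# and measure how the entailment score shifts. An illustrative effect
# estimate on one example, not the paper's full methodology.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

def entailment_score(premise: str, hypothesis: str) -> float:
    scores = nli([{"text": premise, "text_pair": hypothesis}], top_k=None)[0]
    return next(d["score"] for d in scores if d["label"] == "ENTAILMENT")

hypothesis = "Rex, a dog, barks."
effect = (entailment_score("All dogs bark.", hypothesis)
          - entailment_score("Some dogs bark.", hypothesis))
print(f"effect of the 'all -> some' intervention: {effect:+.3f}")
```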
- A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models [81.15974174627785]
We study the behavior of language models in terms of robustness and sensitivity to direct interventions in the input space.
Our analysis shows that robustness does not appear to continuously improve as a function of size, but the GPT-3 Davinci models (175B) achieve a dramatic improvement in both robustness and sensitivity compared to all other GPT variants.
arXiv Detail & Related papers (2022-10-21T15:12:37Z)
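A minimal sketch of a direct operand intervention on a math word problem; the problem template and the hard-coded "model answers" are illustrative stand-ins for real LLM calls.

```python
# Sketch of a direct intervention: change one operand, recompute the
# ground truth, and check whether a model's answer tracks the change.
# The template and the stand-in model answers are assumptions.
def make_problem(a: int, b: int) -> tuple[str, int]:
    return f"Tom has {a} apples and buys {b} more. How many now?", a + b

base_q, base_truth = make_problem(7, 5)
intervened_q, intervened_truth = make_problem(7, 9)  # operand intervention

model_answers = {base_q: 12, intervened_q: 16}  # stand-in for real LLM calls
sensitive = (model_answers[base_q] == base_truth
             and model_answers[intervened_q] == intervened_truth)
print(f"model tracks the intervention: {sensitive}")
```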
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- On the Transferability of Adversarial Attacks against Neural Text Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
We derive word replacement rules that can be used for model diagnostics from these adversarial examples.
arXiv Detail & Related papers (2020-11-17T10:45:05Z)
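A minimal sketch of the genetic search over model ensembles: each individual is a bitmask choosing which source models join the ensemble, and fitness measures how well the ensemble's adversarial examples transfer. The toy fitness function stands in for the real attack-and-evaluate loop.

```python
# Sketch of a genetic algorithm over model ensembles. The fitness below
# is a toy stand-in for crafting adversarial examples against the chosen
# ensemble and counting how many held-out models they fool.
import random

NUM_MODELS, POP, GENS = 8, 20, 30
random.seed(0)

def fitness(mask: tuple[int, ...]) -> float:
    # Stand-in: pretend diverse, mid-sized ensembles transfer best.
    k = sum(mask)
    return k * (NUM_MODELS - k) + random.random()

def mutate(mask):
    i = random.randrange(NUM_MODELS)
    return mask[:i] + (1 - mask[i],) + mask[i + 1:]

def crossover(a, b):
    cut = random.randrange(1, NUM_MODELS)
    return a[:cut] + b[cut:]

pop = [tuple(random.randint(0, 1) for _ in range(NUM_MODELS)) for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in elite]

print("best ensemble mask:", max(pop, key=fitness))
```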
- CausaLM: Causal Model Explanation Through Counterfactual Language Models [33.29636213961804]
CausaLM is a framework for producing causal model explanations using counterfactual language representation models.
We show that language representation models such as BERT can effectively learn a counterfactual representation for a given concept of interest.
A byproduct of our method is a language representation model that is unaffected by the tested concept.
arXiv Detail & Related papers (2020-05-27T15:06:35Z)
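A minimal sketch of the resulting effect estimate: compare a classifier's predictions over representations from the original language model versus the concept-forgetting counterfactual model, and attribute the average gap to the concept. The probability arrays are illustrative, not real model runs.

```python
# Sketch of a CausaLM-style estimate: the mean absolute shift in predicted
# probability between the original and the counterfactual (concept-
# forgetting) representation model. The arrays are illustrative values.
import numpy as np

def causal_concept_effect(p_original: np.ndarray, p_counterfactual: np.ndarray) -> float:
    """Mean absolute shift in predicted probability across examples."""
    return float(np.mean(np.abs(p_original - p_counterfactual)))

p_orig = np.array([0.91, 0.15, 0.78, 0.60])  # classifier over original reprs
p_cf = np.array([0.55, 0.12, 0.44, 0.58])    # over concept-forgetting reprs
print(f"estimated effect of the concept: {causal_concept_effect(p_orig, p_cf):.3f}")
```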