Comparing and extending the use of defeasible argumentation with
quantitative data in real-world contexts
- URL: http://arxiv.org/abs/2206.13959v1
- Date: Tue, 28 Jun 2022 12:28:47 GMT
- Title: Comparing and extending the use of defeasible argumentation with
quantitative data in real-world contexts
- Authors: Lucas Rizzo and Luca Longo
- Abstract summary: A non-monotonic formalism is one that allows the retraction of conclusions or claims previously drawn from premises in light of new evidence.
This study contributes to the body of knowledge through the exploitation of defeasible argumentation and its comparison to similar approaches.
- Score: 5.482532589225552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dealing with uncertain, contradicting, and ambiguous information is still a
central issue in Artificial Intelligence (AI). As a result, many formalisms
have been proposed or adapted so as to consider non-monotonicity, with only a
limited number of works performing any sort of comparison among them. A
non-monotonic formalism is one that allows the retraction of conclusions or
claims previously drawn from premises in light of new evidence, offering some
desirable flexibility when dealing with uncertainty. This research article
focuses on evaluating the inferential capacity of defeasible argumentation, a
formalism particularly envisioned for modelling non-monotonic reasoning. In
addition to this, fuzzy reasoning and expert systems, extended for handling
non-monotonicity of reasoning, are selected and employed as baselines, due to
their vast and accepted use within the AI community. Computational trust was
selected as the domain of application of such models. Trust is an ill-defined
construct; hence, reasoning applied to the inference of trust can be seen as
non-monotonic. Inference models were designed to assign trust scalars to
editors of the Wikipedia project. In particular, argument-based models
demonstrated greater robustness than those built upon the baselines regardless of the
knowledge bases or datasets employed. This study contributes to the body of
knowledge through the exploitation of defeasible argumentation and its
comparison to similar approaches. The practical use of such approaches coupled
with a modular design that facilitates similar experiments was exemplified and
their respective implementations made publicly available on GitHub [120, 121].
This work adds to previous works, empirically enhancing the generalisability of
defeasible argumentation as a compelling approach to reason with quantitative
data and uncertain knowledge.
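As a rough illustration of the kind of reasoning described above, the sketch below encodes a toy defeasible argumentation model in Python: a few hypothetical arguments about a Wikipedia editor, an attack relation, a grounded-semantics style acceptance procedure, and a simple aggregation of the accepted arguments into a trust scalar. This is a minimal sketch under invented assumptions; the argument names, weights, and aggregation rule are not taken from the authors' knowledge bases or their GitHub implementations.

```python
# A minimal sketch, assuming a hypothetical knowledge base: abstract arguments
# about a single Wikipedia editor, an attack relation, grounded-semantics-style
# acceptance, and a toy aggregation into a trust scalar in [0, 1].
from dataclasses import dataclass


@dataclass(frozen=True)
class Argument:
    name: str
    trust_effect: float  # hypothetical contribution to the trust scalar


def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers are all defeated.

    `attacks` is a set of (attacker, target) pairs. Unattacked arguments are
    accepted first; anything attacked by an accepted argument is defeated,
    which may in turn free further arguments for acceptance.
    """
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for arg in arguments:
            if arg in accepted or arg in defeated:
                continue
            attackers = {a for (a, t) in attacks if t == arg}
            if attackers <= defeated:  # every attacker is already out
                accepted.add(arg)
                changed = True
        newly_defeated = {t for (a, t) in attacks if a in accepted} - defeated
        if newly_defeated:
            defeated |= newly_defeated
            changed = True
    return accepted


# Hypothetical arguments (names and weights invented for illustration only).
a1 = Argument("many reverted edits -> lower trust", -0.4)
a2 = Argument("long editing history -> higher trust", +0.3)
a3 = Argument("reverts were vandalism clean-up -> undercuts a1", 0.0)

arguments = {a1, a2, a3}
attacks = {(a3, a1)}  # new evidence retracts the low-trust conclusion

accepted = grounded_extension(arguments, attacks)
trust = 0.5 + sum(a.trust_effect for a in accepted)  # 0.5 baseline
trust = max(0.0, min(1.0, trust))
print(sorted(a.name for a in accepted))  # a1 is retracted; a2 and a3 survive
print(round(trust, 2))                   # 0.8 under these toy weights
```

Dropping a3 from the set (i.e., withdrawing the new evidence) leaves a1 unattacked, so it is accepted again and the trust score falls, which is precisely the non-monotonic retraction behaviour the formalism is meant to capture.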
Related papers
- Atomic Reasoning for Scientific Table Claim Verification [83.14588611859826]
Non-experts are susceptible to misleading claims based on scientific tables due to their high information density and perceived credibility.
Existing table claim verification models, including state-of-the-art large language models (LLMs), often struggle with precise fine-grained reasoning.
Inspired by Cognitive Load Theory, we propose that enhancing a model's ability to interpret table-based claims involves reducing cognitive load.
arXiv Detail & Related papers (2025-06-08T02:46:22Z)
- FactReasoner: A Probabilistic Approach to Long-Form Factuality Assessment for Large Language Models [59.171510592986735]
We propose FactReasoner, a new factuality assessor that relies on probabilistic reasoning to assess the factuality of a long-form generated response.
Our experiments on labeled and unlabeled benchmark datasets demonstrate clearly that FactReasoner improves considerably over state-of-the-art prompt-based approaches.
arXiv Detail & Related papers (2025-02-25T19:01:48Z)
- Causality can systematically address the monsters under the bench(marks) [64.36592889550431]
Benchmarks are plagued by various biases, artifacts, or leakage.
Models may behave unreliably due to poorly explored failure modes.
Causality offers an ideal framework to systematically address these challenges.
arXiv Detail & Related papers (2025-02-07T17:01:37Z)
- The Foundations of Tokenization: Statistical and Computational Concerns [51.370165245628975]
Tokenization is a critical step in the NLP pipeline.
Despite its recognized importance as a standard representation method in NLP, the theoretical underpinnings of tokenization are not yet fully understood.
The present paper contributes to addressing this theoretical gap by proposing a unified formal framework for representing and analyzing tokenizer models.
arXiv Detail & Related papers (2024-07-16T11:12:28Z)
- Reliability and Interpretability in Science and Deep Learning [0.0]
This article focuses on the comparison between traditional scientific models and Deep Neural Network (DNN) models.
It argues that the high complexity of DNN models hinders the estimation of their reliability and their prospect of long-term progress.
It also clarifies how interpretability is a precondition for assessing the reliability of any model, which cannot be based on statistical analysis alone.
arXiv Detail & Related papers (2024-01-14T20:14:07Z)
- Faithful Model Explanations through Energy-Constrained Conformal Counterfactuals [16.67633872254042]
Counterfactual explanations offer an intuitive and straightforward way to explain black-box models.
Existing work has primarily relied on surrogate models to learn how the input data is distributed.
We propose a novel algorithmic framework for generating Energy-Constrained Conformal Counterfactuals that are only as plausible as the model permits.
arXiv Detail & Related papers (2023-12-17T08:24:44Z)
- Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Learning a Structural Causal Model for Intuition Reasoning in Conversation [20.243323155177766]
Reasoning, a crucial aspect of NLP research, has not been adequately addressed by prevailing models.
We develop a conversation cognitive model (CCM) that explains how each utterance receives and activates channels of information.
By leveraging variational inference, it explores substitutes for implicit causes, addresses the issue of their unobservability, and reconstructs the causal representations of utterances through the evidence lower bounds.
arXiv Detail & Related papers (2023-05-28T13:54:09Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- Counterfactual Evaluation for Explainable AI [21.055319253405603]
We propose a new methodology to evaluate the faithfulness of explanations from the counterfactual reasoning perspective.
We introduce two algorithms to find the proper counterfactuals in both discrete and continuous scenarios and then use the acquired counterfactuals to measure faithfulness.
arXiv Detail & Related papers (2021-09-05T01:38:49Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)