Comparing and extending the use of defeasible argumentation with
quantitative data in real-world contexts
- URL: http://arxiv.org/abs/2206.13959v1
- Date: Tue, 28 Jun 2022 12:28:47 GMT
- Title: Comparing and extending the use of defeasible argumentation with
quantitative data in real-world contexts
- Authors: Lucas Rizzo and Luca Longo
- Abstract summary: A non-monotonic formalism is one that allows previous conclusions or claims, drawn from premises, to be retracted in light of new evidence.
This study contributes to the body of knowledge through the exploitation of defeasible argumentation and its comparison to similar approaches.
- Score: 5.482532589225552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dealing with uncertain, contradictory, and ambiguous information is still a
central issue in Artificial Intelligence (AI). As a result, many formalisms
have been proposed or adapted to account for non-monotonicity, although only a
limited number of works have compared them. A non-monotonic formalism is one
that allows previous conclusions or claims, drawn from premises, to be retracted
in light of new evidence, offering desirable flexibility when dealing with
uncertainty. This research article
focuses on evaluating the inferential capacity of defeasible argumentation, a
formalism particularly envisioned for modelling non-monotonic reasoning. In
addition, fuzzy reasoning and expert systems, extended to handle
non-monotonicity, are selected and employed as baselines, given their
widespread and accepted use within the AI community. Computational trust was
selected as the domain of application of such models. Trust is an ill-defined
construct; hence, reasoning applied to the inference of trust can be seen as
non-monotonic. Inference models were designed to assign trust scalars to
editors of the Wikipedia project. In particular, argument-based models
demonstrated greater robustness than those built upon the baselines,
regardless of the knowledge bases or datasets employed. This study
contributes to the body of
knowledge through the exploitation of defeasible argumentation and its
comparison to similar approaches. The practical use of such approaches,
coupled with a modular design that facilitates similar experiments, was
exemplified, and their respective implementations were made publicly
available on GitHub [120, 121].
This work adds to previous works, empirically enhancing the generalisability of
defeasible argumentation as a compelling approach for reasoning with
quantitative data and uncertain knowledge.
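To make the underlying mechanism concrete, the following is a minimal sketch of Dung-style abstract argumentation under grounded semantics, applied to a toy trust-inference knowledge base. The arguments, attacks, and scoring rule are illustrative assumptions, not the models released by the authors.

```python
# Minimal sketch: Dung-style abstract argumentation, grounded semantics.
# The toy knowledge base below is invented for illustration.

def grounded_extension(arguments, attacks):
    """Iterate the characteristic function from the empty set to its
    least fixed point, which for finite frameworks is the grounded
    extension (the most sceptical set of collectively acceptable
    arguments)."""
    extension = set()
    while True:
        # a is acceptable w.r.t. the current extension if every
        # attacker of a is itself attacked by the extension.
        acceptable = {
            a for a in arguments
            if all(any((c, b) in attacks for c in extension)
                   for b in arguments if (b, a) in attacks)
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Toy arguments about trusting a Wikipedia editor.
arguments = {"many_edits", "recent_reverts", "senior_account"}
attacks = {
    ("recent_reverts", "many_edits"),      # reverts undercut edit count
    ("senior_account", "recent_reverts"),  # seniority rebuts the reverts
}

accepted = grounded_extension(arguments, attacks)
# A crude trust scalar: fraction of pro-trust arguments that survive.
pro_trust = {"many_edits", "senior_account"}
print(accepted, len(accepted & pro_trust) / len(pro_trust))
# -> {'senior_account', 'many_edits'} 1.0
```

New evidence enters as extra arguments and attacks; conclusions defeated by it are retracted automatically on recomputation, which is exactly the non-monotonic behaviour the abstract describes.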
Related papers
- Reliability and Interpretability in Science and Deep Learning [0.0]
This article focuses on the comparison between traditional scientific models and Deep Neural Network (DNN) models.
It argues that the high complexity of DNN models hinders the estimation of their reliability and their prospect of long-term progress.
It also clarifies how interpretability is a precondition for assessing the reliability of any model, which cannot be based on statistical analysis alone.
arXiv Detail & Related papers (2024-01-14T20:14:07Z)
- Faithful Model Explanations through Energy-Constrained Conformal Counterfactuals [16.67633872254042]
Counterfactual explanations offer an intuitive and straightforward way to explain black-box models.
Existing work has primarily relied on surrogate models to learn how the input data is distributed.
We propose a novel algorithmic framework for generating Energy-Constrained Conformal Counterfactuals that are only as plausible as the model permits.
arXiv Detail & Related papers (2023-12-17T08:24:44Z)
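For background, the counterfactual explanations discussed in the entry above can be sketched generically as Wachter-style gradient search: perturb the input until the model predicts a target class while penalising distance from the original. This is a simplified illustration, not the paper's energy-constrained conformal method; the model and hyperparameters are placeholders.

```python
import torch

def counterfactual(model, x, target, lam=0.1, steps=200, lr=0.05):
    """Generic Wachter-style counterfactual search. `x` is a batched
    input of shape (1, n_features); `target` is the desired class."""
    x_cf = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (torch.nn.functional.cross_entropy(
                    model(x_cf), torch.tensor([target]))
                + lam * torch.norm(x_cf - x))  # stay close to x
        loss.backward()
        opt.step()
    return x_cf.detach()
```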
- Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from the inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z)
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
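For reference, the quantile regression at the core of the entry above can be illustrated with the standard pinball loss; the network, synthetic data, and quantile level below are placeholder assumptions, not the paper's estimator.

```python
import torch

def pinball_loss(pred, y, tau):
    """Standard quantile (pinball) loss for quantile level tau."""
    diff = y - pred
    return torch.mean(torch.maximum(tau * diff, (tau - 1) * diff))

# A placeholder nonlinear regressor for the 0.9 conditional quantile.
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
x = torch.rand(256, 1)
y = x + 0.1 * torch.randn(256, 1)  # synthetic data
for _ in range(500):
    opt.zero_grad()
    pinball_loss(net(x), y, tau=0.9).backward()
    opt.step()
```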
- Learning a Structural Causal Model for Intuition Reasoning in Conversation [20.243323155177766]
Reasoning, a crucial aspect of NLP research, has not been adequately addressed by prevailing models.
We develop a conversation cognitive model (CCM) that explains how each utterance receives and activates channels of information.
By leveraging variational inference, it explores substitutes for implicit causes, addresses the issue of their unobservability, and reconstructs the causal representations of utterances through the evidence lower bound.
arXiv Detail & Related papers (2023-05-28T13:54:09Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- Instance-Based Neural Dependency Parsing [56.63500180843504]
We develop neural models that possess an interpretable inference process for dependency parsing.
Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set.
arXiv Detail & Related papers (2021-09-28T05:30:52Z)
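The instance-based inference described in the entry above can be sketched as nearest-neighbour lookup over vectorised dependency edges; the feature vectors and labels below are invented for illustration, not the paper's learned representations.

```python
import numpy as np

def label_edge(edge_vec, train_vecs, train_labels):
    """Label a candidate dependency edge by copying the label of its
    most similar training edge (cosine similarity)."""
    sims = train_vecs @ edge_vec / (
        np.linalg.norm(train_vecs, axis=1) * np.linalg.norm(edge_vec))
    return train_labels[int(np.argmax(sims))]

# Invented edge feature vectors and their dependency labels.
train_vecs = np.array([[0.9, 0.1, 0.0],   # e.g. an "nsubj" edge
                       [0.1, 0.8, 0.1],   # e.g. an "obj" edge
                       [0.0, 0.2, 0.9]])  # e.g. an "amod" edge
train_labels = ["nsubj", "obj", "amod"]
print(label_edge(np.array([0.85, 0.2, 0.05]), train_vecs, train_labels))
# -> nsubj
```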
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimisation are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
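A generic latent-space variant of the GAN-based audit above can be sketched as follows: search a pretrained generator's latent space for an example that the classifier assigns to a chosen target class, so the adversarial example remains realistic by construction. The generator, classifier, and hyperparameters are stand-ins, not the paper's multi-objective setup.

```python
import torch

def plausible_attack(generator, classifier, target, z_dim=64,
                     steps=300, lr=0.05):
    """Search the generator's latent space for an input that the
    classifier labels as `target`; every candidate is a generator
    output, so it stays on the learned data manifold."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(
            classifier(generator(z)), torch.tensor([target]))
        loss.backward()
        opt.step()
    return generator(z).detach()
```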
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.