Annotating Implicit Reasoning in Arguments with Causal Links
- URL: http://arxiv.org/abs/2110.13692v1
- Date: Tue, 26 Oct 2021 13:28:53 GMT
- Title: Annotating Implicit Reasoning in Arguments with Causal Links
- Authors: Keshav Singh, Naoya Inoue, Farjana Sultana Mim, Shoichi Naitoh and
Kentaro Inui
- Abstract summary: We focus on identifying implicit knowledge in the form of argumentation knowledge.
Inspired by the Argument from Consequences scheme, we propose a semi-structured template to represent such argumentation knowledge.
We show how to collect and filter high-quality implicit reasonings via crowdsourcing.
- Score: 34.77514899468729
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most of the existing work that focuses on the identification of implicit
knowledge in arguments generally represents implicit knowledge in the form of
commonsense or factual knowledge. However, such knowledge is not sufficient to
understand the implicit reasoning link between individual argumentative
components (i.e., claim and premise). In this work, we focus on identifying the
implicit knowledge in the form of argumentation knowledge which can help in
understanding the reasoning link in arguments. Inspired by the Argument
from Consequences scheme, we propose a semi-structured template to represent
such argumentation knowledge that explicates the implicit reasoning in
arguments via causality. We create a novel two-phase annotation process with
simplified guidelines and show how to collect and filter high-quality implicit
reasonings via crowdsourcing. We find substantial inter-annotator agreement for
quality evaluation between experts, but find evidence that raises some
questions about the feasibility of collecting high-quality semi-structured
implicit reasoning through our crowdsourcing process. We release our
materials (i.e., crowdsourcing guidelines and collected implicit reasonings) to
facilitate further research towards the structured representation of
argumentation knowledge.
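To make the idea of a semi-structured causal template more concrete, below is a minimal, hypothetical sketch in Python. The field names and the example sentences are illustrative assumptions rather than the authors' actual annotation schema; the sketch only mirrors the Argument from Consequences pattern described in the abstract, where an action leads to a consequence that is judged good or bad.

```python
# Hypothetical illustration of a semi-structured implicit-reasoning template
# inspired by the Argument from Consequences scheme. Field names and the
# example content are assumptions for illustration, not the paper's schema.
from dataclasses import dataclass


@dataclass
class ImplicitReasoning:
    premise: str      # stated argumentative component
    claim: str        # stated stance the premise is meant to support
    cause: str        # action or state highlighted by the premise
    consequence: str  # outcome the cause is assumed to bring about
    polarity: str     # "good" or "bad" judgment of the consequence

    def as_text(self) -> str:
        # Render the implicit causal link as one natural-language statement.
        return f"{self.cause} leads to {self.consequence}, which is {self.polarity}."


example = ImplicitReasoning(
    premise="School uniforms reduce visible income differences among students.",
    claim="Schools should require uniforms.",
    cause="requiring uniforms",
    consequence="less social pressure related to clothing",
    polarity="good",
)
print(example.as_text())
```

A structure along these lines is what "semi-structured" suggests: the causal link is explicit and machine-readable, while the slot fillers remain free-form natural language that crowdworkers can write and experts can evaluate.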
Related papers
- CHECKWHY: Causal Fact Verification via Argument Structure [19.347690600431463]
CheckWhy is a dataset tailored to a novel causal fact verification task.
CheckWhy consists of over 19K "why" claim-evidence-argument structure triplets with supports, refutes, and not enough info labels.
arXiv Detail & Related papers (2024-08-20T15:03:35Z)
- The Odyssey of Commonsense Causality: From Foundational Benchmarks to Cutting-Edge Reasoning [70.16523526957162]
Understanding commonsense causality helps people understand the principles of the real world better.
Despite its significance, a systematic exploration of this topic is notably lacking.
Our work aims to provide a systematic overview, update scholars on recent advancements, and provide a pragmatic guide for beginners.
arXiv Detail & Related papers (2024-06-27T16:30:50Z)
- Counterfactual and Semifactual Explanations in Abstract Argumentation: Formal Foundations, Complexity and Computation [19.799266797193344]
Argumentation-based systems often lack explainability while supporting decision-making processes.
Counterfactual and semifactual explanations are interpretability techniques.
We show that counterfactual and semifactual queries can be encoded in a weak-constrained Argumentation Framework.
arXiv Detail & Related papers (2024-05-07T07:27:27Z)
- Generation of Explanations for Logic Reasoning [0.0]
The research is centred on employing GPT-3.5-turbo to automate the analysis of a fortiori arguments.
This thesis makes significant contributions to the fields of artificial intelligence and logical reasoning.
arXiv Detail & Related papers (2023-11-22T15:22:04Z)
- A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, thus opening up new future research directions in the area of formal argumentation and human-machine dialogues.
arXiv Detail & Related papers (2023-10-18T20:18:05Z)
- Crystal: Introspective Reasoners Reinforced with Self-Feedback [118.53428015478957]
We propose a novel method to develop an introspective commonsense reasoner, Crystal.
To tackle commonsense problems, it first introspects for knowledge statements related to the given question, and subsequently makes an informed prediction that is grounded in the previously introspected knowledge.
Experiments show that Crystal significantly outperforms both the standard supervised finetuning and chain-of-thought distilled methods, and enhances the transparency of the commonsense reasoning process.
arXiv Detail & Related papers (2023-10-07T21:23:58Z)
- Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge Reasoning via Promoting Causal Consistency in LLMs [63.26541167737355]
We present a framework to increase faithfulness and causality for knowledge-based reasoning.
Our framework outperforms all compared state-of-the-art approaches by large margins.
arXiv Detail & Related papers (2023-08-23T04:59:21Z)
- Contrastive Explanations for Argumentation-Based Conclusions [5.1398743023989555]
We discuss contrastive explanations for formal argumentation.
We show under which conditions contrastive explanations are meaningful, and how argumentation allows us to make implicit foils explicit.
arXiv Detail & Related papers (2021-07-07T15:00:47Z)
- Fact-driven Logical Reasoning for Machine Reading Comprehension [82.58857437343974]
We are motivated to cover both commonsense and temporary knowledge clues hierarchically.
Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence.
We then construct a supergraph on top of the fact units, allowing for the benefit of sentence-level (relations among fact groups) and entity-level interactions.
arXiv Detail & Related papers (2021-05-21T13:11:13Z)
- Necessary and Sufficient Explanations in Abstract Argumentation [3.9849889653167208]
We discuss necessary and sufficient explanations for formal argumentation.
We study necessity and sufficiency: what (sets of) arguments are necessary or sufficient for the (non-)acceptance of an argument?
arXiv Detail & Related papers (2020-11-04T17:12:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.