On the Visualisation of Argumentation Graphs to Support Text
Interpretation
- URL: http://arxiv.org/abs/2303.03235v1
- Date: Mon, 6 Mar 2023 15:51:30 GMT
- Title: On the Visualisation of Argumentation Graphs to Support Text
Interpretation
- Authors: Hanadi Mardah, Oskar Wysocki, Markel Vigo and Andre Freitas
- Abstract summary: This study focuses on analysing the impact of argumentation graphs (AGs) compared with regular texts for supporting argument interpretation.
AGs were considered to deliver a more critical approach to argument interpretation, especially with unfamiliar topics.
- Score: 2.3226893628361682
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The recent evolution in Natural Language Processing (NLP) methods, in
particular in the field of argumentation mining, has the potential to transform
the way we interact with text, supporting the interpretation and analysis of
complex discourse and debates. Can a graphic visualisation of complex
argumentation enable a more critical interpretation of the arguments? This
study focuses on analysing the impact of argumentation graphs (AGs) compared
with regular texts for supporting argument interpretation. We found that AGs
outperformed text on the extrinsic metrics across most UEQ scales, as well as on
the NASA-TLX workload in all dimensions except temporal and physical demand. The
AG model was preferred by a significantly larger number of participants, even
though the text-based and AG models yielded comparable outcomes for critical
interpretation in terms of working memory and altering participants'
decisions. The interpretation process involves reference to argumentation
schemes (linked to critical questions (CQs)) in AGs. Interestingly, we found
that the participants chose more CQs (using argument schemes in AGs) when they
were less familiar with the argument topics, making AG schemes on some scales
(relatively) supportive of the interpretation process. Therefore, AGs were
considered to deliver a more critical approach to argument interpretation,
especially with unfamiliar topics. Based on the study conducted with 25
participants, AGs appear to have an overall positive effect on the argument
interpretation process.
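The abstract describes argumentation graphs whose inferences are linked to argumentation schemes, each carrying critical questions (CQs) that prompt scrutiny of an argument. A minimal sketch of such a structure is shown below; all class and scheme names are illustrative assumptions, not taken from the paper's implementation.

```python
# Illustrative sketch of an argumentation graph (AG): nodes are statements,
# edges are support/attack relations, and a node's inference may be tagged
# with an argumentation scheme that carries critical questions (CQs).
from dataclasses import dataclass, field


@dataclass
class Scheme:
    name: str
    critical_questions: list


@dataclass
class Node:
    text: str
    scheme: Scheme = None  # optional: the scheme licensing this inference


@dataclass
class ArgumentationGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)  # (src, dst, relation) triples

    def add_node(self, nid, text, scheme=None):
        self.nodes[nid] = Node(text, scheme)

    def add_edge(self, src, dst, relation):
        assert relation in ("support", "attack")
        self.edges.append((src, dst, relation))

    def cqs_for(self, nid):
        # CQs a reader could raise against the inference at this node.
        node = self.nodes[nid]
        return node.scheme.critical_questions if node.scheme else []


# Example: a claim backed by an "expert opinion" scheme.
expert = Scheme(
    "Argument from Expert Opinion",
    ["Is the source a genuine expert?",
     "Is the claim within the expert's field?"],
)
ag = ArgumentationGraph()
ag.add_node("c1", "The policy is safe.", expert)
ag.add_node("p1", "Health authorities endorse it.")
ag.add_edge("p1", "c1", "support")
```

In this hypothetical structure, selecting a CQ for a node (as the study's participants did when topics were unfamiliar) amounts to calling `cqs_for("c1")` and choosing from the scheme's question list.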
Related papers
- Towards Comprehensive Argument Analysis in Education: Dataset, Tasks, and Method [14.718309497236694]
We propose 14 fine-grained relation types from both vertical and horizontal dimensions. We conduct experiments on three tasks: argument component detection, relation prediction, and automated essay grading. The findings highlight the importance of fine-grained argumentative annotations for argumentative writing quality assessment and encourage multi-dimensional argument analysis.
arXiv Detail & Related papers (2025-05-17T14:36:51Z) - Reasoning with Graphs: Structuring Implicit Knowledge to Enhance LLMs Reasoning [73.2950349728376]
Large language models (LLMs) have demonstrated remarkable success across a wide range of tasks.
However, they still encounter challenges in reasoning tasks that require understanding and inferring relationships between pieces of information.
This challenge is particularly pronounced in tasks involving multi-step processes, such as logical reasoning and multi-hop question answering.
We propose Reasoning with Graphs (RwG) by first constructing explicit graphs from the context.
arXiv Detail & Related papers (2025-01-14T05:18:20Z) - GRS-QA -- Graph Reasoning-Structured Question Answering Dataset [50.223851616680754]
We introduce the Graph Reasoning-Structured Question Answering dataset (GRS-QA), which includes both semantic contexts and reasoning structures for QA pairs.
Unlike existing M-QA datasets, GRS-QA explicitly captures intricate reasoning pathways by constructing reasoning graphs.
Our empirical analysis reveals that LLMs perform differently when handling questions with varying reasoning structures.
arXiv Detail & Related papers (2024-11-01T05:14:03Z) - Counterfactual and Semifactual Explanations in Abstract Argumentation: Formal Foundations, Complexity and Computation [19.799266797193344]
Argumentation-based systems often lack explainability while supporting decision-making processes.
Counterfactual and semifactual explanations are interpretability techniques.
We show that counterfactual and semifactual queries can be encoded in a weak-constrained Argumentation Framework.
arXiv Detail & Related papers (2024-05-07T07:27:27Z) - Argue with Me Tersely: Towards Sentence-Level Counter-Argument
Generation [62.069374456021016]
We present the ArgTersely benchmark for sentence-level counter-argument generation.
We also propose Arg-LlaMA for generating high-quality counter-arguments.
arXiv Detail & Related papers (2023-12-21T06:51:34Z) - Generation of Explanations for Logic Reasoning [0.0]
The research is centred on employing GPT-3.5-turbo to automate the analysis of a fortiori arguments.
This thesis makes significant contributions to the fields of artificial intelligence and logical reasoning.
arXiv Detail & Related papers (2023-11-22T15:22:04Z) - DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954]
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aiming at benchmarking machines' capabilities on pragmatic reasoning and situated conversational understanding.
arXiv Detail & Related papers (2023-06-15T10:41:23Z) - Natural Language Decompositions of Implicit Content Enable Better Text
Representations [56.85319224208865]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z) - The Legal Argument Reasoning Task in Civil Procedure [2.079168053329397]
We present a new NLP task and dataset from the domain of the U.S. civil procedure.
Each instance of the dataset consists of a general introduction to the case, a particular question, and a possible solution argument.
arXiv Detail & Related papers (2022-11-05T17:41:00Z) - Did the Cat Drink the Coffee? Challenging Transformers with Generalized
Event Knowledge [59.22170796793179]
Transformer Language Models (TLMs) were tested on a benchmark for the dynamic estimation of thematic fit.
Our results show that TLMs can reach performances that are comparable to those achieved by SDM.
However, additional analysis consistently suggests that TLMs do not capture important aspects of event knowledge.
arXiv Detail & Related papers (2021-07-22T20:52:26Z) - Everything Has a Cause: Leveraging Causal Inference in Legal Text
Analysis [62.44432226563088]
Causal inference is the process of capturing cause-effect relationship among variables.
We propose a novel Graph-based Causal Inference framework, which builds causal graphs from fact descriptions without much human involvement.
We observe that the causal knowledge contained in GCI can be effectively injected into powerful neural networks for better performance and interpretability.
arXiv Detail & Related papers (2021-04-19T16:13:10Z) - Argument Mining Driven Analysis of Peer-Reviews [4.552676857046446]
We propose an Argument Mining based approach for the assistance of editors, meta-reviewers, and reviewers.
One of our findings is that arguments used in the peer-review process differ from arguments in other domains making the transfer of pre-trained models difficult.
We provide the community with a new peer-review dataset from different computer science conferences with annotated arguments.
arXiv Detail & Related papers (2020-12-10T16:06:21Z) - SRLGRN: Semantic Role Labeling Graph Reasoning Network [22.06211725256875]
This work deals with the challenge of learning and reasoning over multi-hop question answering (QA).
We propose a graph reasoning network based on the semantic structure of the sentences to learn cross paragraph reasoning paths.
Our proposed approach shows competitive performance on the HotpotQA distractor setting benchmark compared to the recent state-of-the-art models.
arXiv Detail & Related papers (2020-10-07T18:51:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences.