End-to-End Argument Mining over Varying Rhetorical Structures
- URL: http://arxiv.org/abs/2401.11218v1
- Date: Sat, 20 Jan 2024 12:00:40 GMT
- Title: End-to-End Argument Mining over Varying Rhetorical Structures
- Authors: Elena Chistova
- Abstract summary: Rhetorical Structure Theory implies no single discourse interpretation of a text.
The same argumentative structure can be found in semantically similar texts with varying rhetorical structures.
- Score: 5.439020425819001
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rhetorical Structure Theory implies no single discourse interpretation of a
text, and the limitations of RST parsers further exacerbate inconsistent
parsing of similar structures. Therefore, it is important to take into account
that the same argumentative structure can be found in semantically similar
texts with varying rhetorical structures. In this work, the differences between
paraphrases within the same argument scheme are evaluated from a rhetorical
perspective. The study proposes a deep dependency parsing model to assess the
connection between rhetorical and argument structures. The model utilizes
rhetorical relations; RST structures of paraphrases serve as training data
augmentations. The method allows for end-to-end argumentation analysis using a
rhetorical tree instead of a word sequence. It is evaluated on the bilingual
Microtexts corpus, and the first results on fully-fledged argument parsing for
the Russian version of the corpus are reported. The results suggest that
argument mining can benefit from multiple variants of discourse structure.
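As a rough illustration of the setup described in the abstract, the sketch below shows one way an argument graph defined over elementary discourse units (EDUs) could be paired with several RST trees obtained from paraphrases of the same text, so that a single gold argument structure yields multiple training instances. This is a minimal sketch under assumed data structures; all class and function names are illustrative and not the paper's actual code.

```python
# Hypothetical sketch: argument structure as a dependency graph over the EDUs
# of a rhetorical (RST) tree, with RST parses of paraphrases used as
# training-data augmentation. Names are illustrative assumptions only.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class RSTNode:
    """A node in an RST tree: a leaf EDU or an internal relation node."""
    relation: Optional[str] = None      # e.g. "Elaboration"; None for leaf EDUs
    nuclearity: Optional[str] = None    # "NN", "NS", or "SN" for internal nodes
    text: Optional[str] = None          # EDU text for leaves
    children: List["RSTNode"] = field(default_factory=list)

    def edus(self) -> List["RSTNode"]:
        """Collect leaf EDUs from left to right."""
        if not self.children:
            return [self]
        return [edu for child in self.children for edu in child.edus()]


@dataclass
class ArgDependency:
    """One labelled arc of the argument graph, defined over EDU indices."""
    head: int        # index of the supported/attacked EDU (0 = virtual root)
    dependent: int   # index of the supporting/attacking EDU
    label: str       # e.g. "support", "attack", "same-arg"


def augment_with_paraphrases(arg_arcs: List[ArgDependency],
                             rst_trees: List[RSTNode]):
    """Pair one gold argument graph with several rhetorical trees obtained
    from paraphrases of the same text, yielding one instance per tree."""
    for tree in rst_trees:
        yield {
            "edus": [edu.text for edu in tree.edus()],
            "rst": tree,
            "arcs": arg_arcs,
        }
```

In this reading, the parser consumes a rhetorical tree rather than a raw word sequence, and the varying RST structures of paraphrases act as additional views of the same argumentative target.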
Related papers
- "You Are An Expert Linguistic Annotator": Limits of LLMs as Analyzers of
Abstract Meaning Representation [60.863629647985526]
We examine the successes and limitations of the GPT-3, ChatGPT, and GPT-4 models in the analysis of sentence meaning structure.
We find that models can reliably reproduce the basic format of AMR, and can often capture core event, argument, and modifier structure.
Overall, our findings indicate that these models out-of-the-box can capture aspects of semantic structure, but there remain key limitations in their ability to support fully accurate semantic analyses or parses.
arXiv Detail & Related papers (2023-10-26T21:47:59Z)
- ParaAMR: A Large-Scale Syntactically Diverse Paraphrase Dataset by AMR Back-Translation [59.91139600152296]
ParaAMR is a large-scale syntactically diverse paraphrase dataset created by abstract meaning representation back-translation.
We show that ParaAMR can be used to improve on three NLP tasks: learning sentence embeddings, syntactically controlled paraphrase generation, and data augmentation for few-shot learning.
arXiv Detail & Related papers (2023-05-26T02:27:33Z)
- Structural Ambiguity and its Disambiguation in Language Model Based Parsers: the Case of Dutch Clause Relativization [2.9950872478176627]
We study how the presence of a prior sentence can resolve relative clause ambiguities.
Results show that a neurosymbolic approach, based on proof nets, is more open to data bias correction than an approach based on universal dependencies.
arXiv Detail & Related papers (2023-05-24T09:04:18Z)
- Cascading and Direct Approaches to Unsupervised Constituency Parsing on Spoken Sentences [67.37544997614646]
We present the first study on unsupervised spoken constituency parsing.
The goal is to determine the spoken sentences' hierarchical syntactic structure in the form of constituency parse trees.
We show that accurate segmentation alone may be sufficient to parse spoken sentences accurately.
arXiv Detail & Related papers (2023-03-15T17:57:22Z)
- Discourse Analysis via Questions and Answers: Parsing Dependency Structures of Questions Under Discussion [57.43781399856913]
This work adopts the linguistic framework of Questions Under Discussion (QUD) for discourse analysis.
We characterize relationships between sentences as free-form questions, in contrast to exhaustive fine-grained questions.
We develop a first-of-its-kind QUD parser that derives a dependency structure of questions over full documents.
arXiv Detail & Related papers (2022-10-12T03:53:12Z)
- RuArg-2022: Argument Mining Evaluation [69.87149207721035]
This paper is the organizers' report on the first competition of argumentation analysis systems dealing with Russian-language texts.
A corpus containing 9,550 sentences (comments on social media posts) on three topics related to the COVID-19 pandemic was prepared.
The system that won first place in both tasks used the NLI (Natural Language Inference) variant of the BERT architecture; a hedged sketch of this formulation appears after this list.
arXiv Detail & Related papers (2022-06-18T17:13:37Z)
- Exploring Discourse Structures for Argument Impact Classification [48.909640432326654]
This paper empirically shows that the discourse relations between two arguments along the context path are essential factors for identifying the persuasive power of an argument.
We propose DisCOC to inject and fuse the sentence-level structural information with contextualized features derived from large-scale language models.
arXiv Detail & Related papers (2021-06-02T06:49:19Z)
- Context-Preserving Text Simplification [11.830061911323025]
We present a context-preserving text simplification (TS) approach that splits and rephrases complex English sentences into a semantic hierarchy of simplified sentences.
Using a set of linguistically principled transformation patterns, input sentences are converted into a hierarchical representation in the form of core sentences and accompanying contexts that are linked via rhetorical relations.
A comparative analysis with the annotations contained in the RST-DT shows that we are able to capture the contextual hierarchy between the split sentences with a precision of 89% and reach an average precision of 69% for the classification of the rhetorical relations that hold between them.
arXiv Detail & Related papers (2021-05-24T09:54:56Z)
- Extracting Implicitly Asserted Propositions in Argumentation [8.20413690846954]
We study methods for extracting propositions implicitly asserted in questions, reported speech, and imperatives in argumentation.
Our study may inform future research on argument mining and the semantics of these rhetorical devices in argumentation.
arXiv Detail & Related papers (2020-10-06T12:03:47Z)
- AMPERSAND: Argument Mining for PERSuAsive oNline Discussions [41.06165177604387]
We propose a computational model for argument mining in online persuasive discussion forums.
Our approach relies on identifying relations between components of arguments in a discussion thread.
Our models obtain significant improvements compared to recent state-of-the-art approaches.
arXiv Detail & Related papers (2020-04-30T10:33:40Z)
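As a complement to the RuArg-2022 entry above, the following is a hedged sketch of the NLI-style formulation it mentions: a comment (premise) is paired with a textual hypothesis expressing a stance towards a topic, and a BERT cross-encoder scores the pair. The checkpoint name, hypothesis template, and label set below are placeholders and assumptions, not the winning system's actual configuration; fine-tuning on NLI or stance data is assumed.

```python
# Hedged sketch of stance/argument classification framed as NLI with a
# BERT cross-encoder. Checkpoint and templates are placeholders only.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # placeholder; an NLI-fine-tuned checkpoint is assumed
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)


def stance_scores(sentence: str, topic: str) -> torch.Tensor:
    """Score entailment-style labels for a (sentence, stance hypothesis) pair."""
    hypothesis = f"The author supports {topic}."  # illustrative hypothesis template
    inputs = tokenizer(sentence, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # After fine-tuning, columns would correspond to e.g. contradiction/neutral/entailment.
    return logits.softmax(dim=-1)


# Example usage:
# probs = stance_scores("Masks should be mandatory indoors.", "wearing masks")
```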