Mind the Gap: Automated Corpus Creation for Enthymeme Detection and
Reconstruction in Learner Arguments
- URL: http://arxiv.org/abs/2310.18098v1
- Date: Fri, 27 Oct 2023 12:33:40 GMT
- Authors: Maja Stahl, Nick Düsterhus, Mei-Hua Chen, Henning Wachsmuth
- Abstract summary: This paper introduces two new tasks for learner arguments: to identify gaps in arguments and to fill such gaps.
Based on the ICLEv3 corpus of argumentative learner essays, we create 40,089 argument instances for enthymeme detection and reconstruction.
- Score: 15.184644294253848
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Writing strong arguments can be challenging for learners. It requires
selecting and arranging multiple argumentative discourse units (ADUs) in a logical
and coherent way, as well as deciding which ADUs to leave implicit, so-called
enthymemes. However, when important ADUs are missing, readers might not be able
to follow the reasoning or understand the argument's main point. This paper
introduces two new tasks for learner arguments: to identify gaps in arguments
(enthymeme detection) and to fill such gaps (enthymeme reconstruction).
Approaches to both tasks may help learners improve their argument quality. We
study how corpora for these tasks can be created automatically by deleting ADUs
from an argumentative text that are central to the argument and its quality,
while maintaining the text's naturalness. Based on the ICLEv3 corpus of
argumentative learner essays, we create 40,089 argument instances for enthymeme
detection and reconstruction. Through manual studies, we provide evidence that
the proposed corpus creation process leads to the desired quality reduction,
and results in arguments that are similarly natural to those written by
learners. Finally, first baseline approaches to enthymeme detection and
reconstruction demonstrate the corpus' usefulness.
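The corpus creation process described in the abstract can be sketched in outline: segment an argument into ADUs, select an ADU that is central to the argument, and delete it to obtain a gapped text (the input for enthymeme detection) paired with the removed ADU (the target for enthymeme reconstruction). The sketch below is a minimal, hypothetical illustration; the `is_central` predicate and all names are stand-ins for the paper's actual quality- and naturalness-based selection criteria, not its implementation.

```python
from dataclasses import dataclass

@dataclass
class EnthymemeInstance:
    """One instance: an argument with a single ADU deleted."""
    text_with_gap: str   # argument text after removing the ADU (detection input)
    gap_index: int       # position of the deleted ADU (detection label)
    deleted_adu: str     # the removed ADU (reconstruction target)

def create_instances(adus, is_central):
    """Create enthymeme instances from an argument segmented into ADUs.

    `is_central` is a hypothetical predicate marking ADUs whose removal
    reduces argument quality while keeping the remaining text natural.
    One instance is created per central ADU.
    """
    instances = []
    for i, adu in enumerate(adus):
        if not is_central(i, adus):
            continue
        remaining = adus[:i] + adus[i + 1:]
        instances.append(EnthymemeInstance(
            text_with_gap=" ".join(remaining),
            gap_index=i,
            deleted_adu=adu,
        ))
    return instances

# Toy example: treat the middle ADU (the warrant) as central.
adus = [
    "School uniforms should be mandatory.",
    "Uniforms reduce visible income differences between students.",
    "Less visible inequality means less bullying.",
]
instances = create_instances(adus, lambda i, a: i == 1)
```

Deleting only ADUs deemed central is what makes the resulting gap detectable in principle: removing a peripheral ADU would leave the argument's reasoning intact and yield a near-unsolvable detection instance.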
Related papers
- An Incomplete Loop: Deductive, Inductive, and Abductive Learning in Large Language Models [99.31449616860291]
Modern language models (LMs) can learn to perform new tasks in different ways.
In instruction following, the target task is described explicitly in natural language; in few-shot prompting, the task is specified implicitly.
In instruction inference, LMs are presented with in-context examples and are then prompted to generate a natural language task description.
arXiv Detail & Related papers (2024-04-03T19:31:56Z)
- A School Student Essay Corpus for Analyzing Interactions of Argumentative Structure and Quality [12.187586364960758]
We present a German corpus of 1,320 essays from school students of two age groups.
Each essay has been manually annotated for argumentative structure and quality on multiple levels of granularity.
We propose baseline approaches to argument mining and essay scoring, and we analyze interactions between both tasks.
arXiv Detail & Related papers (2024-04-03T07:31:53Z)
- Exploring Jiu-Jitsu Argumentation for Writing Peer Review Rebuttals [70.22179850619519]
In many domains of argumentation, people's arguments are driven by so-called attitude roots.
Recent work in psychology suggests that instead of directly countering surface-level reasoning, one should follow an argumentation style inspired by the Jiu-Jitsu 'soft' combat system.
We are the first to explore Jiu-Jitsu argumentation for peer review by proposing the novel task of attitude and theme-guided rebuttal generation.
arXiv Detail & Related papers (2023-11-07T13:54:01Z)
- AMERICANO: Argument Generation with Discourse-driven Decomposition and Agent Interaction [25.38899822861742]
We propose Americano, a novel framework with agent interaction for argument generation.
Our approach decomposes the generation process into sequential actions grounded on argumentation theory.
Our method outperforms both end-to-end and chain-of-thought prompting methods.
arXiv Detail & Related papers (2023-10-31T10:47:33Z)
- A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, thus opening up new future research directions in the area of formal argumentation and human-machine dialogues.
arXiv Detail & Related papers (2023-10-18T20:18:05Z)
- ArgU: A Controllable Factual Argument Generator [0.0]
ArgU is a neural argument generator capable of producing factual arguments from input facts and real-world concepts.
We have compiled and released an annotated corpus of 69,428 arguments spanning six topics and six argument schemes.
arXiv Detail & Related papers (2023-05-09T10:49:45Z)
- MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text Generation [102.20036684996248]
We propose MURMUR, a neuro-symbolic modular approach to text generation from semi-structured data with multi-step reasoning.
We conduct experiments on two data-to-text generation tasks, WebNLG and LogicNLG.
arXiv Detail & Related papers (2022-12-16T17:36:23Z)
- Generating Informative Conclusions for Argumentative Texts [32.3103908466811]
The purpose of an argumentative text is to support a certain conclusion.
An explicit conclusion makes for a good candidate summary of an argumentative text.
This is especially true if the conclusion is informative, emphasizing specific concepts from the text.
arXiv Detail & Related papers (2021-06-02T10:35:59Z)
- Discern: Discourse-Aware Entailment Reasoning Network for Conversational Machine Reading [157.14821839576678]
Discern is a discourse-aware entailment reasoning network to strengthen the connection and enhance the understanding for both document and dialog.
Our experiments show that Discern achieves state-of-the-art results of 78.3% macro-averaged accuracy on decision making and 64.0 BLEU1 on follow-up question generation.
arXiv Detail & Related papers (2020-10-05T07:49:51Z)
- Critical Thinking for Language Models [6.963299759354333]
This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models.
We generate artificial argumentative texts to train and evaluate GPT-2.
We obtain consistent and promising results for NLU benchmarks.
arXiv Detail & Related papers (2020-09-15T15:49:19Z)
- Aspect-Controlled Neural Argument Generation [65.91772010586605]
We train a language model for argument generation that can be controlled on a fine-grained level to generate sentence-level arguments for a given topic, stance, and aspect.
Our evaluation shows that our generation model is able to generate high-quality, aspect-specific arguments.
These arguments can be used to improve the performance of stance detection models via data augmentation and to generate counter-arguments.
arXiv Detail & Related papers (2020-04-30T20:17:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.