Argumentative Text Generation in Economic Domain
- URL: http://arxiv.org/abs/2206.09251v1
- Date: Sat, 18 Jun 2022 17:22:06 GMT
- Title: Argumentative Text Generation in Economic Domain
- Authors: Irina Fishcheva, Dmitriy Osadchiy, Klavdiya Bochenina, Evgeny
Kotelnikov
- Abstract summary: The key problem of argument text generation for the Russian language is the lack of annotated argumentation corpora.
In this paper, we use translated versions of the Argumentative Microtext, Persuasive Essays and UKP Sentential corpora to fine-tune the RuBERT model.
The results show that this approach improves the accuracy of argument generation by more than 20 percentage points compared to the original ruGPT-3 model.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The development of large and super-large language models, such as GPT-3, T5,
Switch Transformer, ERNIE, etc., has significantly improved the performance of
text generation. One of the important research directions in this area is the
generation of texts with arguments. Solutions to this problem can be used in
business meetings, political debates, dialogue systems, and the preparation of
student essays. One of the main domains for these applications is the economic
sphere. The key problem of argument text generation for the Russian
language is the lack of annotated argumentation corpora. In this paper, we use
translated versions of the Argumentative Microtext, Persuasive Essays and UKP
Sentential corpora to fine-tune the RuBERT model. This model is then used to
annotate a corpus of economic news with argumentation labels. The annotated
corpus is employed to fine-tune the ruGPT-3 model, which generates argument
texts. The results show that this approach improves the accuracy of
argument generation by more than 20 percentage points (63.2% vs. 42.5%)
compared to the original ruGPT-3 model.
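The three-stage pipeline in the abstract (fine-tune a RuBERT classifier on translated argumentation corpora, use it to annotate an economic-news corpus, then fine-tune ruGPT-3 on the resulting annotations) can be sketched structurally as follows. This is a minimal sketch with stub models, not the authors' implementation: the function names, the label set, and the keyword-based "classifier" are all hypothetical placeholders; in practice each stub would be a transformer fine-tuning step.

```python
# Structural sketch of the paper's pipeline with stub models:
#   1) train an argument classifier (RuBERT in the paper; a toy stub here),
#   2) annotate an unlabeled economic-news corpus with it,
#   3) collect the argument-labeled sentences as the generator's training
#      set (ruGPT-3 fine-tuning in the paper; omitted here).
# All names and the cue-word "classifier" are hypothetical placeholders.

from typing import Callable, List, Tuple

def train_argument_classifier(labeled: List[Tuple[str, str]]) -> Callable[[str], str]:
    """Stub for stage 1: stands in for fine-tuning RuBERT on the
    translated Microtext / Persuasive Essays / UKP Sentential corpora."""
    cue_words = {w for text, label in labeled if label == "argument"
                 for w in text.lower().split()}
    return lambda sent: ("argument"
                         if set(sent.lower().split()) & cue_words
                         else "non-argument")

def annotate_corpus(corpus: List[str],
                    clf: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Stage 2: label each news sentence as argument / non-argument."""
    return [(sent, clf(sent)) for sent in corpus]

def collect_generator_training_set(annotated: List[Tuple[str, str]]) -> List[str]:
    """Stub for stage 3: in the paper, ruGPT-3 is fine-tuned on the
    argument-labeled sentences; here we just gather them."""
    return [sent for sent, label in annotated if label == "argument"]

# Toy run with invented sentences
seed = [("prices rise because demand grows", "argument"),
        ("the meeting starts at noon", "non-argument")]
clf = train_argument_classifier(seed)
news = ["inflation rises because demand outpaces supply",
        "the report was published on Monday"]
training_set = collect_generator_training_set(annotate_corpus(news, clf))
print(training_set)  # only the argument-labeled sentence survives
```

The point of the sketch is the data flow: the classifier trained on translated corpora produces silver labels, and only those labels, not the original corpora, feed the generator's fine-tuning.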
Related papers
- Argue with Me Tersely: Towards Sentence-Level Counter-Argument
Generation [62.069374456021016]
We present the ArgTersely benchmark for sentence-level counter-argument generation.
We also propose Arg-LlaMA for generating high-quality counter-arguments.
arXiv Detail & Related papers (2023-12-21T06:51:34Z)
- AI, write an essay for me: A large-scale comparison of human-written
versus ChatGPT-generated essays [66.36541161082856]
ChatGPT and similar generative AI models have attracted hundreds of millions of users.
This study compares human-written versus ChatGPT-generated argumentative student essays.
ChatGPT and similar generative AI models have attracted hundreds of millions of users.
This study compares human-written versus ChatGPT-generated argumentative student essays.
arXiv Detail & Related papers (2023-04-24T12:58:28Z)
- ArguGPT: evaluating, understanding and identifying argumentative essays
generated by GPT models [9.483206389157509]
We first present ArguGPT, a balanced corpus of 4,038 argumentative essays generated by 7 GPT models.
We then hire English instructors to distinguish machine essays from human ones.
Results show that when first exposed to machine-generated essays, the instructors only have an accuracy of 61% in detecting them.
arXiv Detail & Related papers (2023-04-16T01:50:26Z)
- Elaboration-Generating Commonsense Question Answering at Scale [77.96137534751445]
In question answering requiring common sense, language models (e.g., GPT-3) have been used to generate text expressing background knowledge.
We finetune smaller language models to generate useful intermediate context, referred to here as elaborations.
Our framework alternates between updating two language models -- an elaboration generator and an answer predictor -- allowing each to influence the other.
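The two-model interface described above, a generator that produces background text and a predictor that consumes the question together with that text, can be sketched with trivial stand-ins. Both "models" below are rule-based stubs invented for illustration; only the data flow (who feeds whom) mirrors the description, not the alternating training itself.

```python
# Hypothetical sketch of the generator/predictor interface: an elaboration
# generator produces background knowledge for a question, and an answer
# predictor conditions on question + elaboration. Both functions are toy
# rule-based stand-ins, not trained models.

def generate_elaboration(question: str) -> str:
    # Stub for the fine-tuned elaboration generator.
    if "freeze" in question:
        return "Water freezes at 0 degrees Celsius."
    return ""

def predict_answer(question: str, elaboration: str) -> str:
    # Stub for the answer predictor conditioned on the elaboration.
    return "yes" if "0 degrees" in elaboration else "unknown"

q = "Will water left outside at -5 C freeze?"
e = generate_elaboration(q)
print(predict_answer(q, e))  # the elaboration supplies the missing fact
```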
arXiv Detail & Related papers (2022-09-02T18:32:09Z)
- Automatic Summarization of Russian Texts: Comparison of Extractive and
Abstractive Methods [0.0]
arXiv Detail & Related papers (2022-06-18T17:28:04Z)
- RuArg-2022: Argument Mining Evaluation [69.87149207721035]
This paper is the organizers' report on the first competition of argumentation analysis systems for Russian-language texts.
A corpus containing 9,550 sentences (comments on social media posts) on three topics related to the COVID-19 pandemic was prepared.
The system that won the first place in both tasks used the NLI (Natural Language Inference) variant of the BERT architecture.
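Casting stance detection as NLI, as the winning system did, means pairing each sentence (the premise) with a per-topic hypothesis so that the model's entailment / contradiction / neutral output maps onto for / against / other. A minimal sketch of the input construction, where both the topic list and the hypothesis template are invented for illustration and are not taken from the competition:

```python
# Hypothetical sketch of the NLI reformulation of stance detection:
# pair a sentence (premise) with one hypothesis per topic, then read
# entailment as "for" and contradiction as "against".
# The topic list and template below are illustrative, not the competition's.

TOPICS = ["masks", "vaccines", "quarantine"]

def nli_pairs(sentence: str, topics=TOPICS):
    """Build (premise, hypothesis) pairs, one per topic."""
    return [(sentence, f"The author supports {topic}.") for topic in topics]

pairs = nli_pairs("Masks should be mandatory indoors.")
print(pairs[0])
```

Each pair would then be fed to an NLI-fine-tuned BERT model as a standard premise/hypothesis input.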
arXiv Detail & Related papers (2022-06-18T17:13:37Z)
- Fine-tuning GPT-3 for Russian Text Summarization [77.34726150561087]
This paper showcases ruGPT-3's ability to summarize texts, fine-tuning it on corpora of Russian news with corresponding human-generated summaries.
We evaluate the resulting texts with a set of metrics, showing that our solution can surpass the state-of-the-art model's performance without additional changes in architecture or loss function.
arXiv Detail & Related papers (2021-08-07T19:01:40Z)
- Traditional Machine Learning and Deep Learning Models for Argumentation
Mining in Russian Texts [0.0]
A significant obstacle to research in this area for the Russian language is the lack of annotated Russian-language text corpora.
This article explores the possibility of improving the quality of argumentation mining using an extension of the Russian-language version of the Argumentative Micro Corpus (ArgMicro) based on machine translation of the Persuasive Essays Corpus (PersEssays).
We solve the problem of classifying argumentative discourse units (ADUs) into two classes, "pro" ("for") and "opp" ("against"), using traditional machine learning techniques (SVM, Bagging and XGBoost) and a deep neural network.
arXiv Detail & Related papers (2021-06-28T07:44:43Z)
- Generating Informative Conclusions for Argumentative Texts [32.3103908466811]
The purpose of an argumentative text is to support a certain conclusion.
An explicit conclusion makes for a good candidate summary of an argumentative text.
This is especially true if the conclusion is informative, emphasizing specific concepts from the text.
arXiv Detail & Related papers (2021-06-02T10:35:59Z)
- Critical Thinking for Language Models [6.963299759354333]
This paper takes a first step towards a critical thinking curriculum for neural auto-regressive language models.
We generate artificial argumentative texts to train and evaluate GPT-2.
We obtain consistent and promising results for NLU benchmarks.
arXiv Detail & Related papers (2020-09-15T15:49:19Z)
- Aspect-Controlled Neural Argument Generation [65.91772010586605]
We train a language model for argument generation that can be controlled on a fine-grained level to generate sentence-level arguments for a given topic, stance, and aspect.
Our evaluation shows that our generation model is able to generate high-quality, aspect-specific arguments.
These arguments can be used to improve the performance of stance detection models via data augmentation and to generate counter-arguments.
arXiv Detail & Related papers (2020-04-30T20:17:22Z)
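Fine-grained control of the kind described in the last entry is commonly implemented by prefixing training and inference inputs with control codes for topic, stance, and aspect. A hypothetical sketch of such prompt construction, where the bracketed tag format is an invented convention and not the paper's actual scheme:

```python
# Hypothetical sketch of control-code conditioning for argument
# generation: the prefix encodes topic, stance, and aspect, and a
# conditional language model is trained on (prefix + argument) strings.
# The tag format is an invented convention for illustration.

def control_prompt(topic: str, stance: str, aspect: str) -> str:
    """Build a control-code prefix for conditional argument generation."""
    return f"[TOPIC={topic}] [STANCE={stance}] [ASPECT={aspect}] "

p = control_prompt("nuclear energy", "con", "waste disposal")
print(p)
```

At inference time, varying only the aspect tag while holding topic and stance fixed is what yields the aspect-specific arguments the evaluation measures.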
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.