Can we obtain significant success in RST discourse parsing by using
Large Language Models?
- URL: http://arxiv.org/abs/2403.05065v1
- Date: Fri, 8 Mar 2024 05:34:29 GMT
- Title: Can we obtain significant success in RST discourse parsing by using
Large Language Models?
- Authors: Aru Maekawa, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura
- Abstract summary: Decoder-only large language models (LLMs) have significantly impacted a wide range of natural language processing (NLP) tasks.
This paper explores how beneficial such LLMs are for Rhetorical Structure Theory (RST) discourse parsing.
Experimental results on three benchmark datasets, RST-DT, Instr-DT, and the GUM corpus, demonstrate that Llama 2 with 70 billion parameters in the bottom-up strategy obtained state-of-the-art results with significant differences.
- Score: 32.94244684710954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, decoder-only pre-trained large language models (LLMs), with several
tens of billions of parameters, have significantly impacted a wide range of natural
language processing (NLP) tasks. While encoder-only or encoder-decoder
pre-trained language models have already proved to be effective in discourse
parsing, the extent to which LLMs can perform this task remains an open
research question. Therefore, this paper explores how beneficial such LLMs are
for Rhetorical Structure Theory (RST) discourse parsing. Here, the parsing
process for both fundamental top-down and bottom-up strategies is converted
into prompts, which LLMs can work with. We employ Llama 2 and fine-tune it with
QLoRA, which reduces the number of trainable parameters. Experimental results on
three benchmark datasets, RST-DT, Instr-DT, and the GUM corpus, demonstrate
that Llama 2 with 70 billion parameters in the bottom-up strategy obtained
state-of-the-art (SOTA) results with significant differences. Furthermore, our
parsers demonstrated generalizability when evaluated on RST-DT: although trained
on the GUM corpus, they obtained performance similar to that of existing parsers
trained on RST-DT.
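To make the prompt-conversion idea concrete, here is a minimal sketch of how a single bottom-up (shift-reduce style) parsing step might be rendered as a prompt for a causal LLM; the template wording, action format, and relation label are illustrative assumptions rather than the paper's actual prompt design.

```python
# Sketch: rendering one bottom-up (shift-reduce style) RST parsing step as an
# LLM prompt. Template wording, action names, and the relation label are
# illustrative assumptions, not the prompt format used in the paper.

def build_step_prompt(edus, stack, queue):
    """Render the current parser state as a prompt asking for the next action."""
    lines = ["Task: RST discourse parsing (bottom-up)."]
    lines.append("Elementary discourse units:")
    for i, edu in enumerate(edus, 1):
        lines.append(f"  EDU{i}: {edu}")
    lines.append("Stack (top last): " + (", ".join(stack) if stack else "empty"))
    lines.append("Queue: " + (", ".join(queue) if queue else "empty"))
    lines.append("Next action? Answer 'shift' or 'reduce:<nuclearity>:<relation>',"
                 " e.g. 'reduce:NS:Elaboration'.")
    return "\n".join(lines)


edus = [
    "The company reported higher profits,",
    "which surprised analysts,",
    "because sales had been flat.",
]
print(build_step_prompt(edus, stack=["EDU1"], queue=["EDU2", "EDU3"]))
# A fine-tuned LLM generates the action string; the parser applies it, updates
# the stack and queue, and re-prompts until a single tree span remains.
```

The abstract also mentions fine-tuning Llama 2 with QLoRA. A minimal loading sketch along those lines, using Hugging Face transformers, bitsandbytes, and peft (the LoRA rank, target modules, and other hyperparameters are assumptions, not the paper's settings):

```python
# Sketch: QLoRA-style setup for Llama 2 (4-bit base weights + LoRA adapters).
# Hyperparameters and target modules are assumptions, not the paper's settings.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf", quantization_config=bnb_config, device_map="auto"
)
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```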
Related papers
- Prefix Text as a Yarn: Eliciting Non-English Alignment in Foundation Language Model [50.339632513018934]
Supervised fine-tuning (SFT) has been a straightforward approach for tailoring the output of a foundation large language model (LLM) to specific preferences.
We critically examine this hypothesis within the scope of cross-lingual generation tasks.
We introduce a novel training-free alignment method named PreTTY, which employs minimal task-related prior tokens.
arXiv Detail & Related papers (2024-04-25T17:19:36Z)
- Large Language Models: A Survey [69.72787936480394]
Large Language Models (LLMs) have drawn a lot of attention due to their strong performance on a wide range of natural language tasks.
LLMs acquire their general-purpose language understanding and generation abilities by training billions of model parameters on massive amounts of text data.
arXiv Detail & Related papers (2024-02-09T05:37:09Z)
- Adapting Large Language Models for Document-Level Machine Translation [46.370862171452444]
Large language models (LLMs) have significantly advanced various natural language processing (NLP) tasks.
Recent research indicates that moderately-sized LLMs often outperform larger ones after task-specific fine-tuning.
This study focuses on adapting LLMs for document-level machine translation (DocMT) for specific language pairs.
arXiv Detail & Related papers (2024-01-12T09:29:13Z)
- Speech Translation with Large Language Models: An Industrial Practice [64.5419534101104]
We introduce LLM-ST, a novel and effective speech translation model constructed upon a pre-trained large language model (LLM).
By integrating the large language model (LLM) with a speech encoder and employing multi-task instruction tuning, LLM-ST can produce accurate timestamped transcriptions and translations.
Through rigorous experimentation on English and Chinese datasets, we showcase the exceptional performance of LLM-ST.
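As a rough illustration of the adapter pattern implied above (a speech encoder feeding an LLM), audio features can be encoded, projected into the LLM's embedding space, and prepended to the instruction embeddings. The modules and dimensions below are toy placeholders, not LLM-ST's actual architecture:

```python
# Generic sketch of the "speech encoder -> projector -> LLM" pattern.
# Dimensions and modules are toy placeholders, not LLM-ST's design.
import torch
import torch.nn as nn

speech_dim, llm_dim, vocab = 256, 512, 1000

speech_encoder = nn.GRU(input_size=80, hidden_size=speech_dim, batch_first=True)
projector = nn.Linear(speech_dim, llm_dim)          # maps audio features to LLM space
token_embed = nn.Embedding(vocab, llm_dim)          # stands in for the LLM's embeddings

fbank = torch.randn(1, 200, 80)                     # 200 frames of 80-dim filterbanks
audio_states, _ = speech_encoder(fbank)
audio_prefix = projector(audio_states)              # (1, 200, llm_dim)

instruction_ids = torch.randint(0, vocab, (1, 12))  # e.g. "Transcribe and translate ..."
text_embeds = token_embed(instruction_ids)

llm_inputs = torch.cat([audio_prefix, text_embeds], dim=1)
print(llm_inputs.shape)  # the concatenated sequence would be fed to the LLM
```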
arXiv Detail & Related papers (2023-12-21T05:32:49Z)
- Constituency Parsing using LLMs [22.932447078664232]
Constituency parsing is a fundamental yet unsolved natural language processing task.
We employ three linearization strategies to transform output trees into symbol sequences, such that LLMs can solve constituency parsing by generating linearized trees.
Our findings reveal insights into LLMs' performance, generalization abilities, and challenges in constituency parsing.
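One simple linearization of this kind is a bracket encoding; the toy sketch below is a generic example and not necessarily one of the three strategies used in the paper:

```python
# Toy bracket linearization of a constituency tree, so an LLM can emit the
# parse as a flat symbol sequence. This is a generic encoding, not necessarily
# one of the three strategies used in the paper above.

def linearize(tree):
    """tree = (label, children) for nonterminals, a plain str for tokens."""
    if isinstance(tree, str):
        return tree
    label, children = tree
    return "(" + label + " " + " ".join(linearize(c) for c in children) + ")"


tree = ("S", [("NP", ["She"]), ("VP", ["reads", ("NP", ["books"])])])
print(linearize(tree))
# -> (S (NP She) (VP reads (NP books)))
# An LLM prompted with the sentence is asked to generate this string,
# which is then parsed back into a tree.
```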
arXiv Detail & Related papers (2023-10-30T11:39:11Z)
- LLM-augmented Preference Learning from Natural Language [19.700169351688768]
Large Language Models (LLMs) are equipped to deal with larger context lengths.
LLMs can consistently outperform the SotA when the target text is large.
Few-shot learning yields better performance than zero-shot learning.
arXiv Detail & Related papers (2023-10-12T17:17:27Z)
- LLM-Pruner: On the Structural Pruning of Large Language Models [65.02607075556742]
Large language models (LLMs) have shown remarkable capabilities in language understanding and generation.
We tackle the compression of LLMs within the bound of two constraints: being task-agnostic and minimizing the reliance on the original training dataset.
Our method, named LLM-Pruner, adopts structural pruning that selectively removes non-critical coupled structures.
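As a generic illustration of pruning coupled structures (the importance score below is a simple magnitude heuristic, not LLM-Pruner's actual criterion), whole hidden units of a feed-forward block can be removed by dropping the same indices from both of its coupled projections:

```python
# Generic structural-pruning sketch: drop entire hidden units of an MLP block,
# removing matching rows/columns from the coupled up- and down-projections.
# The magnitude-based importance score is an assumption, not LLM-Pruner's criterion.
import torch
import torch.nn as nn

d_model, d_ff, keep = 64, 256, 192

up = nn.Linear(d_model, d_ff)     # coupled structure 1
down = nn.Linear(d_ff, d_model)   # coupled structure 2

# Score each hidden unit by the weight mass flowing through it.
importance = up.weight.norm(dim=1) * down.weight.norm(dim=0)
kept = importance.topk(keep).indices.sort().values

pruned_up = nn.Linear(d_model, keep)
pruned_down = nn.Linear(keep, d_model)
with torch.no_grad():
    pruned_up.weight.copy_(up.weight[kept])
    pruned_up.bias.copy_(up.bias[kept])
    pruned_down.weight.copy_(down.weight[:, kept])
    pruned_down.bias.copy_(down.bias)

x = torch.randn(2, d_model)
print(pruned_down(torch.relu(pruned_up(x))).shape)  # (2, 64): smaller block, same interface
```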
arXiv Detail & Related papers (2023-05-19T12:10:53Z)
- Pre-trained Language Models for Keyphrase Generation: A Thorough Empirical Study [76.52997424694767]
We present an in-depth empirical study of keyphrase extraction and keyphrase generation using pre-trained language models.
We show that PLMs have competitive high-resource performance and state-of-the-art low-resource performance.
Further results show that in-domain BERT-like PLMs can be used to build strong and data-efficient keyphrase generation models.
arXiv Detail & Related papers (2022-12-20T13:20:21Z)
- Large Language Models are Zero-Shot Reasoners [28.6899375595088]
Chain of thought (CoT) prompting is a technique for eliciting complex multi-step reasoning through step-by-step answer examples.
We show that LLMs are decent zero-shot reasoners by simply adding "Let's think step by step" before each answer.
Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms standard zero-shot prompting.
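Since the mechanism is just a prompt edit, a minimal sketch suffices (the paper's second-stage answer-extraction prompt is omitted here):

```python
# Minimal zero-shot CoT prompt construction: append the trigger phrase before
# the model's answer. The actual LLM call is omitted; the question is only an example.

def zero_shot_cot_prompt(question: str) -> str:
    return f"Q: {question}\nA: Let's think step by step."


print(zero_shot_cot_prompt(
    "A juggler has 16 balls. Half are golf balls, and half of the golf balls "
    "are blue. How many blue golf balls are there?"
))
# The model continues with step-by-step reasoning; a second prompt can then
# extract the final numeric answer.
```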
arXiv Detail & Related papers (2022-05-24T09:22:26Z)
- An Exploration of Prompt Tuning on Generative Spoken Language Model for Speech Processing Tasks [112.1942546460814]
We report the first exploration of the prompt tuning paradigm for speech processing tasks based on the Generative Spoken Language Model (GSLM).
Experimental results show that the prompt tuning technique achieves competitive performance in speech classification tasks with fewer trainable parameters than fine-tuning specialized downstream models.
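In prompt tuning, only a small set of continuous prompt vectors prepended to the frozen model's input representations is trained; the sketch below is a generic illustration with a toy backbone, not the GSLM-specific setup:

```python
# Generic prompt-tuning sketch: learn a few continuous prompt vectors while the
# backbone stays frozen. The toy Transformer below merely stands in for GSLM.
import torch
import torch.nn as nn

embed_dim, n_prompt = 128, 10
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True),
    num_layers=2,
)
for p in backbone.parameters():       # freeze the pre-trained model
    p.requires_grad = False

prompt = nn.Parameter(torch.randn(1, n_prompt, embed_dim) * 0.02)  # trainable

features = torch.randn(8, 50, embed_dim)             # e.g. frozen speech representations
inputs = torch.cat([prompt.expand(8, -1, -1), features], dim=1)
outputs = backbone(inputs)

print(sum(p.numel() for p in backbone.parameters() if p.requires_grad))  # 0
print(prompt.numel())   # only these parameters would be updated by the task loss
```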
arXiv Detail & Related papers (2022-03-31T03:26:55Z)