Sentence Simplification via Large Language Models
- URL: http://arxiv.org/abs/2302.11957v1
- Date: Thu, 23 Feb 2023 12:11:58 GMT
- Title: Sentence Simplification via Large Language Models
- Authors: Yutao Feng and Jipeng Qiang and Yun Li and Yunhao Yuan and Yi Zhu
- Abstract summary: Sentence Simplification aims to rephrase complex sentences into simpler sentences while retaining the original meaning.
Large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing tasks.
- Score: 15.07021692249856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sentence Simplification aims to rephrase complex sentences into simpler
sentences while retaining the original meaning. Large language models (LLMs) have
demonstrated the ability to perform a variety of natural language processing
tasks. However, it is not yet known whether LLMs can serve as a
high-quality sentence simplification system. In this work, we empirically
analyze the zero-/few-shot learning ability of LLMs by evaluating them on a
number of benchmark test sets. Experimental results show that LLMs outperform
state-of-the-art sentence simplification methods and are judged to be on a par
with human annotators.
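As a concrete illustration of the zero-/few-shot setup the abstract describes, the sketch below composes simplification prompts with and without in-context demonstrations. The `call_llm` function and the demonstration pair are hypothetical placeholders, not the paper's actual prompts or API.

```python
# Minimal sketch of zero-/few-shot prompting for sentence simplification.
# `call_llm` is an assumed stand-in for any LLM completion API.

FEW_SHOT_EXAMPLES = [
    ("The incident precipitated a comprehensive review of safety protocols.",
     "The incident led to a full review of safety rules."),
]

def build_prompt(complex_sentence: str, few_shot: bool = False) -> str:
    """Compose a simplification prompt, optionally with in-context examples."""
    instruction = ("Rewrite the following sentence so it is simpler "
                   "while keeping its original meaning.\n\n")
    demos = ""
    if few_shot:
        demos = "".join(
            f"Complex: {c}\nSimple: {s}\n\n" for c, s in FEW_SHOT_EXAMPLES
        )
    return f"{instruction}{demos}Complex: {complex_sentence}\nSimple:"

def simplify(sentence: str, call_llm, few_shot: bool = False) -> str:
    """Query the model with the composed prompt and return its output."""
    return call_llm(build_prompt(sentence, few_shot)).strip()
```

In the zero-shot case the model sees only the instruction and the input sentence; in the few-shot case it additionally sees complex/simple pairs as in-context demonstrations.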
Related papers
- Aligning Sentence Simplification with ESL Learner's Proficiency for Language Acquisition [11.700462697630696]
This study aims to facilitate language acquisition for English as a Second Language (ESL) learners through sentence simplification.
We propose simplifying complex sentences to levels appropriate for learners while also increasing vocabulary coverage of the target level in the simplifications.
Our method employs token-level and sentence-level rewards and iteratively trains the model on its self-generated outputs, guiding it to search for simplification hypotheses that satisfy the target attributes.
arXiv Detail & Related papers (2025-02-17T05:32:56Z)
- Redefining Simplicity: Benchmarking Large Language Models from Lexical to Document Simplification [21.727596753351072]
Text simplification (TS) refers to the process of reducing the complexity of a text while retaining its original meaning and key information.
Existing work only shows that large language models (LLMs) have outperformed supervised non-LLM-based methods on sentence simplification.
arXiv Detail & Related papers (2025-02-12T10:38:22Z)
- Progressive Document-level Text Simplification via Large Language Models [19.57555397986868]
Long document-level simplification (DS) is still relatively unexplored.
We propose a progressive simplification method (ProgDS) by hierarchically decomposing the task.
arXiv Detail & Related papers (2025-01-07T15:14:37Z)
- Potential and Limitations of LLMs in Capturing Structured Semantics: A Case Study on SRL [78.80673954827773]
Large Language Models (LLMs) play a crucial role in capturing structured semantics to enhance language understanding, improve interpretability, and reduce bias.
We propose using Semantic Role Labeling (SRL) as a fundamental task to explore LLMs' ability to extract structured semantics.
We find interesting potential: LLMs can indeed capture semantic structures, but scaling up does not always mirror this potential.
We are surprised to discover that LLMs and untrained humans make significantly overlapping errors, which account for almost 30% of all errors.
arXiv Detail & Related papers (2024-05-10T11:44:05Z)
- How Proficient Are Large Language Models in Formal Languages? An In-Depth Insight for Knowledge Base Question Answering [52.86931192259096]
Knowledge Base Question Answering (KBQA) aims to answer natural language questions based on facts in knowledge bases.
Recent works leverage the capabilities of large language models (LLMs) for logical form generation to improve performance.
arXiv Detail & Related papers (2024-01-11T09:27:50Z)
- Benchmarking Generation and Evaluation Capabilities of Large Language Models for Instruction Controllable Summarization [132.25202059478065]
We benchmark large language models (LLMs) on instruction controllable text summarization.
Our study reveals that instruction controllable text summarization remains a challenging task for LLMs.
arXiv Detail & Related papers (2023-11-15T18:25:26Z)
- Large Language Models can Contrastively Refine their Generation for Better Sentence Representation Learning [57.74233319453229]
Large language models (LLMs) have emerged as a groundbreaking technology and their unparalleled text generation capabilities have sparked interest in their application to the fundamental sentence representation learning task.
We propose MultiCSR, a multi-level contrastive sentence representation learning framework that decomposes the process of prompting LLMs to generate a corpus.
Our experiments reveal that MultiCSR enables a less advanced LLM to surpass the performance of ChatGPT, while applying it to ChatGPT itself achieves new state-of-the-art results (a minimal contrastive-loss sketch appears after this list).
arXiv Detail & Related papers (2023-10-17T03:21:43Z)
- A New Dataset and Empirical Study for Sentence Simplification in Chinese [50.0624778757462]
This paper introduces CSS, a new dataset for assessing sentence simplification in Chinese.
We collect manual simplifications from human annotators and perform data analysis to show the difference between English and Chinese sentence simplifications.
In the end, we explore whether Large Language Models can serve as high-quality Chinese sentence simplification systems by evaluating them on CSS.
arXiv Detail & Related papers (2023-06-07T06:47:34Z)
- Alleviating Over-smoothing for Unsupervised Sentence Representation [96.19497378628594]
We present a simple method, Self-Contrastive Learning (SSCL), to alleviate the over-smoothing issue.
Our proposed method is quite simple and can be easily extended to various state-of-the-art models to boost performance.
arXiv Detail & Related papers (2023-05-09T11:00:02Z)
- Enhancing Pre-trained Language Model with Lexical Simplification [41.34550924004487]
Lexical simplification (LS) is a recognized method for reducing lexical diversity.
We propose a novel approach that can effectively improve the performance of pre-trained language models (PrLMs) in text classification.
arXiv Detail & Related papers (2020-12-30T07:49:00Z)
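The MultiCSR and SSCL entries above both build on contrastive sentence representation learning. The sketch below shows a minimal SimCSE-style InfoNCE objective as a reference point; the batch size, embedding dimension, and temperature are illustrative assumptions, not either paper's actual configuration.

```python
# A minimal in-batch contrastive (InfoNCE) objective over sentence embeddings.
# Illustrative only: not the exact loss used by MultiCSR or SSCL.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor: torch.Tensor, positive: torch.Tensor,
                     temperature: float = 0.05) -> torch.Tensor:
    """Each anchor's positive is the same-index row of `positive`;
    every other row in the batch serves as an in-batch negative."""
    anchor = F.normalize(anchor, dim=-1)        # (batch, dim)
    positive = F.normalize(positive, dim=-1)    # (batch, dim)
    logits = anchor @ positive.T / temperature  # cosine-similarity matrix
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

# Example: two slightly perturbed encodings of the same sentences,
# standing in for e.g. dropout-augmented views.
emb1 = torch.randn(8, 768)
emb2 = emb1 + 0.01 * torch.randn(8, 768)
print(contrastive_loss(emb1, emb2))
```

Pulling apart non-matching pairs in this way is also one common remedy for the over-smoothing issue the SSCL entry targets, since it discourages all embeddings from collapsing toward one another.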
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.