Evaluation of Text Generation: A Survey
- URL: http://arxiv.org/abs/2006.14799v2
- Date: Tue, 18 May 2021 07:04:41 GMT
- Title: Evaluation of Text Generation: A Survey
- Authors: Asli Celikyilmaz, Elizabeth Clark, Jianfeng Gao
- Abstract summary: The paper surveys evaluation methods for natural language generation (NLG) systems that have been developed in the last few years.
We group NLG evaluation methods into three categories: (1) human-centric evaluation metrics, (2) automatic metrics that require no training, and (3) machine-learned metrics.
- Score: 107.62760642328455
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paper surveys evaluation methods for natural language generation (NLG)
systems that have been developed in the last few years. We group NLG evaluation
methods into three categories: (1) human-centric evaluation metrics, (2)
automatic metrics that require no training, and (3) machine-learned metrics.
For each category, we discuss the progress that has been made and the
challenges still being faced, with a focus on the evaluation of recently
proposed NLG tasks and neural NLG models. We then present two examples of
task-specific NLG evaluation, for automatic text summarization and long text
generation, and conclude the paper by proposing future research directions.
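As a concrete illustration of category (2), the sketch below computes a simple untrained n-gram overlap score between a generated sentence and a reference. It is a minimal toy example and an assumption of this note, not a metric proposed in the survey; in practice established BLEU or ROUGE implementations would be used, while categories (1) and (3) rely on human raters and trained models and are only noted in the comments.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_precision(hypothesis, reference, max_n=2):
    """Toy untrained overlap metric (category 2): average clipped n-gram
    precision of the hypothesis against a single reference.
    Machine-learned metrics (category 3), e.g., BERTScore or BLEURT, would
    instead rely on pretrained or fine-tuned models rather than surface overlap."""
    hyp, ref = hypothesis.lower().split(), reference.lower().split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        total = sum(hyp_ngrams.values())
        if total == 0:
            continue
        matched = sum(min(count, ref_ngrams[g]) for g, count in hyp_ngrams.items())
        precisions.append(matched / total)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Example: a generated summary scored against a human-written reference.
print(ngram_precision("the model generates a short summary",
                      "the model produces a short summary"))
```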
Related papers
- Systematic Task Exploration with LLMs: A Study in Citation Text Generation [63.50597360948099]
Large language models (LLMs) bring unprecedented flexibility in defining and executing complex, creative natural language generation (NLG) tasks.
We propose a three-component research framework that consists of systematic input manipulation, reference data, and output measurement.
We use this framework to explore citation text generation -- a popular scholarly NLP task that lacks consensus on the task definition and evaluation metric.
arXiv Detail & Related papers (2024-07-04T16:41:08Z)
- Leveraging Large Language Models for NLG Evaluation: Advances and Challenges [57.88520765782177]
Large Language Models (LLMs) have opened new avenues for assessing generated content quality, e.g., coherence, creativity, and context relevance.
We propose a coherent taxonomy for organizing existing LLM-based evaluation metrics, offering a structured framework to understand and compare these methods.
By discussing unresolved challenges, including bias, robustness, domain-specificity, and unified evaluation, this paper seeks to offer insights to researchers and advocate for fairer and more advanced NLG evaluation techniques.
arXiv Detail & Related papers (2024-01-13T15:59:09Z)
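The entry above describes LLM-based evaluation in general terms. The sketch below is one hedged illustration of the prompt-then-score pattern shared by many such metrics, not a method from that survey; the `call_llm` argument is a hypothetical callable standing in for whatever model API is available.

```python
import re
from typing import Callable

JUDGE_TEMPLATE = """You are evaluating a model-generated text.
Source document:
{source}

Generated text:
{output}

On a scale of 1 (very poor) to 5 (excellent), rate the {dimension} of the
generated text. Reply with a single integer."""

def llm_judge(source: str, output: str, dimension: str,
              call_llm: Callable[[str], str]) -> int:
    """Generic LLM-as-judge scoring for one quality dimension (e.g., coherence).

    `call_llm` is a placeholder: any function that sends a prompt to an LLM
    and returns its text response. Parsing is deliberately simple; real
    LLM-based metrics add calibration, multiple samples, or token-probability
    weighting on top of this basic pattern.
    """
    prompt = JUDGE_TEMPLATE.format(source=source, output=output, dimension=dimension)
    reply = call_llm(prompt)
    match = re.search(r"[1-5]", reply)
    return int(match.group()) if match else 0

# Usage with a stub model that always answers "4":
print(llm_judge("A long news article ...", "A two-sentence summary ...",
                "coherence", call_llm=lambda prompt: "4"))
```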
- Evaluation Metrics of Language Generation Models for Synthetic Traffic Generation Tasks [22.629816738693254]
We show that common NLG metrics, like BLEU, are not suitable for evaluating Synthetic Traffic Generation (STG).
We propose and evaluate several metrics designed to compare the generated traffic to the distribution of real user texts.
arXiv Detail & Related papers (2023-11-21T11:26:26Z)
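The STG entry above argues for corpus-level rather than sentence-level comparison. The sketch below shows one generic way to realize that idea, not a metric from the paper: it compares the unigram distribution of a generated corpus with that of real user texts via Jensen-Shannon divergence.

```python
import math
from collections import Counter

def unigram_dist(corpus):
    """Unigram probability distribution over a list of texts."""
    counts = Counter(token for text in corpus for token in text.lower().split())
    total = sum(counts.values())
    return {token: c / total for token, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two unigram distributions.
    0 means identical distributions; log(2) is the maximum (disjoint support)."""
    vocab = set(p) | set(q)
    m = {t: 0.5 * (p.get(t, 0.0) + q.get(t, 0.0)) for t in vocab}
    def kl(a, b):
        return sum(a[t] * math.log(a[t] / b[t]) for t in vocab if t in a and a[t] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy corpora standing in for real user queries and synthetic traffic.
real = ["where is my order", "cancel my order", "track my package"]
generated = ["where is my order", "please cancel order", "track package status"]
print(js_divergence(unigram_dist(real), unigram_dist(generated)))
```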
- G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment [64.01972723692587]
We present G-Eval, a framework that uses large language models with chain-of-thought (CoT) reasoning and a form-filling paradigm to assess the quality of NLG outputs.
We show that G-Eval with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human judgments on the summarization task, outperforming all previous methods by a large margin.
arXiv Detail & Related papers (2023-03-29T12:46:54Z)
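G-Eval's headline number above is a Spearman correlation between metric scores and human judgments. The sketch below shows how such a meta-evaluation is typically computed; the scores and ratings are made-up placeholder values, not data from the paper, and `scipy` is assumed to be installed.

```python
from scipy.stats import spearmanr

# Hypothetical per-example scores from an automatic metric and the
# corresponding human quality ratings (placeholder values, not paper data).
metric_scores = [0.72, 0.41, 0.88, 0.35, 0.66, 0.59]
human_ratings = [4, 2, 5, 1, 4, 3]

# Spearman's rho measures how well the metric ranks outputs the same way
# humans do; values near 1 indicate strong agreement on the ranking.
rho, p_value = spearmanr(metric_scores, human_ratings)
print(f"Spearman correlation: {rho:.3f} (p = {p_value:.3f})")
```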
- Towards a Unified Multi-Dimensional Evaluator for Text Generation [101.47008809623202]
We propose UniEval, a unified multi-dimensional evaluator for Natural Language Generation (NLG).
We re-frame NLG evaluation as a Boolean Question Answering (QA) task, and by guiding the model with different questions, we can use one evaluator to evaluate along multiple dimensions.
Experiments on three typical NLG tasks show that UniEval correlates substantially better with human judgments than existing metrics.
arXiv Detail & Related papers (2022-10-13T17:17:03Z)
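UniEval's key idea above is to recast each evaluation dimension as a yes/no question answered by a single model. The sketch below conveys that framing only; the question wording and the `yes_probability` callable are illustrative assumptions, not the authors' actual templates or model.

```python
from typing import Callable, Dict

# Illustrative yes/no questions, one per evaluation dimension. The real
# UniEval templates differ; these only convey the Boolean-QA framing.
DIMENSION_QUESTIONS = {
    "coherence":   "Is this a coherent summary of the document?",
    "consistency": "Is this summary factually consistent with the document?",
    "fluency":     "Is this a fluent and grammatical paragraph?",
    "relevance":   "Does this summary capture the key points of the document?",
}

def evaluate_all_dimensions(source: str, output: str,
                            yes_probability: Callable[[str], float]) -> Dict[str, float]:
    """Score one output on every dimension with a single evaluator.

    `yes_probability` is a placeholder for a model that returns the
    probability of answering "yes" given the question, the candidate
    output, and the source document.
    """
    scores = {}
    for dimension, question in DIMENSION_QUESTIONS.items():
        prompt = f"question: {question}\nsummary: {output}\ndocument: {source}"
        scores[dimension] = yes_probability(prompt)
    return scores

# Usage with a stub evaluator that returns a constant probability:
print(evaluate_all_dimensions("A long source document ...",
                              "A candidate summary ...",
                              yes_probability=lambda prompt: 0.5))
```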
- Repairing the Cracked Foundation: A Survey of Obstacles in Evaluation Practices for Generated Text [23.119724118572538]
Evaluation practices in natural language generation (NLG) have many known flaws, but improved evaluation approaches are rarely widely adopted.
This paper surveys the issues with human and automatic model evaluations and with commonly used datasets in NLG.
arXiv Detail & Related papers (2022-02-14T18:51:07Z)
- The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics [66.96150429230035]
We introduce GEM, a living benchmark for natural language Generation (NLG), its Evaluation, and Metrics.
Regular updates to the benchmark will help NLG research become more multilingual and evolve the challenge alongside models.
arXiv Detail & Related papers (2021-02-02T18:42:05Z)
- A Survey of Evaluation Metrics Used for NLG Systems [19.20118684502313]
The success of Deep Learning has created a surge of interest in a wide range of Natural Language Generation (NLG) tasks.
Unlike classification tasks, automatically evaluating NLG systems is in itself a huge challenge.
The expanding number of NLG models and the shortcomings of current metrics have led to a rapid surge in the number of evaluation metrics proposed since 2014.
arXiv Detail & Related papers (2020-08-27T09:25:05Z)