All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text
- URL: http://arxiv.org/abs/2107.00061v1
- Date: Wed, 30 Jun 2021 19:00:25 GMT
- Title: All That's 'Human' Is Not Gold: Evaluating Human Evaluation of Generated Text
- Authors: Elizabeth Clark, Tal August, Sofia Serrano, Nikita Haduong, Suchin
Gururangan, Noah A. Smith
- Abstract summary: We run a study assessing non-experts' ability to distinguish between human- and machine-authored text.
We find that, without training, evaluators distinguished between GPT3- and human-authored text at random chance level.
- Score: 46.260544251940125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human evaluations are typically considered the gold standard in natural
language generation, but as models' fluency improves, how well can evaluators
detect and judge machine-generated text? We run a study assessing non-experts'
ability to distinguish between human- and machine-authored text (GPT2 and GPT3)
in three domains (stories, news articles, and recipes). We find that, without
training, evaluators distinguished between GPT3- and human-authored text at
random chance level. We explore three approaches for quickly training
evaluators to better identify GPT3-authored text (detailed instructions,
annotated examples, and paired examples) and find that while evaluators'
accuracy improved up to 55%, it did not significantly improve across the three
domains. Given the inconsistent results across text domains and the often
contradictory reasons evaluators gave for their judgments, we examine the role
untrained human evaluations play in NLG evaluation and provide recommendations
to NLG researchers for improving human evaluations of text generated from
state-of-the-art models.
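
For a rough sense of these numbers, here is a minimal sketch (not part of the study's analysis) of an exact one-sided binomial test against the 50% chance baseline; the trial counts below are illustrative assumptions, not the paper's sample sizes.

```python
# Minimal sketch: is an observed detection accuracy distinguishable from
# the 50% chance baseline? Uses an exact one-sided binomial test.
# Trial counts are illustrative assumptions, not the study's numbers.
from math import comb

def binomial_p_value(successes: int, trials: int, p_null: float = 0.5) -> float:
    """P(X >= successes) under Binomial(trials, p_null)."""
    return sum(
        comb(trials, k) * p_null**k * (1 - p_null) ** (trials - k)
        for k in range(successes, trials + 1)
    )

# Untrained evaluators at ~50% accuracy: indistinguishable from chance.
print(binomial_p_value(successes=50, trials=100))  # ~0.54

# Trained evaluators at ~55% accuracy: still weak evidence at this
# (hypothetical) sample size.
print(binomial_p_value(successes=55, trials=100))  # ~0.18
```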
Related papers
- X-Eval: Generalizable Multi-aspect Text Evaluation via Augmented Instruction Tuning with Auxiliary Evaluation Aspects [32.50977115108103]
We introduce X-Eval, a two-stage instruction tuning framework to evaluate text along both seen aspects and unseen aspects customized by end users.
X-Eval consists of two learning stages: the vanilla instruction tuning stage that improves the model's ability to follow evaluation instructions, and an enhanced instruction tuning stage that exploits the connections between fine-grained evaluation aspects to better assess text quality.
arXiv Detail & Related papers (2023-11-15T09:01:55Z)
- INSTRUCTSCORE: Explainable Text Generation Evaluation with Finegrained Feedback [80.57617091714448]
We present InstructScore, an explainable evaluation metric for text generation.
We fine-tune a text evaluation metric based on LLaMA, producing a score for generated text and a human readable diagnostic report.
arXiv Detail & Related papers (2023-05-23T17:27:22Z)
- Human-like Summarization Evaluation with ChatGPT [38.39767193442397]
ChatGPT was able to complete annotations relatively smoothly using Likert scale scoring, pairwise comparison, Pyramid, and binary factuality evaluation.
It outperformed commonly used automatic evaluation metrics on some datasets.
arXiv Detail & Related papers (2023-04-05T16:17:32Z)
- Toward Verifiable and Reproducible Human Evaluation for Text-to-Image Generation [35.8129864412223]
This paper proposes a standardized and well-defined human evaluation protocol.
We experimentally show that the current automatic measures are incompatible with human perception.
We provide insights for designing human evaluation experiments that are reliable and conclusive.
arXiv Detail & Related papers (2023-04-04T14:14:16Z)
- Exploring the Use of Large Language Models for Reference-Free Text Quality Evaluation: An Empirical Study [63.27346930921658]
ChatGPT is capable of evaluating text quality effectively from various perspectives without reference.
The Explicit Score, which prompts ChatGPT to generate a numeric score measuring text quality, is the most effective and reliable of the three approaches explored.
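
As a small illustration of the Explicit Score idea (not the paper's actual prompt or model interface), one can ask an LLM for a numeric quality score and parse the number from its reply; the prompt wording and the `ask_llm` hook below are assumptions.

```python
# Sketch of the Explicit Score idea: ask an LLM for a numeric quality
# score and parse the number from its reply. The prompt wording and the
# `ask_llm` hook are illustrative assumptions, not the paper's exact setup.
import re
from typing import Callable, Optional

PROMPT = (
    "Score the quality of the following text on a scale from 1 (poor) "
    "to 100 (excellent). Reply with only the number.\n\nText:\n{text}"
)

def explicit_score(text: str, ask_llm: Callable[[str], str]) -> Optional[float]:
    """Reference-free quality score parsed from the model's reply."""
    reply = ask_llm(PROMPT.format(text=text))
    match = re.search(r"\d+(?:\.\d+)?", reply)
    return float(match.group()) if match else None
```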
arXiv Detail & Related papers (2023-04-03T05:29:58Z)
- Revisiting the Gold Standard: Grounding Summarization Evaluation with Robust Human Evaluation [136.16507050034755]
Existing human evaluation studies for summarization either exhibit a low inter-annotator agreement or have insufficient scale.
We propose a modified summarization salience protocol, Atomic Content Units (ACUs), which is based on fine-grained semantic units.
We curate the Robust Summarization Evaluation (RoSE) benchmark, a large human evaluation dataset consisting of 22,000 summary-level annotations over 28 top-performing systems.
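
To illustrate the fine-grained-unit idea, here is a minimal ACU-style recall sketch: the fraction of a reference's annotated atomic content units a judge marked as present in a system summary. RoSE's exact protocol and normalization may differ, and the example units are invented.

```python
# Minimal sketch of an ACU-style score: the fraction of annotated atomic
# content units judged present in a system summary. RoSE's exact protocol
# and normalization may differ; the example ACUs below are invented.
from typing import Set

def acu_score(judged_present: Set[str], all_units: Set[str]) -> float:
    """Recall over atomic content units for one summary."""
    if not all_units:
        return 0.0
    return len(judged_present & all_units) / len(all_units)

reference_acus = {
    "the storm hit the coast on Friday",
    "thousands of homes lost power",
    "no injuries were reported",
}
judged_present = {
    "the storm hit the coast on Friday",
    "no injuries were reported",
}
print(acu_score(judged_present, reference_acus))  # 0.666...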
arXiv Detail & Related papers (2022-12-15T17:26:05Z)
- Towards a Unified Multi-Dimensional Evaluator for Text Generation [101.47008809623202]
We propose UniEval, a unified multi-dimensional evaluator for Natural Language Generation (NLG).
We re-frame NLG evaluation as a Boolean Question Answering (QA) task, and by guiding the model with different questions, we can use one evaluator to evaluate from multiple dimensions.
Experiments on three typical NLG tasks show that UniEval correlates substantially better with human judgments than existing metrics.
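
A rough sketch of this Boolean-QA framing: one evaluator scores several dimensions by answering a different yes/no question per dimension. The question templates and the `yes_probability` scorer below are placeholders, not UniEval's actual prompts or model.

```python
# Rough sketch of the Boolean-QA framing: one evaluator, a different
# guiding yes/no question per dimension. Question templates and the
# `yes_probability` scorer are placeholders, not UniEval's actual setup.
from typing import Callable, Dict

QUESTIONS = {
    "coherence":   "Is this a coherent summary of the document?",
    "consistency": "Is this summary consistent with the document?",
    "fluency":     "Is this a fluent piece of text?",
    "relevance":   "Does this summary cover the important content of the document?",
}

def evaluate(output: str, source: str,
             yes_probability: Callable[[str], float]) -> Dict[str, float]:
    """Score one generated output on every dimension with a single evaluator."""
    scores = {}
    for dim, question in QUESTIONS.items():
        qa_input = f"question: {question} </s> output: {output} </s> source: {source}"
        scores[dim] = yes_probability(qa_input)  # P("yes") from the evaluator
    return scores
```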
arXiv Detail & Related papers (2022-10-13T17:17:03Z)
- Evaluation of Text Generation: A Survey [107.62760642328455]
The paper surveys evaluation methods of natural language generation systems that have been developed in the last few years.
We group NLG evaluation methods into three categories: (1) human-centric evaluation metrics, (2) automatic metrics that require no training, and (3) machine-learned metrics.
arXiv Detail & Related papers (2020-06-26T04:52:48Z)