Automatic Construction of Evaluation Suites for Natural Language
Generation Datasets
- URL: http://arxiv.org/abs/2106.09069v1
- Date: Wed, 16 Jun 2021 18:20:58 GMT
- Title: Automatic Construction of Evaluation Suites for Natural Language
Generation Datasets
- Authors: Simon Mille, Kaustubh D. Dhole, Saad Mahamood, Laura
Perez-Beltrachini, Varun Gangal, Mihir Kale, Emiel van Miltenburg, Sebastian
Gehrmann
- Abstract summary: We develop a framework to generate controlled perturbations and identify subsets in text-to-scalar, text-to-text, or data-to-text settings.
We propose an evaluation suite made of 80 challenge sets, demonstrate the kinds of analyses that it enables and shed light onto the limits of current generation models.
- Score: 17.13484629172643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning approaches applied to NLP are often evaluated by summarizing
their performance in a single number, for example accuracy. Since most test
sets are constructed as an i.i.d. sample from the overall data, this approach
overly simplifies the complexity of language and encourages overfitting to the
head of the data distribution. As such, rare language phenomena or text about
underrepresented groups are not equally included in the evaluation. To
encourage more in-depth model analyses, researchers have proposed the use of
multiple test sets, also called challenge sets, that assess specific
capabilities of a model. In this paper, we develop a framework based on this
idea which is able to generate controlled perturbations and identify subsets in
text-to-scalar, text-to-text, or data-to-text settings. By applying this
framework to the GEM generation benchmark, we propose an evaluation suite made
of 80 challenge sets, demonstrate the kinds of analyses that it enables and
shed light onto the limits of current generation models.
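As a concrete illustration of a controlled perturbation of the kind the framework generates, the sketch below builds a challenge set by applying a character-swap "typo" transformation to every input while keeping references fixed. The function names and the specific transformation are hypothetical illustrations, not the paper's actual implementation.

```python
import random

def perturb_typos(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Introduce character-swap typos into a fraction of words.

    A hypothetical perturbation in the spirit of the paper's
    controlled transformations; the real suite may differ.
    """
    rng = random.Random(seed)
    out = []
    for w in text.split():
        if len(w) > 3 and rng.random() < rate:
            i = rng.randrange(len(w) - 1)
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]  # swap adjacent chars
        out.append(w)
    return " ".join(out)

def build_challenge_set(examples, perturbation):
    """Apply one perturbation to every input, keeping references fixed,
    so any score drop is attributable to the perturbation alone."""
    return [{"input": perturbation(ex["input"]),
             "references": ex["references"]}
            for ex in examples]
```

Because only the inputs change, comparing a model's score on the original test set against the perturbed one isolates its robustness to that single phenomenon.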
Related papers
- Zero-shot LLM-guided Counterfactual Generation for Text [15.254775341371364]
We propose a structured way to utilize large language models (LLMs) as general purpose counterfactual example generators.
We demonstrate the efficacy of LLMs as zero-shot counterfactual generators in evaluating and explaining black-box NLP models.
arXiv Detail & Related papers (2024-05-08T03:57:45Z)
- Exploring Precision and Recall to assess the quality and diversity of LLMs [82.21278402856079]
We introduce a novel evaluation framework for Large Language Models (LLMs) such as Llama-2 and Mistral.
This approach allows for a nuanced assessment of the quality and diversity of generated text without the need for aligned corpora.
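Distribution-level precision and recall for generated text are commonly estimated with k-nearest-neighbour manifolds over embeddings: precision asks how many generated samples fall inside the real-data manifold, recall asks the converse. The sketch below shows that standard estimator on toy vectors; the paper's exact estimator and embedding choice may differ.

```python
import math

def knn_radius(points, k):
    """Distance from each point to its k-th nearest neighbour
    within the same set (defines the local manifold radius)."""
    radii = []
    for i, p in enumerate(points):
        d = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        radii.append(d[k - 1])
    return radii

def precision_recall(real, generated, k=1):
    """k-NN manifold precision/recall over embedding vectors.

    precision: fraction of generated samples covered by the real manifold.
    recall:    fraction of real samples covered by the generated manifold.
    """
    r_real = knn_radius(real, k)
    r_gen = knn_radius(generated, k)
    precision = sum(
        any(math.dist(g, x) <= r_real[i] for i, x in enumerate(real))
        for g in generated) / len(generated)
    recall = sum(
        any(math.dist(x, g) <= r_gen[j] for j, g in enumerate(generated))
        for x in real) / len(real)
    return precision, recall
```

In practice the points would be sentence embeddings of real and model-generated text; no aligned corpus is needed, which matches the summary's claim.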
arXiv Detail & Related papers (2024-02-16T13:53:26Z)
- Generative Judge for Evaluating Alignment [84.09815387884753]
We propose a generative judge with 13B parameters, Auto-J, designed to address these challenges.
Our model is trained on user queries and LLM-generated responses under massive real-world scenarios.
Experimentally, Auto-J outperforms a series of strong competitors, including both open-source and closed-source models.
arXiv Detail & Related papers (2023-10-09T07:27:15Z)
- Short Answer Grading Using One-shot Prompting and Text Similarity Scoring Model [2.14986347364539]
We developed an automated short answer grading model that provided both analytic scores and holistic scores.
The accuracy and quadratic weighted kappa of our model were 0.67 and 0.71 on a subset of the publicly available ASAG dataset.
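Quadratic weighted kappa, the agreement statistic reported above, penalizes disagreements by the squared distance between ratings. A minimal self-contained implementation of the standard definition (not the authors' code) is:

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic weighted kappa between two lists of integer ratings.

    1.0 = perfect agreement, 0.0 = chance-level, negative = worse
    than chance. Standard definition, shown for illustration only.
    """
    assert len(rater_a) == len(rater_b)
    n = max_rating - min_rating + 1
    total = len(rater_a)
    # Observed confusion matrix of rating pairs
    obs = [[0.0] * n for _ in range(n)]
    for a, b in zip(rater_a, rater_b):
        obs[a - min_rating][b - min_rating] += 1
    # Marginal histograms give the chance-expected matrix
    hist_a = Counter(a - min_rating for a in rater_a)
    hist_b = Counter(b - min_rating for b in rater_b)
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = ((i - j) ** 2) / ((n - 1) ** 2)  # quadratic weight
            num += w * obs[i][j]
            den += w * hist_a[i] * hist_b[j] / total
    return 1.0 - num / den
```

A score of 0.71, as reported, therefore indicates substantial but imperfect agreement between the model's scores and the human ratings.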
arXiv Detail & Related papers (2023-05-29T22:05:29Z)
- Large Language Models are Diverse Role-Players for Summarization Evaluation [82.31575622685902]
A document summary's quality can be assessed by human annotators on various criteria, both objective ones like grammar and correctness, and subjective ones like informativeness, succinctness, and appeal.
Most automatic evaluation methods like BLEU/ROUGE may not be able to adequately capture the above dimensions.
We propose a new evaluation framework based on LLMs, which provides a comprehensive evaluation framework by comparing generated text and reference text from both objective and subjective aspects.
arXiv Detail & Related papers (2023-03-27T10:40:59Z)
- Sentiment Analysis on Brazilian Portuguese User Reviews [0.0]
This work analyzes the predictive performance of a range of document embedding strategies, assuming the polarity as the system outcome.
This analysis includes five sentiment analysis datasets in Brazilian Portuguese, unified in a single dataset, and a reference partitioning in training, testing, and validation sets, both made publicly available through a digital repository.
arXiv Detail & Related papers (2021-12-10T11:18:26Z)
- A Single Example Can Improve Zero-Shot Data Generation [7.237231992155901]
Sub-tasks of intent classification require extensive and flexible datasets for experiments and evaluation.
We propose to use text generation methods to gather datasets.
We explore two approaches to generating task-oriented utterances.
arXiv Detail & Related papers (2021-08-16T09:43:26Z)
- Exemplar-Controllable Paraphrasing and Translation using Bitext [57.92051459102902]
We adapt models from prior work to learn solely from bilingual text (bitext).
Our single proposed model can perform four tasks: controlled paraphrase generation in both languages and controlled machine translation in both language directions.
arXiv Detail & Related papers (2020-10-12T17:02:50Z)
- A Revised Generative Evaluation of Visual Dialogue [80.17353102854405]
We propose a revised evaluation scheme for the VisDial dataset.
We measure consensus between answers generated by the model and a set of relevant answers.
We release these sets and code for the revised evaluation scheme as DenseVisDial.
arXiv Detail & Related papers (2020-04-20T13:26:45Z)
- ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension [53.037401638264235]
We present an evaluation server, ORB, that reports performance on seven diverse reading comprehension datasets.
The evaluation server places no restrictions on how models are trained, so it is a suitable test bed for exploring training paradigms and representation learning.
arXiv Detail & Related papers (2019-12-29T07:27:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.