When Automated Assessment Meets Automated Content Generation: Examining Text Quality in the Era of GPTs
- URL: http://arxiv.org/abs/2309.14488v1
- Date: Mon, 25 Sep 2023 19:32:18 GMT
- Title: When Automated Assessment Meets Automated Content Generation: Examining Text Quality in the Era of GPTs
- Authors: Marialena Bevilacqua, Kezia Oketch, Ruiyang Qin, Will Stamey, Xinyuan Zhang, Yi Gan, Kai Yang, Ahmed Abbasi
- Abstract summary: We empirically assess the differences in how ML-based scoring models trained on human content assess the quality of content generated by humans versus GPTs.
Results of our benchmark analysis reveal that transformer pretrained language models (PLMs) more accurately score human essay quality as compared to CNN/RNN and feature-based ML methods.
- Score: 5.952677937197871
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of machine learning (ML) models to assess and score textual data has
become increasingly pervasive in an array of contexts including natural
language processing, information retrieval, search and recommendation, and
credibility assessment of online content. A significant disruption at the
intersection of ML and text are text-generating large-language models such as
generative pre-trained transformers (GPTs). We empirically assess the
differences in how ML-based scoring models trained on human content assess the
quality of content generated by humans versus GPTs. To do so, we propose an
analysis framework that encompasses essay-scoring ML models, human- and
ML-generated essays, and a statistical model that parsimoniously considers the
impact of the type of respondent, prompt genre, and the ML model used for
assessment. A rich testbed is utilized that encompasses 18,460
human-generated and GPT-based essays. Results of our benchmark analysis reveal
that transformer pretrained language models (PLMs) more accurately score human
essay quality as compared to CNN/RNN and feature-based ML methods.
Interestingly, we find that the transformer PLMs tend to score GPT-generated
text 10-15% higher on average, relative to human-authored documents.
Conversely, traditional deep learning and feature-based ML models score human
text considerably higher. Further analysis reveals that although the
transformer PLMs are exclusively fine-tuned on human text, they more
prominently attend to certain tokens appearing only in GPT-generated text,
possibly due to familiarity/overlap in pre-training. Our framework and results
have implications for text classification settings where automated scoring of
text is likely to be disrupted by generative AI.
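As a rough illustration of the scoring setup described in the abstract, the sketch below scores essays with a transformer PLM carrying a single regression output and compares the mean predicted quality for human versus GPT-generated text. The model name, the placeholder essays, and the 512-token truncation are assumptions, not the authors' exact pipeline; in practice the regression head would first be fine-tuned on human-scored essays.

```python
# Minimal sketch (not the authors' exact pipeline): score essays with a
# transformer PLM regression head and compare mean predicted quality for
# human vs. GPT-generated essays. The head would be fine-tuned on
# human-scored essays before being used for scoring.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # assumed; any transformer PLM could stand in
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
model.eval()

def score_essays(essays):
    """Return a predicted quality score for each essay."""
    scores = []
    for text in essays:
        inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
        with torch.no_grad():
            scores.append(model(**inputs).logits.squeeze().item())
    return scores

human_essays = ["A human-written essay ..."]                     # placeholder data
gpt_essays = ["A GPT-generated essay on the same prompt ..."]    # placeholder data

human_mean = sum(score_essays(human_essays)) / len(human_essays)
gpt_mean = sum(score_essays(gpt_essays)) / len(gpt_essays)
print(f"mean score gap (GPT - human): {gpt_mean - human_mean:.3f}")
```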
Related papers
- Distinguishing Chatbot from Human [1.1249583407496218]
We develop a new dataset consisting of more than 750,000 human-written paragraphs.
Based on this dataset, we apply Machine Learning (ML) techniques to determine the origin of text.
Our proposed solutions offer high classification accuracy and serve as useful tools for textual analysis.
arXiv Detail & Related papers (2024-08-03T13:18:04Z)
- A Simple yet Efficient Ensemble Approach for AI-generated Text Detection [0.5840089113969194]
Large Language Models (LLMs) have demonstrated remarkable capabilities in generating text that closely resembles human writing.
It is essential to build automated approaches capable of distinguishing between artificially generated text and human-authored text.
We propose a simple yet efficient solution by ensembling predictions from multiple constituent LLMs.
arXiv Detail & Related papers (2023-11-06T13:11:02Z)
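A minimal sketch of the ensembling idea in the entry above; the constituent detectors, their weights, and the toy stand-ins are assumptions rather than the paper's actual models.

```python
# Minimal sketch of ensembling AI-text detectors (detectors and weights are
# assumptions, not the paper's exact constituent models).
import numpy as np

def ensemble_ai_text_probability(text, detectors, weights=None):
    """Average the 'AI-generated' probability returned by several detectors.

    Each detector is a callable: text -> probability in [0, 1].
    """
    probs = np.array([detect(text) for detect in detectors])
    if weights is None:
        weights = np.ones(len(detectors)) / len(detectors)  # uniform weighting
    return float(np.dot(np.asarray(weights), probs))

# Usage with toy stand-in detectors (real ones would wrap LLM-based classifiers):
detectors = [lambda t: 0.8, lambda t: 0.6, lambda t: 0.7]
print(ensemble_ai_text_probability("Some candidate passage.", detectors))  # ~0.7
```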
- Studying the impacts of pre-training using ChatGPT-generated text on downstream tasks [0.0]
Our research aims to investigate the influence of artificial text in the pre-training phase of language models.
We conducted a comparative analysis between a language model (RoBERTa) pre-trained on CNN/DailyMail news articles and one pre-trained on ChatGPT-generated text based on the same articles.
We demonstrate that the utilization of artificial text during pre-training does not have a significant impact on either the performance of the models in downstream tasks or their gender bias.
arXiv Detail & Related papers (2023-09-02T12:56:15Z)
- The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation [93.01964988474755]
AutoMQM is a prompting technique which asks large language models to identify and categorize errors in translations.
We study the impact of labeled data through in-context learning and finetuning.
We then evaluate AutoMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores.
arXiv Detail & Related papers (2023-08-14T17:17:21Z)
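For illustration only, a prompt in the spirit of the AutoMQM entry above; the exact wording, error taxonomy, and output format are assumptions, not the paper's template.

```python
# Illustrative prompt construction for LLM-based fine-grained MT evaluation
# (wording and categories are assumptions, not AutoMQM's exact template).
AUTOMQM_STYLE_PROMPT = """You are an expert translation evaluator.
Source ({src_lang}): {source}
Translation ({tgt_lang}): {translation}

List each error in the translation. For every error give:
- the error span,
- a category (e.g. accuracy/mistranslation, fluency/grammar, terminology),
- a severity (major or minor).
Return the list as JSON."""

prompt = AUTOMQM_STYLE_PROMPT.format(
    src_lang="German", tgt_lang="English",
    source="Der Hund schläft.", translation="The dog is sleep.",
)
print(prompt)  # send to an LLM (the paper evaluates with PaLM-2) and parse the errors
```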
- The Imitation Game: Detecting Human and AI-Generated Texts in the Era of ChatGPT and BARD [3.2228025627337864]
We introduce a novel dataset of human-written and AI-generated texts in different genres.
We employ several machine learning models to classify the texts.
Results demonstrate the efficacy of these models in discerning between human and AI-generated text.
arXiv Detail & Related papers (2023-07-22T21:00:14Z)
- Smaller Language Models are Better Black-box Machine-Generated Text Detectors [56.36291277897995]
Small and partially-trained models are better universal text detectors.
We find that whether the detector and generator were trained on the same data is not critically important to the detection success.
For instance, the OPT-125M model has an AUC of 0.81 in detecting ChatGPT generations, whereas a larger model from the GPT family, GPT-J-6B, has an AUC of 0.45.
arXiv Detail & Related papers (2023-05-17T00:09:08Z)
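A rough sketch of the black-box detection setup in the entry above: score each text by its mean log-likelihood under a small LM (OPT-125M) and measure detection AUC. The scoring rule and the toy labeled examples are assumptions for illustration.

```python
# Minimal sketch of black-box machine-generated text detection with a small LM.
# Assumption: higher mean log-likelihood under the LM => more likely machine text.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from sklearn.metrics import roc_auc_score

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
model.eval()

def mean_log_likelihood(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # negative mean cross-entropy = mean log-likelihood

texts = ["An example human-written paragraph ...", "An example ChatGPT paragraph ..."]
labels = [0, 1]  # 0 = human, 1 = machine-generated (toy labels)
scores = [mean_log_likelihood(t) for t in texts]
print(roc_auc_score(labels, scores))  # detection AUC on the toy set
```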
- G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment [64.01972723692587]
We present G-Eval, a framework that uses large language models with chain-of-thought (CoT) reasoning and a form-filling paradigm to assess the quality of NLG outputs.
We show that G-Eval with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human judgments on the summarization task, outperforming all previous methods by a large margin.
arXiv Detail & Related papers (2023-03-29T12:46:54Z)
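A minimal check of the evaluator-human agreement metric that the G-Eval entry above reports (Spearman correlation); the score lists are made-up placeholders.

```python
# Measuring agreement between LLM-assigned scores and human judgments
# via Spearman correlation (the numbers below are placeholders).
from scipy.stats import spearmanr

human_ratings = [4.0, 2.5, 3.0, 5.0, 1.5]   # placeholder human judgments
llm_scores    = [3.8, 2.9, 3.1, 4.7, 2.0]   # placeholder LLM-assigned scores

corr, p_value = spearmanr(human_ratings, llm_scores)
print(f"Spearman correlation: {corr:.3f} (p={p_value:.3f})")
```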
- Large Language Models are Diverse Role-Players for Summarization Evaluation [82.31575622685902]
A document summary's quality can be assessed by human annotators on various criteria, both objective ones like grammar and correctness, and subjective ones like informativeness, succinctness, and appeal.
Most automatic evaluation methods, such as BLEU/ROUGE, may not be able to adequately capture these dimensions.
We propose a new LLM-based evaluation framework that compares generated text and reference text from both objective and subjective aspects.
arXiv Detail & Related papers (2023-03-27T10:40:59Z)
- How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN [63.79300884115027]
Current language models can generate high-quality text.
Are they simply copying text they have seen before, or have they learned generalizable linguistic abstractions?
We introduce RAVEN, a suite of analyses for assessing the novelty of generated text.
arXiv Detail & Related papers (2021-11-18T04:07:09Z)
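An illustrative n-gram novelty check in the spirit of the RAVEN entry above; the tokenization, corpus, and choice of n are simplifying assumptions, and the real suite also analyzes syntactic novelty, not just n-gram overlap with the training data.

```python
# Illustrative n-gram novelty check: what fraction of generated n-grams
# never appear in the training text? (Simplified relative to RAVEN.)
def ngram_list(text, n):
    tokens = text.split()
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_novelty(generated, training_corpus, n=4):
    """Fraction of n-grams in `generated` that never occur in the training corpus."""
    train_ngrams = set(ngram_list(training_corpus, n))
    gen_ngrams = ngram_list(generated, n)
    if not gen_ngrams:
        return 0.0
    return sum(g not in train_ngrams for g in gen_ngrams) / len(gen_ngrams)

print(ngram_novelty("the cat sat on the mat today",
                    "the cat sat on the mat", n=4))  # 0.25: one novel 4-gram of four
```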
- TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing [73.16475763422446]
We propose a multilingual robustness evaluation platform for NLP tasks (TextFlint).
It incorporates universal text transformation, task-specific transformation, adversarial attack, subpopulation, and their combinations to provide comprehensive robustness analysis.
TextFlint generates complete analytical reports as well as targeted augmented data to address the shortcomings of the model's robustness.
arXiv Detail & Related papers (2021-03-21T17:20:38Z)
- Automating Text Naturalness Evaluation of NLG Systems [0.0]
We present an attempt to automate the evaluation of text naturalness.
Instead of relying on human participants for scoring or labeling the text samples, we propose to automate the process.
We analyze the text probability fractions and observe how they are influenced by the size of the generative and discriminative models involved in the process.
arXiv Detail & Related papers (2020-06-23T18:48:33Z)
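The summary above does not spell out how the "text probability fractions" are computed, so the following is only one plausible reading, offered purely as an illustration: take per-token probabilities from a generative model and report the fraction above a threshold as a naturalness proxy.

```python
# Illustrative guess at a probability-fraction style naturalness proxy
# (the paper's exact definition is not given in the summary above).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def token_probability_fraction(text, threshold=0.01):
    """Fraction of tokens to which the LM assigns probability >= threshold."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    probs = torch.softmax(logits[0, :-1], dim=-1)                 # next-token distributions
    observed = probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)  # prob of each actual token
    return float((observed >= threshold).float().mean())

print(token_probability_fraction("The quick brown fox jumps over the lazy dog."))
```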