GPT-generated Text Detection: Benchmark Dataset and Tensor-based
Detection Method
- URL: http://arxiv.org/abs/2403.07321v1
- Date: Tue, 12 Mar 2024 05:15:21 GMT
- Title: GPT-generated Text Detection: Benchmark Dataset and Tensor-based
Detection Method
- Authors: Zubair Qazi, William Shiao, and Evangelos E. Papalexakis
- Abstract summary: We present GPT Reddit dataset (GRiD), a novel Generative Pretrained Transformer (GPT)-generated text detection dataset.
The dataset consists of context-prompt pairs based on Reddit with human-generated and ChatGPT-generated responses.
To showcase the dataset's utility, we benchmark several detection methods on it, demonstrating their efficacy in distinguishing between human and ChatGPT-generated responses.
- Score: 4.802604527842989
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As natural language models like ChatGPT become increasingly prevalent in
applications and services, the need for robust and accurate methods to detect
their output is of paramount importance. In this paper, we present GPT Reddit
Dataset (GRiD), a novel Generative Pretrained Transformer (GPT)-generated text
detection dataset designed to assess the performance of detection models in
identifying generated responses from ChatGPT. The dataset consists of a diverse
collection of context-prompt pairs based on Reddit, with human-generated and
ChatGPT-generated responses. We provide an analysis of the dataset's
characteristics, including linguistic diversity, context complexity, and
response quality. To showcase the dataset's utility, we benchmark several
detection methods on it, demonstrating their efficacy in distinguishing between
human and ChatGPT-generated responses. This dataset serves as a resource for
evaluating and advancing detection techniques in the context of ChatGPT and
contributes to the ongoing efforts to ensure responsible and trustworthy
AI-driven communication on the internet. Finally, we propose GpTen, a novel
tensor-based GPT text detection method that is semi-supervised in nature since
it only has access to human-generated text and performs on par with
fully-supervised baselines.
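
The abstract does not spell out how GpTen is constructed, so the following is only a minimal sketch of the general idea it describes: a detector fitted on human-generated text alone that flags texts which deviate from the learned model. In this sketch the paper's tensor decomposition is swapped for a TF-IDF plus truncated-SVD matrix factorization, and a text is flagged when its reconstruction error exceeds a threshold calibrated on the human training data. The function names, the rank, and the 95th-percentile threshold are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of a "human-text-only" (semi-supervised) detector.
# NOT the paper's GpTen method: the tensor decomposition is replaced here by
# a TF-IDF + truncated-SVD matrix factorization for illustration only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD


def _reconstruction_error(X, svd):
    """L2 distance between each document vector and its low-rank reconstruction."""
    Z = svd.transform(X)                      # project into the rank-r subspace
    X_hat = svd.inverse_transform(Z)          # map back to feature space
    return np.linalg.norm(X.toarray() - X_hat, axis=1)


def fit_human_only_detector(human_texts, rank=32):
    """Fit a low-rank model using only human-written text (no ChatGPT samples)."""
    vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
    X = vectorizer.fit_transform(human_texts)  # documents x n-gram features
    svd = TruncatedSVD(n_components=rank, random_state=0)  # rank << vocab size
    svd.fit(X)
    # Calibrate the decision threshold from the human-only training errors.
    train_err = _reconstruction_error(X, svd)
    threshold = np.percentile(train_err, 95)   # assumption: 95th percentile cutoff
    return vectorizer, svd, threshold


def flag_machine_generated(texts, vectorizer, svd, threshold):
    """Return True for texts whose reconstruction error looks anomalous."""
    X = vectorizer.transform(texts)
    return _reconstruction_error(X, svd) > threshold
```

On a GRiD-style split, one would fit on the human responses only and compare the resulting flags against the ChatGPT-labelled responses; the actual GpTen tensor decomposition and scoring rule may differ substantially from this placeholder.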
Related papers
- Spotting AI's Touch: Identifying LLM-Paraphrased Spans in Text [61.22649031769564]
We propose a novel framework, paraphrased text span detection (PTD), which aims to identify paraphrased text spans within a text.
We construct a dedicated dataset, PASTED, for paraphrased text span detection.
arXiv Detail & Related papers (2024-05-21T11:22:27Z)
- On the Generalization of Training-based ChatGPT Detection Methods [33.46128880100525]
ChatGPT is one of the most popular language models, achieving impressive performance on a variety of natural language tasks.
There is also an urgent need to distinguish texts generated by ChatGPT from human-written text.
arXiv Detail & Related papers (2023-10-02T16:13:08Z)
- Detecting ChatGPT: A Survey of the State of Detecting ChatGPT-Generated Text [1.9643748953805937]
Generative language models can potentially deceive by generating artificial text that appears to be human-written.
This survey provides an overview of the current approaches employed to differentiate between texts generated by humans and ChatGPT.
arXiv Detail & Related papers (2023-09-14T13:05:20Z)
- Is ChatGPT Involved in Texts? Measure the Polish Ratio to Detect ChatGPT-Generated Text [48.36706154871577]
We introduce a novel dataset termed HPPT (ChatGPT-polished academic abstracts).
It diverges from extant corpora by comprising pairs of human-written and ChatGPT-polished abstracts instead of purely ChatGPT-generated texts.
We also propose the "Polish Ratio" method, an innovative measure of the degree of modification made by ChatGPT compared to the original human-written text.
arXiv Detail & Related papers (2023-07-21T06:38:37Z)
- TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z)
- GPT-Sentinel: Distinguishing Human and ChatGPT Generated Content [27.901155229342375]
We present a novel approach for detecting ChatGPT-generated vs. human-written text using language models.
Our models achieved remarkable results, with an accuracy of over 97% on the test dataset, as evaluated through various metrics.
arXiv Detail & Related papers (2023-05-13T17:12:11Z)
- On the Possibilities of AI-Generated Text Detection [76.55825911221434]
We argue that as machine-generated text approaches human-like quality, the number of samples needed for reliable detection increases.
We test various state-of-the-art text generators, including GPT-2, GPT-3.5-Turbo, Llama, Llama-2-13B-Chat-HF, and Llama-2-70B-Chat-HF, against detectors including RoBERTa-Large/Base-Detector and GPTZero.
arXiv Detail & Related papers (2023-04-10T17:47:39Z)
- To ChatGPT, or not to ChatGPT: That is the question! [78.407861566006]
This study provides a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection.
We have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains.
Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
arXiv Detail & Related papers (2023-04-04T03:04:28Z)
- TextFlint: Unified Multilingual Robustness Evaluation Toolkit for Natural Language Processing [73.16475763422446]
We propose a multilingual robustness evaluation platform for NLP tasks (TextFlint)
It incorporates universal text transformation, task-specific transformation, adversarial attack, subpopulation, and their combinations to provide comprehensive robustness analysis.
TextFlint generates complete analytical reports as well as targeted augmented data to address shortcomings in a model's robustness.
arXiv Detail & Related papers (2021-03-21T17:20:38Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.