DNA-GPT: Divergent N-Gram Analysis for Training-Free Detection of
GPT-Generated Text
- URL: http://arxiv.org/abs/2305.17359v2
- Date: Wed, 4 Oct 2023 16:36:09 GMT
- Title: DNA-GPT: Divergent N-Gram Analysis for Training-Free Detection of
GPT-Generated Text
- Authors: Xianjun Yang, Wei Cheng, Yue Wu, Linda Petzold, William Yang Wang,
Haifeng Chen
- Abstract summary: We propose a novel training-free detection strategy called Divergent N-Gram Analysis (DNA-GPT).
By analyzing the differences between the original and newly regenerated remaining parts through N-gram analysis, we unveil significant discrepancies between the distributions of machine-generated and human-written text.
Results show that our zero-shot approach exhibits state-of-the-art performance in distinguishing between human and GPT-generated text.
- Score: 82.5469544192645
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large language models (LLMs) have notably enhanced the fluency and diversity
of machine-generated text. However, this progress also presents a significant
challenge in detecting the origin of a given text, and current research on
detection methods lags behind the rapid evolution of LLMs. Conventional
training-based methods have limitations in flexibility, particularly when
adapting to new domains, and they often lack explanatory power. To address this
gap, we propose a novel training-free detection strategy called Divergent
N-Gram Analysis (DNA-GPT). Given a text, we first truncate it in the middle and
then use only the preceding portion as input to the LLMs to regenerate the new
remaining parts. By analyzing the differences between the original and new
remaining parts through N-gram analysis in the black-box setting or probability
divergence in the white-box setting, we unveil significant discrepancies
between the distribution of
machine-generated text and the distribution of human-written text. We conducted
extensive experiments on the most advanced LLMs from OpenAI, including
text-davinci-003, GPT-3.5-turbo, and GPT-4, as well as open-source models such
as GPT-NeoX-20B and LLaMa-13B. Results show that our zero-shot approach
exhibits state-of-the-art performance in distinguishing between human and
GPT-generated text on four English datasets and one German dataset,
outperforming OpenAI's own classifier, which is trained on millions of texts.
Additionally,
our methods provide reasonable explanations and evidence to support our claim,
which is a unique feature of explainable detection. Our method is also robust
under the revised text attack and can additionally solve model sourcing. Code
is available at https://github.com/Xianjun-Yang/DNA-GPT.
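
The black-box variant is simple enough to sketch directly. Below is a minimal
Python sketch of the n-gram divergence score under stated assumptions:
sample_continuations is a hypothetical stand-in for K sampling calls to the
suspect LLM, and the truncation ratio, n-gram range, weighting f(n) = n log n,
threshold, and whitespace tokenization are illustrative choices rather than
the authors' exact settings. The white-box variant instead compares the
probabilities the model assigns to the original and regenerated continuations.

import math
from collections import Counter

def ngrams(tokens, n):
    # Multiset of contiguous n-grams in a token list
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bscore(original_tail, regenerated_tails, n_min=4, n_max=25):
    # Weighted n-gram overlap between the original continuation and K
    # regenerated ones; an LLM tends to re-tread its own phrasing, so
    # machine-written originals score higher than human-written ones.
    score = 0.0
    for tail in regenerated_tails:
        for n in range(n_min, n_max + 1):
            ref, hyp = ngrams(original_tail, n), ngrams(tail, n)
            overlap = sum((ref & hyp).values())
            if overlap:
                # f(n) = n * log(n) up-weights longer, rarer matches
                score += n * math.log(n) * overlap / (len(tail) * len(ref))
    return score / max(len(regenerated_tails), 1)

def detect(text, sample_continuations, k=10, gamma=0.5, threshold=1e-4):
    # Truncate at ratio gamma, regenerate the tail k times from the prefix,
    # and flag the text as machine-generated if the divergence is high.
    tokens = text.split()
    cut = int(len(tokens) * gamma)
    prefix, original_tail = tokens[:cut], tokens[cut:]
    tails = [t.split() for t in sample_continuations(" ".join(prefix), k=k)]
    return bscore(original_tail, tails) > threshold

In practice the regeneration calls would use the same decoding settings as the
suspected source model, and the threshold would be tuned on held-out data.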
Related papers
- Detecting Machine-Generated Long-Form Content with Latent-Space Variables [54.07946647012579]
Existing zero-shot detectors primarily focus on token-level distributions, which are vulnerable to real-world domain shifts.
We propose a more robust method that incorporates abstract elements, such as event transitions, as key deciding factors to detect machine versus human texts.
arXiv Detail & Related papers (2024-10-04T18:42:09Z)
- Pretraining Data Detection for Large Language Models: A Divergence-based Calibration Method [108.56493934296687]
We introduce a divergence-based calibration method, inspired by the divergence-from-randomness concept, to calibrate token probabilities for pretraining data detection.
We have developed a Chinese-language benchmark, PatentMIA, to assess the performance of detection approaches for LLMs on Chinese text.
arXiv Detail & Related papers (2024-09-23T07:55:35Z)
- ESPERANTO: Evaluating Synthesized Phrases to Enhance Robustness in AI Detection for Text Origination [1.8418334324753884]
This paper introduces back-translation as a novel technique for evading detection.
We present a model that combines these back-translated texts to produce a manipulated version of the original AI-generated text.
We evaluate this technique on nine AI detectors, including six open-source and three proprietary systems.
arXiv Detail & Related papers (2024-09-22T01:13:22Z)
- DetectGPT-SC: Improving Detection of Text Generated by Large Language Models through Self-Consistency with Masked Predictions [13.077729125193434]
Existing detectors are built on the assumption that there is a distribution gap between human-generated and AI-generated texts.
We find that large language models such as ChatGPT exhibit strong self-consistency in text generation and continuation.
We propose a new method for AI-generated text detection based on self-consistency with masked predictions.
arXiv Detail & Related papers (2023-10-23T01:23:10Z)
- GPT-who: An Information Density-based Machine-Generated Text Detector [6.111161457447324]
We propose GPT-who, the first psycholinguistically-inspired domain-agnostic statistical detector.
This detector employs Uniform Information Density (UID)-based features to model the unique statistical signatures of LLM-generated and human-written texts.
We find that GPT-who can distinguish texts generated by very sophisticated LLMs even when the underlying text is otherwise indiscernible; a toy UID feature computation is sketched after this list.
arXiv Detail & Related papers (2023-10-09T23:06:05Z)
- Multiscale Positive-Unlabeled Detection of AI-Generated Texts [27.956604193427772]
A Multiscale Positive-Unlabeled (MPU) training framework is proposed to address the difficulty of short-text detection.
The MPU method augments detection performance on long AI-generated texts and significantly improves the short-text detection of language-model detectors.
arXiv Detail & Related papers (2023-05-29T15:25:00Z)
- MAGE: Machine-generated Text Detection in the Wild [82.70561073277801]
Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective AI-generated text detection.
We build a comprehensive testbed by gathering texts from diverse human writings and texts generated by different LLMs.
Despite the challenges, the top-performing detector can identify 86.54% of out-of-domain texts generated by a new LLM, indicating its feasibility in real-world application scenarios.
arXiv Detail & Related papers (2023-05-22T17:13:29Z)
- On the Possibilities of AI-Generated Text Detection [76.55825911221434]
We argue that as machine-generated text approaches human-like quality, the number of samples needed for reliable detection increases.
We test various state-of-the-art text generators, including GPT-2, GPT-3.5-Turbo, Llama, Llama-2-13B-Chat-HF, and Llama-2-70B-Chat-HF, against detectors including RoBERTa-Large/Base-Detector and GPTZero.
arXiv Detail & Related papers (2023-04-10T17:47:39Z)
- How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN [63.79300884115027]
Current language models can generate high-quality text.
Are they simply copying text they have seen before, or have they learned generalizable linguistic abstractions?
We introduce RAVEN, a suite of analyses for assessing the novelty of generated text.
arXiv Detail & Related papers (2021-11-18T04:07:09Z)
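
As a small illustration of the Uniform Information Density (UID) idea behind
GPT-who (referenced in its entry above), the sketch below computes three
common UID proxies from GPT-2 token surprisals. The feature definitions are
standard surprisal statistics, not necessarily the paper's exact feature set,
and GPT-2 is used only as a convenient open-source scorer.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def uid_features(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    # Surprisal of each token given its prefix, in nats
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    surprisal = -logp[torch.arange(ids.size(1) - 1), ids[0, 1:]]
    return {
        "mean": surprisal.mean().item(),  # average information density
        "var": surprisal.var().item(),    # global uniformity
        # smoothness of successive-token surprisal
        "local": (surprisal[1:] - surprisal[:-1]).pow(2).mean().item(),
    }

Such features would then feed a lightweight classifier (e.g., logistic
regression) trained to separate human-written from machine-generated text.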