Exploring the Limitations of Detecting Machine-Generated Text
- URL: http://arxiv.org/abs/2406.11073v1
- Date: Sun, 16 Jun 2024 21:02:02 GMT
- Title: Exploring the Limitations of Detecting Machine-Generated Text
- Authors: Jad Doughman, Osama Mohammed Afzal, Hawau Olamide Toyin, Shady Shehata, Preslav Nakov, Zeerak Talat
- Abstract summary: We critically examine the classification performance for detecting machine-generated text by evaluating on texts with varying writing styles.
We find that classifiers are highly sensitive to stylistic changes and differences in text complexity.
We further find that detection systems are particularly prone to misclassifying easy-to-read texts, while performing well on complex texts.
- Score: 29.06307663406079
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent improvements in the quality of the generations by large language models have spurred research into identifying machine-generated text. Systems proposed for the task often achieve high performance. However, humans and machines can produce text in different styles and in different domains, and it remains unclear whether machine-generated text detection models favour particular styles or domains. In this paper, we critically examine the classification performance for detecting machine-generated text by evaluating on texts with varying writing styles. We find that classifiers are highly sensitive to stylistic changes and differences in text complexity, and in some cases degrade entirely to random classifiers. We further find that detection systems are particularly prone to misclassifying easy-to-read texts, while performing well on complex texts.
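The abstract's central finding hinges on text complexity, which is typically quantified with a readability metric. The paper does not state which metric it uses, so as an illustrative sketch, here is the standard Flesch Reading Ease score with a crude syllable-counting heuristic (higher scores mean easier-to-read text):

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of consecutive vowels as syllables.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    # Flesch Reading Ease = 206.835 - 1.015*(words/sentences)
    #                     - 84.6*(syllables/words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)

simple = "The cat sat on the mat. It was warm."
dense = ("Notwithstanding considerable methodological heterogeneity, "
         "contemporary detection architectures exhibit pronounced "
         "sensitivity to stylistic perturbation.")
assert flesch_reading_ease(simple) > flesch_reading_ease(dense)
```

Under the paper's finding, texts scoring high on such a metric (easy to read) are the ones detectors tend to misclassify.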
Related papers
- Towards Unified Multi-granularity Text Detection with Interactive Attention [56.79437272168507]
"Detect Any Text" is an advanced paradigm that unifies scene text detection, layout analysis, and document page detection into a cohesive, end-to-end model.
A pivotal innovation in DAT is the across-granularity interactive attention module, which significantly enhances the representation learning of text instances.
Tests demonstrate that DAT achieves state-of-the-art performances across a variety of text-related benchmarks.
arXiv Detail & Related papers (2024-05-30T07:25:23Z) - Deciphering Textual Authenticity: A Generalized Strategy through the Lens of Large Language Semantics for Detecting Human vs. Machine-Generated Text [8.290557547578146]
We introduce a novel system, T5LLMCipher, for detecting machine-generated text using a pretrained T5 encoder combined with LLM embedding sub-clustering.
We find that our approach provides state-of-the-art generalization ability, with an average increase in F1 score on machine-generated text of 19.6% on unseen generators and domains.
arXiv Detail & Related papers (2024-01-17T18:45:13Z) - Assaying on the Robustness of Zero-Shot Machine-Generated Text Detectors [57.7003399760813]
We explore advanced Large Language Models (LLMs) and their specialized variants, contributing to this field in several ways.
We uncover a significant correlation between topics and detection performance.
These investigations shed light on the adaptability and robustness of these detection methods across diverse topics.
arXiv Detail & Related papers (2023-12-20T10:53:53Z) - Deep dive into language traits of AI-generated Abstracts [5.209583971923267]
Generative language models, such as ChatGPT, have garnered attention for their ability to generate human-like writing.
In this work, we attempt to detect the Abstracts generated by ChatGPT, which are much shorter and bounded in length.
We extract the texts' semantic and lexical properties and observe that traditional machine learning models can confidently detect these Abstracts.
arXiv Detail & Related papers (2023-12-17T06:03:33Z) - IMGTB: A Framework for Machine-Generated Text Detection Benchmarking [0.0]
We present the IMGTB framework, which simplifies the benchmarking of machine-generated text detection methods.
The default set of analyses, metrics and visualizations offered by the tool follows the established practices of machine-generated text detection benchmarking.
arXiv Detail & Related papers (2023-11-21T12:40:01Z) - AI-generated text boundary detection with RoFT [7.2286849324485445]
We study how to detect the boundary between human-written and machine-generated parts of texts.
In particular, we find that perplexity-based approaches to boundary detection tend to be more robust to peculiarities of domain-specific data than supervised fine-tuning of the RoBERTa model.
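A perplexity-based boundary detector of this kind scores each token by how surprising it is to a language model, then looks for the point where that signal shifts. The snippet below is a toy sketch with simulated per-token surprisals (a real system would obtain them from an actual LM); it picks the split index that maximizes the gap between prefix and suffix mean surprisal:

```python
def detect_boundary(surprisals: list[float]) -> int:
    # Choose the split index where the gap between the mean surprisal
    # of the prefix and of the suffix is largest.
    best_idx, best_gap = 1, 0.0
    for i in range(1, len(surprisals)):
        left = sum(surprisals[:i]) / i
        right = sum(surprisals[i:]) / (len(surprisals) - i)
        gap = abs(left - right)
        if gap > best_gap:
            best_idx, best_gap = i, gap
    return best_idx

# Simulated surprisals: a human-written prefix (high, noisy) followed
# by a machine-generated continuation (low, smooth) at index 5.
scores = [5.1, 4.8, 5.5, 6.0, 4.9, 1.2, 1.0, 1.1, 0.9, 1.3]
assert detect_boundary(scores) == 5
```

Because this relies only on relative surprisal levels rather than learned surface features, it is plausible that such approaches generalize better across domains, consistent with the paper's finding.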
arXiv Detail & Related papers (2023-11-14T17:48:19Z) - MAGE: Machine-generated Text Detection in the Wild [82.70561073277801]
Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective AI-generated text detection.
We build a comprehensive testbed by gathering texts from diverse human writings and texts generated by different LLMs.
Despite challenges, the top-performing detector can identify 86.54% of out-of-domain texts generated by a new LLM, indicating feasibility in real-world application scenarios.
arXiv Detail & Related papers (2023-05-22T17:13:29Z) - DPIC: Decoupling Prompt and Intrinsic Characteristics for LLM Generated Text Detection [56.513637720967566]
Large language models (LLMs) can generate texts that pose risks of misuse, such as plagiarism, planting fake reviews on e-commerce platforms, or creating inflammatory false tweets.
Existing high-quality detection methods usually require access to the interior of the model to extract the intrinsic characteristics.
We propose to extract deep intrinsic characteristics of texts generated by black-box models.
arXiv Detail & Related papers (2023-05-21T17:26:16Z) - On the Possibilities of AI-Generated Text Detection [76.55825911221434]
We argue that as machine-generated text approaches human-like quality, the sample size needed for reliable detection increases.
We test various state-of-the-art text generators, including GPT-2, GPT-3.5-Turbo, Llama, Llama-2-13B-Chat-HF, and Llama-2-70B-Chat-HF, against detectors, including RoBERTa-Large/Base-Detector and GPTZero.
arXiv Detail & Related papers (2023-04-10T17:47:39Z) - MOST: A Multi-Oriented Scene Text Detector with Localization Refinement [67.35280008722255]
We propose a new algorithm for scene text detection, which puts forward a set of strategies to significantly improve the quality of text localization.
Specifically, a Text Feature Alignment Module (TFAM) is proposed to dynamically adjust the receptive fields of features.
A Position-Aware Non-Maximum Suppression (PA-NMS) module is devised to exclude unreliable detection results.
arXiv Detail & Related papers (2021-04-02T14:34:41Z) - RoFT: A Tool for Evaluating Human Detection of Machine-Generated Text [25.80571756447762]
We present Real or Fake Text (RoFT), a website that invites users to try their hand at detecting machine-generated text.
We show preliminary results of using RoFT to evaluate detection of machine-generated news articles.
arXiv Detail & Related papers (2020-10-06T22:47:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.