Does Human Collaboration Enhance the Accuracy of Identifying
LLM-Generated Deepfake Texts?
- URL: http://arxiv.org/abs/2304.01002v3
- Date: Mon, 9 Oct 2023 21:57:47 GMT
- Title: Does Human Collaboration Enhance the Accuracy of Identifying
LLM-Generated Deepfake Texts?
- Authors: Adaku Uchendu, Jooyoung Lee, Hua Shen, Thai Le, Ting-Hao 'Kenneth'
Huang, Dongwon Lee
- Abstract summary: Collaboration among humans can potentially improve the detection of deepfake texts.
The strongest indicator of deepfake texts is their lack of coherence and consistency.
- Score: 27.700129124128747
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Advances in Large Language Models (e.g., GPT-4, LLaMA) have improved the
generation of coherent sentences resembling human writing on a large scale,
resulting in the creation of so-called deepfake texts. However, this progress
poses security and privacy concerns, necessitating effective solutions for
distinguishing deepfake texts from human-written ones. Although prior works
studied humans' ability to detect deepfake texts, none has examined whether
"collaboration" among humans improves the detection of deepfake texts. In this
study, to address this gap in understanding of deepfake texts, we conducted
experiments with two groups: (1) non-expert individuals from the AMT platform
and (2) writing experts from the Upwork platform. The results demonstrate that
collaboration among humans can potentially improve the detection of deepfake
texts for both groups, increasing detection accuracies by 6.36% for non-experts
and 12.76% for experts, respectively, compared to individuals' detection
accuracies. We further analyze the explanations that humans used for detecting
a piece of text as deepfake text, and find that the strongest indicator of
deepfake texts is their lack of coherence and consistency. Our study provides
useful insights for future tools and framework designs to facilitate the
collaborative human detection of deepfake texts. The experiment datasets and
AMT implementations are available at:
https://github.com/huashen218/llm-deepfake-human-study.git
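
Since the headline result is that group decisions beat individual ones, the simplest aggregation mechanism that could produce such a gain is worth seeing concretely. The sketch below is a hypothetical majority-vote illustration, not the paper's actual protocol (the study had annotators discuss and decide together); all labels and votes are made up.

```python
from collections import Counter

# Hypothetical illustration of why aggregating judgments can beat
# individuals: majority vote over independent, better-than-chance voters.
# The votes below are made up; the paper's groups deliberated rather than
# voting mechanically.
def majority_vote(votes) -> str:
    return Counter(votes).most_common(1)[0][0]

# Three annotators judge five texts ("ai" vs "human"); truth for comparison.
truth = ["ai", "human", "ai", "ai", "human"]
annotators = [
    ["ai", "human", "human", "ai", "human"],   # 4/5 correct
    ["ai", "ai", "ai", "ai", "human"],         # 4/5 correct
    ["human", "human", "ai", "ai", "human"],   # 4/5 correct
]

group = [majority_vote(v) for v in zip(*annotators)]
acc = lambda pred: sum(p == t for p, t in zip(pred, truth)) / len(truth)

for i, a in enumerate(annotators):
    print(f"annotator {i}: {acc(a):.0%}")   # each individual: 80%
print(f"majority vote: {acc(group):.0%}")   # group: 100%
```

In this toy setup each annotator errs on a different text, so the vote cancels individual mistakes; that is one plausible mechanism behind the 6.36% and 12.76% gains the study reports.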
Related papers
- Human Texts Are Outliers: Detecting LLM-generated Texts via Out-of-distribution Detection [71.59834293521074]
We develop a framework to distinguish between human-authored and machine-generated text. Our method achieves 98.3% AUROC and AUPR with only 8.9% FPR95 on the DeepFake dataset. Code, pretrained weights, and a demo will be released.
arXiv Detail & Related papers (2025-10-07T08:14:45Z)
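
The entry above reports threshold-free detection metrics (AUROC and FPR95). As a reference point, here is a minimal sketch of how such metrics are typically computed from detector scores; the score arrays and the 95%-TPR convention are illustrative assumptions, not data or code from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative detector scores: higher = "more machine-like".
# These arrays are made-up stand-ins, not results from the paper.
rng = np.random.default_rng(0)
machine_scores = rng.normal(2.0, 1.0, 1000)  # positives
human_scores = rng.normal(0.0, 1.0, 1000)    # negatives

y_true = np.concatenate([np.ones(1000), np.zeros(1000)])
y_score = np.concatenate([machine_scores, human_scores])

auroc = roc_auc_score(y_true, y_score)

# FPR95: false-positive rate at the threshold where TPR first reaches 95%.
fpr, tpr, _ = roc_curve(y_true, y_score)
fpr95 = fpr[np.searchsorted(tpr, 0.95)]

print(f"AUROC = {auroc:.3f}, FPR95 = {fpr95:.3f}")
```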
- ExaGPT: Example-Based Machine-Generated Text Detection for Human Interpretability [62.285407189502216]
Incorrect decisions when detecting texts generated by Large Language Models (LLMs) can cause grave mistakes.
We introduce ExaGPT, an interpretable detection approach grounded in the human decision-making process.
We show that ExaGPT massively outperforms prior powerful detectors by up to +40.9 points of accuracy at a false positive rate of 1%.
arXiv Detail & Related papers (2025-02-17T01:15:07Z)
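
ExaGPT is described as grounding its decisions in examples, in the spirit of human decision-making. The abstract does not spell out the pipeline, so the following is only a loose sketch of the general example-based idea: label a span by whether its nearest neighbors come from a human or a machine reference corpus. The corpus contents, the TF-IDF representation, and the neighbor count are all illustrative assumptions, not the paper's method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Tiny illustrative corpora; a real system would use large reference sets.
human_spans = ["the committee met on tuesday to review the budget",
               "she paused, unsure whether the joke had landed"]
machine_spans = ["in conclusion, it is important to note that",
                 "this essay will explore the various aspects of"]

corpus = human_spans + machine_spans
labels = ["human"] * len(human_spans) + ["machine"] * len(machine_spans)

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = vec.fit_transform(corpus)
nn = NearestNeighbors(n_neighbors=3).fit(X)

def classify(span: str) -> str:
    """Vote among the span's nearest reference examples."""
    _, idx = nn.kneighbors(vec.transform([span]))
    votes = [labels[i] for i in idx[0]]
    return max(set(votes), key=votes.count)

print(classify("it is important to note that this essay will explore"))
```

The interpretability payoff of such a design is that the retrieved neighbors themselves serve as the explanation shown to a human reviewer.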
- Beyond checkmate: exploring the creative chokepoints in AI text [5.427864472511595]
Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) and Artificial Intelligence (AI).
Our study investigates the nuanced distinctions between human and AI texts across text segments.
Our research can shed light on the intricacies of human-AI text distinctions, offering novel insights for text detection and understanding.
arXiv Detail & Related papers (2025-01-31T16:57:01Z)
- Detecting Machine-Generated Long-Form Content with Latent-Space Variables [54.07946647012579]
Existing zero-shot detectors primarily focus on token-level distributions, which are vulnerable to real-world domain shifts.
We propose a more robust method that incorporates abstract elements, such as event transitions, as key deciding factors to distinguish machine-generated from human texts.
arXiv Detail & Related papers (2024-10-04T18:42:09Z)
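
For context on the token-level baseline this entry critiques: likelihood-based zero-shot detectors score a passage by how probable a reference LM finds its tokens. Below is a minimal sketch of that baseline using GPT-2 via Hugging Face transformers; the model choice and the decision rule are illustrative assumptions, not taken from the paper.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Token-level zero-shot scoring: mean log-likelihood under a reference LM.
# Model choice (GPT-2) and any threshold are illustrative only.
tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def mean_log_likelihood(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    out = model(ids, labels=ids)  # HF averages per-token NLL internally
    return -out.loss.item()       # higher = more LM-typical

text = "The quick brown fox jumps over the lazy dog."
score = mean_log_likelihood(text)
# An unusually high mean log-likelihood suggests machine generation under
# this heuristic; domain shift easily breaks such token-level signals,
# which motivates the latent-space features proposed above.
print(f"mean log-likelihood: {score:.3f}")
```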
- Spotting AI's Touch: Identifying LLM-Paraphrased Spans in Text [61.22649031769564]
We propose a novel framework, paraphrased text span detection (PTD), which aims to identify paraphrased text spans within a text.
We construct a dedicated dataset, PASTED, for paraphrased text span detection.
arXiv Detail & Related papers (2024-05-21T11:22:27Z)
- Text Grouping Adapter: Adapting Pre-trained Text Detector for Layout Analysis [52.34110239735265]
We present Text Grouping Adapter (TGA), a module that enables various pre-trained text detectors to learn layout analysis.
Our comprehensive experiments demonstrate that, even with frozen pre-trained models, incorporating TGA into various pre-trained text detectors and text spotters achieves superior layout analysis performance.
arXiv Detail & Related papers (2024-05-13T05:48:35Z)
- Enhancing Scene Text Detectors with Realistic Text Image Synthesis Using Diffusion Models [63.99110667987318]
We present DiffText, a pipeline that seamlessly blends foreground text with the background's intrinsic features.
With fewer text instances, our produced text images consistently surpass other synthetic data in aiding text detectors.
arXiv Detail & Related papers (2023-11-28T06:51:28Z)
- DetectGPT-SC: Improving Detection of Text Generated by Large Language Models through Self-Consistency with Masked Predictions [13.077729125193434]
Existing detectors are built on the assumption that there is a distribution gap between human-generated and AI-generated texts.
We find that large language models such as ChatGPT exhibit strong self-consistency in text generation and continuation.
We propose a new method for detecting AI-generated texts based on self-consistency with masked predictions.
arXiv Detail & Related papers (2023-10-23T01:23:10Z)
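
The self-consistency idea lends itself to a compact sketch: mask some tokens, let a model fill them back in, and measure how often the predictions agree with the original text, on the hypothesis that LLM-generated text is reconstructed more faithfully. The masked model (BERT as a stand-in), mask rate, and decision rule below are illustrative assumptions; the paper's actual procedure may differ.

```python
import random
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

# Stand-in masked LM; the paper's own setup may use a different model.
tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def self_consistency(text: str, mask_rate: float = 0.15, seed: int = 0) -> float:
    """Fraction of masked tokens the model restores to the original."""
    random.seed(seed)
    ids = tok(text, return_tensors="pt").input_ids[0]
    # Candidate positions: everything except [CLS]/[SEP].
    positions = [i for i in range(1, len(ids) - 1) if random.random() < mask_rate]
    if not positions:
        return 1.0
    masked = ids.clone()
    masked[positions] = tok.mask_token_id
    preds = mlm(masked.unsqueeze(0)).logits[0].argmax(-1)
    hits = sum(int(preds[i] == ids[i]) for i in positions)
    return hits / len(positions)

score = self_consistency("The quick brown fox jumps over the lazy dog.")
# Higher agreement = more "self-consistent"; under this heuristic, such
# text is more likely machine-generated.
print(f"self-consistency: {score:.2f}")
```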
- MAGE: Machine-generated Text Detection in the Wild [82.70561073277801]
Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective detection of AI-generated text.
We build a comprehensive testbed by gathering texts from diverse human writings and texts generated by different LLMs.
Despite the challenges, the top-performing detector can identify 86.54% of out-of-domain texts generated by a new LLM, indicating feasibility in real application scenarios.
arXiv Detail & Related papers (2023-05-22T17:13:29Z)
- On the Possibilities of AI-Generated Text Detection [76.55825911221434]
We argue that as machine-generated text approaches human-like quality, the number of samples needed for reliable detection increases.
We test various state-of-the-art text generators, including GPT-2, GPT-3.5-Turbo, Llama, Llama-2-13B-Chat-HF, and Llama-2-70B-Chat-HF, against detectors including RoBERTa-Large/Base-Detector and GPTZero.
arXiv Detail & Related papers (2023-04-10T17:47:39Z)
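
The sample-size argument can be made concrete with the textbook two-sample testing bound. The LaTeX sketch below states only that standard relationship, as an assumption about the flavor of the paper's result; the paper's own bounds may differ in form and constants.

```latex
% Standard two-sample testing heuristic (illustrative; the paper's own
% bounds differ in detail). Let m and h be the machine and human text
% distributions with total variation distance \delta = \mathrm{TV}(m, h).
% Distinguishing m from h with constant error probability is possible
% with O(1/\delta^2) i.i.d. samples and impossible with o(1/\delta):
\[
  \Omega\!\left(\tfrac{1}{\delta}\right)
  \;\le\; n^{*} \;\le\;
  O\!\left(\tfrac{1}{\delta^{2}}\right),
  \qquad \delta = \mathrm{TV}(m, h),
\]
% so as generated text improves, \delta \to 0 and the required number of
% observed texts n^{*} grows without bound.
```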
- Real or Fake Text?: Investigating Human Ability to Detect Boundaries Between Human-Written and Machine-Generated Text [23.622347443796183]
We study a more realistic setting where text begins as human-written and transitions to being generated by state-of-the-art neural language models.
We show that, while annotators often struggle at this task, there is substantial variance in annotator skill, and that given proper incentives, annotators can improve over time.
arXiv Detail & Related papers (2022-12-24T06:40:25Z)
- Deepfake Text Detection: Limitations and Opportunities [4.283184763765838]
We collect deepfake text from four online services powered by Transformer-based tools to evaluate the generalization of existing defenses to content in the wild.
We develop several low-cost adversarial attacks and investigate the robustness of existing defenses against an adaptive attacker.
Our evaluation shows that tapping into the semantic information in the text content is a promising approach for improving the robustness and generalization performance of deepfake text detection schemes.
arXiv Detail & Related papers (2022-10-17T20:40:14Z)
This list is automatically generated from the titles and abstracts of the papers indexed on this site.