Deepfake Text Detection: Limitations and Opportunities
- URL: http://arxiv.org/abs/2210.09421v1
- Date: Mon, 17 Oct 2022 20:40:14 GMT
- Title: Deepfake Text Detection: Limitations and Opportunities
- Authors: Jiameng Pu, Zain Sarwar, Sifat Muhammad Abdullah, Abdullah Rehman,
Yoonjin Kim, Parantapa Bhattacharya, Mobin Javed, Bimal Viswanath
- Abstract summary: We collect deepfake text from 4 online services powered by Transformer-based tools to evaluate the generalization ability of the defenses on content in the wild.
We develop several low-cost adversarial attacks, and investigate the robustness of existing defenses against an adaptive attacker.
Our evaluation shows that tapping into the semantic information in the text content is a promising approach for improving the robustness and generalization performance of deepfake text detection schemes.
- Score: 4.283184763765838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in generative models for language have enabled the creation
of convincing synthetic text or deepfake text. Prior work has demonstrated the
potential for misuse of deepfake text to mislead content consumers. Therefore,
deepfake text detection, the task of discriminating between human and
machine-generated text, is becoming increasingly critical. Several defenses
have been proposed for deepfake text detection. However, we lack a thorough
understanding of their real-world applicability. In this paper, we collect
deepfake text from 4 online services powered by Transformer-based tools to
evaluate the generalization ability of the defenses on content in the wild. We
develop several low-cost adversarial attacks, and investigate the robustness of
existing defenses against an adaptive attacker. We find that many defenses show
significant degradation in performance under our evaluation scenarios compared
to their original claimed performance. Our evaluation shows that tapping into
the semantic information in the text content is a promising approach for
improving the robustness and generalization performance of deepfake text
detection schemes.
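The "low-cost adversarial attacks" mentioned in the abstract can be illustrated with a minimal sketch. Homoglyph substitution is one widely known cheap perturbation of this kind: visually similar Unicode characters replace ASCII letters, shifting the token stream a detector sees while the text still looks unchanged to a human reader. The mapping and function below are illustrative, not the paper's actual implementation:

```python
# Illustrative homoglyph-substitution attack: map selected ASCII
# letters to visually similar Cyrillic characters.
HOMOGLYPHS = {
    "a": "\u0430",  # Cyrillic small a
    "e": "\u0435",  # Cyrillic small ie
    "o": "\u043e",  # Cyrillic small o
    "c": "\u0441",  # Cyrillic small es
}

def homoglyph_perturb(text: str) -> str:
    """Swap every mappable character for its homoglyph counterpart."""
    return "".join(HOMOGLYPHS.get(ch, ch) for ch in text)

perturbed = homoglyph_perturb("machine generated content")
print(perturbed)  # renders indistinguishably from the original...
print(perturbed == "machine generated content")  # ...but compares unequal: False
```

Attacks in this family are "low-cost" in the sense that they need no access to the detector or the generator, yet they degrade detectors that rely on surface token statistics, which is one reason the abstract argues for tapping into semantic information instead.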
Related papers
- Diversity Boosts AI-Generated Text Detection [51.56484100374058]
DivEye is a novel framework that captures how unpredictability fluctuates across a text using surprisal-based features.
Our method outperforms existing zero-shot detectors by up to 33.2% and achieves competitive performance with fine-tuned baselines.
arXiv Detail & Related papers (2025-09-23T10:21:22Z)
- Empowering Morphing Attack Detection using Interpretable Image-Text Foundation Model [3.013675405024281]
We present a multimodal learning approach that can provide a textual description of morphing attack detection.
We first show that zero-shot evaluation of the proposed framework can yield not only generalizable morphing attack detection, but also predict the most relevant text snippet.
arXiv Detail & Related papers (2025-08-13T18:06:29Z)
- Detecting Machine-Generated Long-Form Content with Latent-Space Variables [54.07946647012579]
Existing zero-shot detectors primarily focus on token-level distributions, which are vulnerable to real-world domain shifts.
We propose a more robust method that incorporates abstract elements, such as event transitions, as key deciding factors to detect machine versus human texts.
arXiv Detail & Related papers (2024-10-04T18:42:09Z)
- ESPERANTO: Evaluating Synthesized Phrases to Enhance Robustness in AI Detection for Text Origination [1.8418334324753884]
This paper introduces back-translation as a novel technique for evading detection.
We present a model that combines these back-translated texts to produce a manipulated version of the original AI-generated text.
We evaluate this technique on nine AI detectors, including six open-source and three proprietary systems.
arXiv Detail & Related papers (2024-09-22T01:13:22Z)
- Spotting AI's Touch: Identifying LLM-Paraphrased Spans in Text [61.22649031769564]
We propose a novel framework, paraphrased text span detection (PTD).
PTD aims to identify paraphrased text spans within a text.
We construct a dedicated dataset, PASTED, for paraphrased text span detection.
arXiv Detail & Related papers (2024-05-21T11:22:27Z)
- Humanizing Machine-Generated Content: Evading AI-Text Detection through Adversarial Attack [24.954755569786396]
We propose a framework for a broader class of adversarial attacks, designed to perform minor perturbations in machine-generated content to evade detection.
We consider two attack settings: white-box and black-box, and employ adversarial learning in dynamic scenarios to assess the potential enhancement of the current detection model's robustness.
The empirical results reveal that the current detection models can be compromised in as little as 10 seconds, leading to the misclassification of machine-generated text as human-written content.
arXiv Detail & Related papers (2024-04-02T12:49:22Z)
- Enhancing Scene Text Detectors with Realistic Text Image Synthesis Using Diffusion Models [63.99110667987318]
We present DiffText, a pipeline that seamlessly blends foreground text with the background's intrinsic features.
With fewer text instances, our produced text images consistently surpass other synthetic data in aiding text detectors.
arXiv Detail & Related papers (2023-11-28T06:51:28Z)
- MAGE: Machine-generated Text Detection in the Wild [82.70561073277801]
Large language models (LLMs) have achieved human-level text generation, emphasizing the need for effective AI-generated text detection.
We build a comprehensive testbed by gathering texts from diverse human writings and texts generated by different LLMs.
Despite challenges, the top-performing detector can identify 86.54% out-of-domain texts generated by a new LLM, indicating the feasibility for application scenarios.
arXiv Detail & Related papers (2023-05-22T17:13:29Z)
- DeepTextMark: A Deep Learning-Driven Text Watermarking Approach for Identifying Large Language Model Generated Text [1.249418440326334]
The importance of discerning whether texts are human-authored or generated by Large Language Models has become paramount.
DeepTextMark offers a viable "add-on" solution to prevailing text generation frameworks, requiring no direct access or alterations to the underlying text generation mechanism.
Experimental evaluations underscore the high imperceptibility, elevated detection accuracy, augmented robustness, reliability, and swift execution of DeepTextMark.
arXiv Detail & Related papers (2023-05-09T21:31:07Z)
- Does Human Collaboration Enhance the Accuracy of Identifying LLM-Generated Deepfake Texts? [27.700129124128747]
Collaboration among humans can potentially improve the detection of deepfake texts.
The strongest indicator of deepfake texts is their lack of coherence and consistency.
arXiv Detail & Related papers (2023-04-03T14:06:47Z)
- Can AI-Generated Text be Reliably Detected? [50.95804851595018]
Large Language Models (LLMs) perform impressively well in various applications.
The potential for misuse of these models in activities such as plagiarism, generating fake news, and spamming has raised concern about their responsible use.
We stress-test the robustness of these AI text detectors in the presence of an attacker.
arXiv Detail & Related papers (2023-03-17T17:53:19Z)
- Verifying the Robustness of Automatic Credibility Assessment [50.55687778699995]
We show that meaning-preserving changes in input text can mislead the models.
We also introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
Our experimental results show that modern large language models are often more vulnerable to attacks than previous, smaller solutions.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- MOST: A Multi-Oriented Scene Text Detector with Localization Refinement [67.35280008722255]
We propose a new algorithm for scene text detection, which puts forward a set of strategies to significantly improve the quality of text localization.
Specifically, a Text Feature Alignment Module (TFAM) is proposed to dynamically adjust the receptive fields of features.
A Position-Aware Non-Maximum Suppression (PA-NMS) module is devised to exclude unreliable detections.
arXiv Detail & Related papers (2021-04-02T14:34:41Z)
- RoFT: A Tool for Evaluating Human Detection of Machine-Generated Text [25.80571756447762]
We present Real or Fake Text (RoFT), a website that invites users to try their hand at detecting machine-generated text.
We show preliminary results of using RoFT to evaluate detection of machine-generated news articles.
arXiv Detail & Related papers (2020-10-06T22:47:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.