Counter Turing Test CT^2: AI-Generated Text Detection is Not as Easy as
You May Think -- Introducing AI Detectability Index
- URL: http://arxiv.org/abs/2310.05030v2
- Date: Tue, 24 Oct 2023 00:36:29 GMT
- Authors: Megha Chakraborty, S.M Towhidul Islam Tonmoy, S M Mehedi Zaman, Krish
Sharma, Niyar R Barman, Chandan Gupta, Shreya Gautam, Tanay Kumar, Vinija
Jain, Aman Chadha, Amit P. Sheth, Amitava Das
- Abstract summary: AI-generated text detection (AGTD) has emerged as a topic of immediate research attention.
This paper introduces the Counter Turing Test (CT^2), a benchmark consisting of techniques that aim to offer a comprehensive evaluation of the fragility of existing AGTD techniques.
- Score: 9.348082057533325
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: With the prolific rise of ChatGPT, the risks and consequences of AI-generated
text have increased alarmingly. To address the inevitable question of ownership
attribution for AI-generated artifacts, the US Copyright Office released a
statement stating that 'If a work's traditional elements of authorship were
produced by a machine, the work lacks human authorship and the Office will not
register it'. Furthermore, both the US and the EU governments have recently
drafted their initial proposals regarding the regulatory framework for AI.
Given this cynosural spotlight on generative AI, AI-generated text detection
(AGTD) has emerged as a topic of immediate research attention: initial detection
methods were proposed, soon followed by the emergence of techniques to bypass
detection. This paper introduces the Counter
Turing Test (CT^2), a benchmark consisting of techniques aiming to offer a
comprehensive evaluation of the robustness of existing AGTD techniques. Our
empirical findings unequivocally highlight the fragility of the proposed AGTD
methods under scrutiny. Amidst the extensive deliberations on policy-making for
regulating AI development, it is of utmost importance to assess the
detectability of content generated by LLMs. Thus, to establish a quantifiable
spectrum facilitating the evaluation and ranking of LLMs according to their
detectability levels, we propose the AI Detectability Index (ADI). We conduct a
thorough examination of 15 contemporary LLMs, empirically demonstrating that
larger LLMs tend to have a higher ADI, indicating they are less detectable
compared to smaller LLMs. We firmly believe that ADI holds significant value as
a tool for the wider NLP community, with the potential to serve as a rubric in
AI-related policy-making.
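The abstract's ranking claim (larger LLMs tend to have a higher ADI, i.e. are less detectable) can be illustrated with a minimal sketch. The actual ADI formula is defined in the paper; this sketch assumes only that each model receives a scalar score where a higher value means the model's output is harder to detect, and the model names and scores below are made up for illustration.

```python
# Hypothetical sketch of ranking LLMs on a detectability scale, in the
# spirit of the AI Detectability Index (ADI). Assumption: each model gets
# a scalar score where HIGHER means LESS detectable output.

def rank_by_detectability(scores):
    """Return model names ordered from least to most detectable.

    `scores` maps model name -> ADI-like scalar (higher = harder to
    detect). Only the ordering is used here.
    """
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative (made-up) scores consistent with the paper's finding that
# larger models tend to score higher, i.e. are less detectable.
adi_like = {"small-llm": 0.31, "medium-llm": 0.52, "large-llm": 0.78}
print(rank_by_detectability(adi_like))  # ['large-llm', 'medium-llm', 'small-llm']
```

Such an ordering is what would let ADI serve as the "quantifiable spectrum" the abstract describes, independent of how the underlying score is computed.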
Related papers
- A Survey of AI-generated Text Forensic Systems: Detection, Attribution,
and Characterization [13.44566185792894]
AI-generated text forensics is an emerging field addressing the challenges of LLM misuse.
We introduce a detailed taxonomy, focusing on three primary pillars: detection, attribution, and characterization.
We explore available resources for AI-generated text forensics research and discuss the evolving challenges and future directions of forensic systems in an AI era.
arXiv Detail & Related papers (2024-03-02T09:39:13Z)
- Towards Possibilities & Impossibilities of AI-generated Text Detection: A Survey [97.33926242130732]
Large Language Models (LLMs) have revolutionized the domain of natural language processing (NLP) with remarkable capabilities of generating human-like text responses.
Despite these advancements, several works in the existing literature have raised serious concerns about the potential misuse of LLMs.
To address these concerns, a consensus among the research community is to develop algorithmic solutions to detect AI-generated text.
arXiv Detail & Related papers (2023-10-23T18:11:32Z)
- SeqXGPT: Sentence-Level AI-Generated Text Detection [62.3792779440284]
We introduce a sentence-level detection challenge by synthesizing documents polished with large language models (LLMs).
We then propose Sequence X (Check) GPT (SeqXGPT), a novel method that utilizes log probability lists from white-box LLMs as features for sentence-level AIGT detection.
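The core idea of using per-token log probabilities from a white-box LLM as detection features can be sketched briefly. The real SeqXGPT feeds full log-probability lists from several models into a learned classifier; the summary statistics and threshold below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of log-probability features for sentence-level detection.
# Assumption: we already have the per-token log probabilities a white-box
# LLM assigned to a sentence; the threshold value is made up.
import statistics

def logprob_features(token_logprobs):
    """Summarize a sentence's token log-probabilities into a feature vector."""
    return {
        "mean": statistics.fmean(token_logprobs),
        "stdev": statistics.pstdev(token_logprobs),
        "min": min(token_logprobs),
    }

def looks_ai_generated(token_logprobs, mean_threshold=-3.0):
    # Heuristic: AI-generated text tends to receive higher (less negative)
    # average log probability from the scoring model than human text.
    return logprob_features(token_logprobs)["mean"] > mean_threshold

# Toy example: a smooth, high-probability sentence vs. a spikier one.
print(looks_ai_generated([-1.2, -0.8, -1.5, -0.9]))  # True
print(looks_ai_generated([-6.1, -2.4, -7.3, -4.0]))  # False
```

In the actual method, the raw log-probability lists (not just summary statistics) serve as the classifier's input features.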
arXiv Detail & Related papers (2023-10-13T07:18:53Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- Large Language Models can be Guided to Evade AI-Generated Text Detection [40.7707919628752]
Large language models (LLMs) have shown remarkable performance in various tasks and have been extensively utilized by the public.
We equip LLMs with prompts, rather than relying on an external paraphraser, to evaluate the vulnerability of these detectors.
We propose a novel Substitution-based In-Context example optimization method (SICO) to automatically construct prompts for evading the detectors.
arXiv Detail & Related papers (2023-05-18T10:03:25Z)
- Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision [84.31474052176343]
Recent AI-assistant agents, such as ChatGPT, rely on supervised fine-tuning (SFT) with human annotations and reinforcement learning from human feedback to align the output with human intentions.
This dependence can significantly constrain the true potential of AI-assistant agents due to the high cost of obtaining human supervision.
We propose a novel approach called SELF-ALIGN, which combines principle-driven reasoning and the generative power of LLMs for the self-alignment of AI agents with minimal human supervision.
arXiv Detail & Related papers (2023-05-04T17:59:28Z)
- Can AI-Generated Text be Reliably Detected? [54.670136179857344]
Unregulated use of LLMs can potentially lead to malicious consequences such as plagiarism, generating fake news, spamming, etc.
Recent works attempt to tackle this problem either using certain model signatures present in the generated text outputs or by applying watermarking techniques.
In this paper, we show that these detectors are not reliable in practical scenarios.
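The watermarking approach these reliability studies stress-test can be illustrated with a toy detector. This "green list" scheme follows the general style of published watermarking work, not this paper specifically; the key string and list fraction are arbitrary placeholders.

```python
# Toy illustration of green-list watermark detection. Assumption: at
# generation time, a keyed hash pseudorandomly marks ~half the vocabulary
# as "green" for each context, and the generator prefers green tokens;
# detection then counts how many tokens of a text land in the green list.
import hashlib

def in_green_list(prev_token, token, key="secret", fraction=0.5):
    # Hash (key, previous token, token) so roughly `fraction` of tokens
    # fall in the green list for any given context, deterministically.
    h = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return h[0] < 256 * fraction

def green_fraction(tokens, key="secret"):
    """Fraction of tokens (after the first) that land in the green list."""
    hits = sum(
        in_green_list(prev, tok, key) for prev, tok in zip(tokens, tokens[1:])
    )
    return hits / max(len(tokens) - 1, 1)
```

Unwatermarked text should score near the baseline fraction (here 0.5), while watermarked generations score well above it; the fragility results in this line of work show that paraphrasing can push watermarked text back toward the baseline.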
arXiv Detail & Related papers (2023-03-17T17:53:19Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.