The Turing Deception
- URL: http://arxiv.org/abs/2212.06721v1
- Date: Fri, 9 Dec 2022 16:32:11 GMT
- Title: The Turing Deception
- Authors: David Noever, Matt Ciolino
- Abstract summary: This research revisits the classic Turing test and compares recent large language models such as ChatGPT.
The question of whether an algorithm displays hints of Turing's truly original thoughts remains unanswered and potentially unanswerable for now.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This research revisits the classic Turing test and compares recent large
language models such as ChatGPT for their abilities to reproduce human-level
comprehension and compelling text generation. Two task challenges --
summarization and question answering -- prompt ChatGPT to produce original
content (98-99%) from a single text entry and also sequential questions
originally posed by Turing in 1950. The question of a machine fooling a human
judge recedes in this work relative to the question of "how would one prove
it?" The original contribution of the work presents a metric and simple
grammatical set for understanding the writing mechanics of chatbots in
evaluating their readability and statistical clarity, engagement, delivery, and
overall quality. While Turing's original prose scores at least 14% below the
machine-generated output, the question of whether an algorithm displays hints
of Turing's truly original thoughts (the "Lovelace 2.0" test) remains
unanswered and potentially unanswerable for now.
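The abstract's metric for readability and statistical clarity is not reproduced on this page. As a minimal sketch of the kind of grammatical scoring involved, the snippet below computes the standard Flesch Reading Ease formula (not necessarily the paper's own metric) with a heuristic syllable counter; the helper names and the example sentence are illustrative assumptions.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable estimate: count groups of consecutive vowels."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier text.
    Formula: 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Score a sample sentence (from Turing's 1950 paper)
sample = "I propose to consider the question, can machines think?"
print(round(flesch_reading_ease(sample), 1))
```

Scores like this could be compared between human-written and machine-generated passages, which is one plausible way to operationalize the 14% gap reported in the abstract.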
Related papers
- Self-Directed Turing Test for Large Language Models [56.64615470513102]
The Turing test examines whether AIs can exhibit human-like behaviour in natural language conversations.
Traditional Turing tests adopt a rigid dialogue format where each participant sends only one message each time.
This paper proposes the Self-Directed Turing Test, which extends the original test with a burst dialogue format.
arXiv Detail & Related papers (2024-08-19T09:57:28Z)
- Deep Learning Based Amharic Chatbot for FAQs in Universities [0.0]
This paper proposes a model that answers frequently asked questions (FAQs) in the Amharic language.
The proposed program employs tokenization, stop word removal, and stemming to analyze and categorize Amharic input sentences.
The model was integrated with Facebook Messenger and deployed on a Heroku server for 24-hour accessibility.
arXiv Detail & Related papers (2024-01-26T18:37:21Z)
- Turing's Test, a Beautiful Thought Experiment [0.0]
There has been a resurgence of claims and questions about the Turing test and its value.
If AI were quantum physics, by now several "Schrödinger's" cats would have been killed.
This paper presents a wealth of evidence, including new archival sources, and gives original answers to several open questions about Turing's 1950 paper.
arXiv Detail & Related papers (2023-12-18T19:38:26Z) - HPE:Answering Complex Questions over Text by Hybrid Question Parsing and
Execution [92.69684305578957]
We propose a framework of question parsing and execution on textual QA.
The proposed framework can be viewed as a top-down question parsing followed by a bottom-up answer backtracking.
Our experiments on MuSiQue, 2WikiQA, HotpotQA, and NQ show that the proposed parsing and hybrid execution framework outperforms existing approaches in supervised, few-shot, and zero-shot settings.
arXiv Detail & Related papers (2023-05-12T22:37:06Z)
- The Human-or-Machine Matter: Turing-Inspired Reflections on an Everyday Issue [4.309879785418976]
We sidestep the question of whether a machine can be labeled intelligent, or can be said to match human capabilities in a given context.
We first draw attention to the seemingly simpler question a person may ask themselves in an everyday interaction: "Am I interacting with a human or with a machine?"
arXiv Detail & Related papers (2023-05-07T15:41:11Z)
- Is ChatGPT a General-Purpose Natural Language Processing Task Solver? [113.22611481694825]
Large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot.
Recently, the debut of ChatGPT has drawn a great deal of attention from the natural language processing (NLP) community.
It is not yet known whether ChatGPT can serve as a generalist model that can perform many NLP tasks zero-shot.
arXiv Detail & Related papers (2023-02-08T09:44:51Z)
- Understanding Unnatural Questions Improves Reasoning over Text [54.235828149899625]
Complex question answering (CQA) over raw text is a challenging task.
Learning an effective CQA model requires large amounts of human-annotated data.
We address the challenge of learning a high-quality programmer (parser) by projecting natural human-generated questions into unnatural machine-generated questions.
arXiv Detail & Related papers (2020-10-19T10:22:16Z)
- TextHide: Tackling Data Privacy in Language Understanding Tasks [54.11691303032022]
TextHide mitigates privacy risks without slowing down training or reducing accuracy.
It requires all participants to add a simple encryption step to prevent an eavesdropping attacker from recovering private text data.
We evaluate TextHide on the GLUE benchmark, and our experiments show that TextHide can effectively defend attacks on shared gradients or representations.
arXiv Detail & Related papers (2020-10-12T22:22:15Z)
- Inquisitive Question Generation for High Level Text Comprehension [60.21497846332531]
We introduce INQUISITIVE, a dataset of 19K questions that are elicited while a person is reading through a document.
We show that readers engage in a series of pragmatic strategies to seek information.
We evaluate question generation models based on GPT-2 and show that our model is able to generate reasonable questions.
arXiv Detail & Related papers (2020-10-04T19:03:39Z)
- Asking Questions the Human Way: Scalable Question-Answer Generation from Text Corpus [23.676748207014903]
We propose Answer-Clue-Style-aware Question Generation (ACS-QG).
It aims at automatically generating high-quality and diverse question-answer pairs from unlabeled text corpus at scale.
We can generate 2.8 million quality-assured question-answer pairs from a million sentences found in Wikipedia.
arXiv Detail & Related papers (2020-01-27T05:27:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.