ChatGPT as the Transportation Equity Information Source for Scientific
Writing
- URL: http://arxiv.org/abs/2303.11158v1
- Date: Fri, 10 Mar 2023 16:21:54 GMT
- Title: ChatGPT as the Transportation Equity Information Source for Scientific
Writing
- Authors: Boniphace Kutela, Shoujia Li, Subasish Das, and Jinli Liu
- Abstract summary: This study explored the content and usefulness of ChatGPT-generated information related to transportation equity.
It utilized 152 papers retrieved through the Web of Science (WoS) repository.
The results indicate a weak similarity between ChatGPT-generated and human-written abstracts.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transportation equity is an interdisciplinary agenda that requires both
transportation and social inputs. Traditionally, transportation equity
information has been sourced from public libraries, conferences, television,
and social media, among other channels. Artificial intelligence (AI) tools,
including advanced language models such as ChatGPT, are becoming popular
information sources.
However, their credibility has not been well explored. This study explored the
content and usefulness of ChatGPT-generated information related to
transportation equity. It utilized 152 papers retrieved through the Web of
Science (WoS) repository. The prompt was crafted for ChatGPT to provide an
abstract given the title of the paper. The ChatGPT-based abstracts were then
compared to human-written abstracts using statistical tools and unsupervised
text mining. The results indicate a weak similarity between ChatGPT-generated
and human-written abstracts. On average, the human-written and ChatGPT-generated
abstracts were about 58% similar, with a maximum and minimum of 97% and 1.4%,
respectively. Keywords from abstracts scoring above the mean similarity were
more likely to match, whereas those from abstracts scoring below the mean were
less likely to. Themes with high similarity scores include access, public
transit, and policy, among others. Further, clear differences in the clustering
patterns of high- and low-similarity abstracts were observed. In contrast, the
findings from collocated keywords were inconclusive. The study findings suggest
that ChatGPT has the potential to be a source of transportation equity
information. However, considerable caution is currently needed before using
material generated by ChatGPT.
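The summary above does not state which similarity metric the study applied when comparing human-written and ChatGPT-generated abstracts. As a rough, purely illustrative sketch of this kind of abstract-to-abstract comparison (the function and metric below are assumptions, not the paper's actual method), Python's standard-library difflib can produce a 0-1 similarity ratio:

```python
from difflib import SequenceMatcher

def abstract_similarity(human: str, generated: str) -> float:
    """Return a 0-1 similarity ratio between two abstracts.

    NOTE: the study's exact metric is not specified in this summary;
    difflib's sequence-matching ratio is used here only as a stand-in.
    """
    # Normalize case and whitespace so the comparison focuses on content.
    a = " ".join(human.lower().split())
    b = " ".join(generated.lower().split())
    return SequenceMatcher(None, a, b).ratio()

# Example: two short abstract fragments on transportation equity.
human = "Transportation equity examines fair access to public transit."
gpt = "This paper studies equitable access to public transit systems."
score = abstract_similarity(human, gpt)
print(f"similarity: {score:.2f}")
```

Averaging such scores over all 152 title/abstract pairs would yield the kind of mean similarity figure reported above, though the study may well have used a different measure (e.g. TF-IDF cosine similarity).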
Related papers
- Primacy Effect of ChatGPT [69.49920102917598]
We study the primacy effect of ChatGPT: the tendency to select labels at earlier positions as the answer.
We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions.
arXiv Detail & Related papers (2023-10-20T00:37:28Z)
- Chatbot-supported Thesis Writing: An Autoethnographic Report [0.0]
ChatGPT might be applied to formats that require learners to generate text, such as bachelor theses or student research papers.
ChatGPT is to be valued as a beneficial tool in thesis writing.
However, writing a conclusive thesis still requires the learner's meaningful engagement.
arXiv Detail & Related papers (2023-10-14T09:09:26Z)
- Playing with words: Comparing the vocabulary and lexical diversity of ChatGPT and humans [3.297182592932918]
Generative language models such as ChatGPT have triggered a revolution that can transform how text is generated.
Will the use of tools such as ChatGPT increase or reduce the vocabulary used or the lexical richness?
This has implications for words, as those not included in AI-generated content will tend to be less and less popular and may eventually be lost.
arXiv Detail & Related papers (2023-08-14T21:19:44Z)
- What has ChatGPT read? The origins of archaeological citations used by a generative artificial intelligence application [0.0]
This paper tested what archaeological literature appears to have been included in ChatGPT's training phase.
While ChatGPT offered seemingly pertinent references, a large percentage proved to be fictitious.
It can be shown that all references provided by ChatGPT that were found to be genuine have also been cited on Wikipedia pages.
arXiv Detail & Related papers (2023-08-07T05:06:35Z)
- Is ChatGPT Involved in Texts? Measure the Polish Ratio to Detect ChatGPT-Generated Text [48.36706154871577]
We introduce a novel dataset termed HPPT (ChatGPT-polished academic abstracts).
It diverges from extant corpora by comprising pairs of human-written and ChatGPT-polished abstracts instead of purely ChatGPT-generated texts.
We also propose the "Polish Ratio" method, an innovative measure of the degree of modification made by ChatGPT compared to the original human-written text.
arXiv Detail & Related papers (2023-07-21T06:38:37Z)
- CHEAT: A Large-scale Dataset for Detecting ChatGPT-writtEn AbsTracts [10.034193809833372]
Malicious users could synthesize dummy academic content through ChatGPT.
We present a large-scale CHatGPT-writtEn AbsTract dataset (CHEAT) to support the development of detection algorithms.
arXiv Detail & Related papers (2023-04-24T11:19:33Z)
- ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning [70.57126720079971]
Large language models (LLMs) have emerged as among the most important breakthroughs in natural language processing (NLP).
This paper evaluates ChatGPT on 7 different tasks, covering 37 diverse languages with high, medium, low, and extremely low resources.
Compared to previous models, our extensive experimental results show that ChatGPT performs worse across a range of NLP tasks and languages.
arXiv Detail & Related papers (2023-04-12T05:08:52Z)
- To ChatGPT, or not to ChatGPT: That is the question! [78.407861566006]
This study provides a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection.
We have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains.
Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
arXiv Detail & Related papers (2023-04-04T03:04:28Z)
- Comparing Abstractive Summaries Generated by ChatGPT to Real Summaries Through Blinded Reviewers and Text Classification Algorithms [0.8339831319589133]
ChatGPT, developed by OpenAI, is a recent addition to the family of language models.
We evaluate the performance of ChatGPT on Abstractive Summarization by the means of automated metrics and blinded human reviewers.
arXiv Detail & Related papers (2023-03-30T18:28:33Z)
- Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT [103.57103957631067]
ChatGPT has attracted great attention, as it can generate fluent and high-quality responses to human inquiries.
We evaluate ChatGPT's understanding ability by evaluating it on the most popular GLUE benchmark, and comparing it with 4 representative fine-tuned BERT-style models.
We find that: 1) ChatGPT falls short in handling paraphrase and similarity tasks; 2) ChatGPT outperforms all BERT models on inference tasks by a large margin; 3) ChatGPT achieves comparable performance compared with BERT on sentiment analysis and question answering tasks.
arXiv Detail & Related papers (2023-02-19T12:29:33Z)
- Exploring the Limits of ChatGPT for Query or Aspect-based Text Summarization [28.104696513516117]
Large language models (LLMs) like GPT3 and ChatGPT have recently created significant interest in using these models for text summarization tasks.
Recent studies (Goyal et al., 2022; Zhang et al., 2023) have shown that LLM-generated news summaries are already on par with those written by humans.
Our experiments reveal that ChatGPT's performance is comparable to traditional fine-tuning methods in terms of Rouge scores.
arXiv Detail & Related papers (2023-02-16T04:41:30Z)
- A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity [79.12003701981092]
We carry out an extensive technical evaluation of ChatGPT using 23 data sets covering 8 different common NLP application tasks.
We evaluate the multitask, multilingual and multi-modal aspects of ChatGPT based on these data sets and a newly designed multimodal dataset.
ChatGPT is 63.41% accurate on average in 10 different reasoning categories under logical reasoning, non-textual reasoning, and commonsense reasoning.
arXiv Detail & Related papers (2023-02-08T12:35:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.