Waiting, Banning, and Embracing: An Empirical Analysis of Adapting
Policies for Generative AI in Higher Education
- URL: http://arxiv.org/abs/2305.18617v1
- Date: Thu, 25 May 2023 02:01:56 GMT
- Title: Waiting, Banning, and Embracing: An Empirical Analysis of Adapting
Policies for Generative AI in Higher Education
- Authors: Ping Xiao, Yuanyuan Chen, and Weining Bao
- Abstract summary: This study aims to understand how universities establish policies regarding the use of AI tools.
We analyzed the top 500 universities according to the 2022 QS World University Rankings.
- Score: 7.623773809868841
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative AI tools such as ChatGPT have recently gained significant
attention in higher education. This study aims to understand how universities
establish policies regarding the use of AI tools and explore the factors that
influence their decisions. Our study examines ChatGPT policies implemented at
universities around the world, including their existence, content, and issuance
dates. Specifically, we analyzed the top 500 universities according to the 2022
QS World University Rankings. Our findings indicate that there is significant
variation in university policies. Less than one-third of the universities
included in the study had implemented ChatGPT policies. Of the universities
with ChatGPT policies, approximately 67 percent embraced ChatGPT in teaching
and learning, more than twice the number of universities that banned it. The
majority of the universities that ban the use of ChatGPT in assessments allow
individual instructors to deviate from this restrictive policy. Our empirical
analysis identifies several factors that are significantly and positively
correlated with a university's likelihood of having a ChatGPT policy, including
the university's academic reputation score, being in an English-speaking
country, and general public attitudes toward ChatGPT. In addition, we found
that a university's likelihood of having a ban policy is positively associated
with its faculty-student ratio, citations, and the English-speaking country dummy,
while negatively associated with the number of peer universities within the
same country that have banned ChatGPT. We discuss the challenges faced by
universities based on our empirical findings.
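As a rough, hypothetical illustration of the kind of policy-adoption analysis the abstract describes (a university's likelihood of having a ChatGPT policy as a function of institutional traits), the Python sketch below fits a logistic regression on synthetic data. The variable names (reputation, english, sentiment, has_policy), the coefficients used to generate the data, and the model specification are illustrative assumptions, not the paper's actual data or estimation procedure.

# Minimal sketch, assuming a logistic-regression-style analysis; all data and
# variable names below are synthetic and illustrative, not from the paper.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # mirrors the top-500 QS sample size

df = pd.DataFrame({
    "reputation": rng.normal(60, 15, n),   # hypothetical academic reputation score
    "english": rng.integers(0, 2, n),      # English-speaking country dummy (0/1)
    "sentiment": rng.normal(0, 1, n),      # hypothetical public attitude toward ChatGPT
})

# Synthetic outcome: assume policy adoption becomes more likely as all three rise.
latent = -4.0 + 0.05 * df["reputation"] + 0.8 * df["english"] + 0.5 * df["sentiment"]
df["has_policy"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-latent)))

# Fit a logit model of policy existence on the three predictors.
model = smf.logit("has_policy ~ reputation + english + sentiment", data=df).fit()
print(model.summary())

On this synthetic data the estimated coefficients come out positive by construction; in the actual study, the signs and significance depend on the collected university data.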
Related papers
- From Prohibition to Adoption: How Hong Kong Universities Are Navigating ChatGPT in Academic Workflows [14.889393003546058]
This paper compares the earlier period when Hong Kong universities banned ChatGPT with the current period, in which it has become integrated into academic workflows.
Prompted by concerns about academic integrity and the ethics of the technology, institutions have adapted by moving toward AI literacy and responsibility policies.
arXiv Detail & Related papers (2024-10-02T16:04:33Z) - Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants [175.9723801486487]
We evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer assessment questions.
GPT-4 answers an average of 65.8% of questions correctly, and can even produce the correct answer across at least one prompting strategy for 85.1% of questions.
Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
arXiv Detail & Related papers (2024-08-07T12:11:49Z) - The use of ChatGPT in higher education: The advantages and disadvantages [0.0]
ChatGPT is an artificial intelligence technology developed by OpenAI.
This study examines the application of ChatGPT in higher education and its ability to comprehend and produce high-level instruction.
arXiv Detail & Related papers (2024-03-28T09:00:05Z) - Comprehensive Assessment of Toxicity in ChatGPT [49.71090497696024]
We evaluate the toxicity in ChatGPT by utilizing instruction-tuning datasets.
Prompts in creative writing tasks can be 2x more likely to elicit toxic responses.
Certain deliberately toxic prompts, designed in earlier studies, no longer yield harmful responses.
arXiv Detail & Related papers (2023-11-03T14:37:53Z) - Primacy Effect of ChatGPT [69.49920102917598]
We study the primacy effect of ChatGPT: its tendency to select labels at earlier positions as the answer.
We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions.
arXiv Detail & Related papers (2023-10-20T00:37:28Z) - Last Week with ChatGPT: A Weibo Study on Social Perspective Regarding ChatGPT for Education and Beyond [12.935870689618202]
This study uses ChatGPT, currently the most powerful and popular AI tool, as a representative example to analyze how the Chinese public perceives the potential of large language models (LLMs) for educational and general purposes.
The study also serves as the first effort to investigate the changes in public opinion as AI technologies become more advanced and intelligent.
arXiv Detail & Related papers (2023-06-07T10:45:02Z) - AI, write an essay for me: A large-scale comparison of human-written
versus ChatGPT-generated essays [66.36541161082856]
ChatGPT and similar generative AI models have attracted hundreds of millions of users.
This study compares human-written versus ChatGPT-generated argumentative student essays.
arXiv Detail & Related papers (2023-04-24T12:58:28Z) - To ChatGPT, or not to ChatGPT: That is the question! [78.407861566006]
This study provides a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection.
We have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains.
Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
arXiv Detail & Related papers (2023-04-04T03:04:28Z) - On the Educational Impact of ChatGPT: Is Artificial Intelligence Ready
to Obtain a University Degree? [0.0]
We evaluate the influence of ChatGPT on university education.
We discuss how computer science higher education should adapt to tools like ChatGPT.
arXiv Detail & Related papers (2023-03-20T14:27:37Z) - A Categorical Archive of ChatGPT Failures [47.64219291655723]
ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation.
It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries.
However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study.
arXiv Detail & Related papers (2023-02-06T04:21:59Z) - The political ideology of conversational AI: Converging evidence on
ChatGPT's pro-environmental, left-libertarian orientation [0.0]
OpenAI introduced ChatGPT, a state-of-the-art dialogue model that can converse with its human counterparts.
This paper focuses on one of democratic society's most important decision-making processes: political elections.
We uncover ChatGPT's pro-environmental, left-libertarian ideology.
arXiv Detail & Related papers (2023-01-05T07:13:13Z)