Towards Mitigating ChatGPT's Negative Impact on Education: Optimizing
Question Design through Bloom's Taxonomy
- URL: http://arxiv.org/abs/2304.08176v1
- Date: Fri, 31 Mar 2023 00:01:59 GMT
- Authors: Saber Elsayed
- Abstract summary: This paper introduces an evolutionary approach that aims to identify the best set of Bloom's taxonomy keywords to generate questions that these tools have low confidence in answering.
The effectiveness of this approach is evaluated through a case study that uses questions from a Data Structures and Representation course taught at the University of New South Wales in Canberra, Australia.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The popularity of generative text AI tools in answering questions has led to
concerns regarding their potential negative impact on students' academic
performance and the challenges that educators face in evaluating student
learning. To address these concerns, this paper introduces an evolutionary
approach that aims to identify the best set of Bloom's taxonomy keywords to
generate questions that these tools have low confidence in answering. The
effectiveness of this approach is evaluated through a case study that uses
questions from a Data Structures and Representation course taught at the
University of New South Wales in Canberra, Australia. The results demonstrate
that the optimization algorithm is able to find keywords from different
cognitive levels to create questions that ChatGPT has low confidence in
answering. This study is a step toward offering valuable insights for
educators seeking to create more effective questions that promote critical
thinking among students.
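The evolutionary approach described in the abstract can be sketched as a simple genetic algorithm over subsets of Bloom's taxonomy keywords. The following is a minimal illustration, not the paper's actual method: the `mock_confidence` fitness function is a hypothetical stand-in for querying ChatGPT's answer confidence, and the keyword list and parameters are illustrative only.

```python
import random

# Bloom's taxonomy verbs across cognitive levels (illustrative subset,
# ordered roughly from lower to higher cognitive levels).
BLOOM_KEYWORDS = [
    "define", "list", "summarize", "explain", "apply", "implement",
    "compare", "analyze", "justify", "critique", "design", "construct",
]

def mock_confidence(keywords):
    """Hypothetical stand-in for an AI tool's answer confidence.

    In this toy fitness, keywords later in the list (higher cognitive
    levels) yield lower confidence. The paper's actual fitness would
    come from querying ChatGPT on generated questions.
    """
    avg_rank = sum(BLOOM_KEYWORDS.index(k) for k in keywords) / len(keywords)
    return 1.0 - avg_rank / (len(BLOOM_KEYWORDS) - 1)

def evolve_keywords(pop_size=20, generations=40, subset_size=3, seed=0):
    """Evolve a keyword subset that minimizes the tool's confidence."""
    rng = random.Random(seed)
    pop = [rng.sample(BLOOM_KEYWORDS, subset_size) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mock_confidence)        # lower confidence is fitter
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            # Mutation: swap one keyword for a random unused keyword.
            idx = rng.randrange(subset_size)
            child[idx] = rng.choice(
                [k for k in BLOOM_KEYWORDS if k not in child]
            )
            children.append(child)
        pop = survivors + children
    return min(pop, key=mock_confidence)

best = evolve_keywords()
print(best, round(mock_confidence(best), 3))
```

Under this toy fitness, the search drifts toward higher-level verbs such as "design" and "construct", mirroring the paper's finding that the optimizer selects keywords from different cognitive levels to lower the tool's answering confidence.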
Related papers
- ChatGPT in Research and Education: Exploring Benefits and Threats [1.9466452723529557]
ChatGPT is a powerful language model developed by OpenAI.
It offers personalized feedback, enhances accessibility, enables interactive conversations, assists with lesson preparation and evaluation, and introduces new methods for teaching complex subjects.
ChatGPT also poses challenges to traditional education and research systems.
These challenges include the risk of cheating on online exams, the generation of human-like text that may compromise academic integrity, and difficulties in assessing the reliability of information generated by AI.
arXiv Detail & Related papers (2024-11-05T05:29:00Z)
- Research on the Application of Large Language Models in Automatic Question Generation: A Case Study of ChatGLM in the Context of High School Information Technology Curriculum [3.0753648264454547]
The model is guided to generate diverse questions, which are then comprehensively evaluated by domain experts.
The results indicate that ChatGLM outperforms human-generated questions in terms of clarity and teachers' willingness to use them.
arXiv Detail & Related papers (2024-08-21T11:38:32Z)
- Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants [175.9723801486487]
We evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer assessment questions.
GPT-4 answers an average of 65.8% of questions correctly, and can even produce the correct answer with at least one prompting strategy for 85.1% of questions.
Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
arXiv Detail & Related papers (2024-08-07T12:11:49Z)
- How to Engage Your Readers? Generating Guiding Questions to Promote Active Reading [60.19226384241482]
We introduce GuidingQ, a dataset of 10K in-text questions from textbooks and scientific articles.
We explore various approaches to generate such questions using language models.
We conduct a human study to understand the effect of such questions on reading comprehension.
arXiv Detail & Related papers (2024-07-19T13:42:56Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
- Covering Uncommon Ground: Gap-Focused Question Generation for Answer Assessment [75.59538732476346]
We focus on the problem of generating such gap-focused questions (GFQs) automatically.
We define the task, highlight key desired aspects of a good GFQ, and propose a model that satisfies these.
arXiv Detail & Related papers (2023-07-06T22:21:42Z)
- UKP-SQuARE: An Interactive Tool for Teaching Question Answering [61.93372227117229]
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course.
We introduce UKP-SQuARE as a platform for QA education.
Students can run, compare, and analyze various QA models from different perspectives.
arXiv Detail & Related papers (2023-05-31T11:29:04Z)
- Question Personalization in an Intelligent Tutoring System [5.644357169513361]
We show that generating versions of the questions suitable for students at different levels of subject proficiency improves student learning gains.
This insight demonstrates that the linguistic realization of questions in an ITS affects the learning outcomes for students.
arXiv Detail & Related papers (2022-05-25T15:23:51Z)
- Real-Time Cognitive Evaluation of Online Learners through Automatically Generated Questions [0.0]
The paper presents an approach to generate questions from a given video lecture automatically.
The generated questions are aimed to evaluate learners' lower-level cognitive abilities.
arXiv Detail & Related papers (2021-06-06T05:45:56Z)
- Neural Multi-Task Learning for Teacher Question Detection in Online Classrooms [50.19997675066203]
We build an end-to-end neural framework that automatically detects questions from teachers' audio recordings.
By incorporating multi-task learning techniques, we are able to strengthen the understanding of semantic relations among different types of questions.
arXiv Detail & Related papers (2020-05-16T02:17:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.