Harnessing LLMs in Curricular Design: Using GPT-4 to Support Authoring
of Learning Objectives
- URL: http://arxiv.org/abs/2306.17459v1
- Date: Fri, 30 Jun 2023 08:15:18 GMT
- Title: Harnessing LLMs in Curricular Design: Using GPT-4 to Support Authoring
of Learning Objectives
- Authors: Pragnya Sridhar and Aidan Doyle and Arav Agarwal and Christopher
Bogart and Jaromir Savelka and Majd Sakr
- Abstract summary: We evaluated the capability of a generative pre-trained transformer (GPT-4) to automatically generate high-quality learning objectives (LOs).
LOs articulate the knowledge and skills learners are intended to acquire by engaging with a course.
We analyzed whether the generated LOs follow certain best practices, such as beginning with action verbs from Bloom's taxonomy appropriate to the intended level of sophistication.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We evaluated the capability of a generative pre-trained transformer (GPT-4)
to automatically generate high-quality learning objectives (LOs) in the context
of a practically oriented university course on Artificial Intelligence.
Discussions of opportunities (e.g., content generation, explanation) and risks
(e.g., cheating) of this emerging technology in education have intensified, but
to date there has not been a study of the models' capabilities in supporting
the course design and authoring of LOs. LOs articulate the knowledge and skills
learners are intended to acquire by engaging with a course. To be effective,
LOs must focus on what students are intended to achieve, focus on specific
cognitive processes, and be measurable. Thus, authoring high-quality LOs is a
challenging and time-consuming (i.e., expensive) effort. We evaluated 127 LOs
that were automatically generated based on a carefully crafted prompt (detailed
guidelines on high-quality LOs authoring) submitted to GPT-4 for conceptual
modules and projects of an AI Practitioner course. We analyzed whether the
generated LOs follow certain best practices, such as beginning with action
verbs from Bloom's taxonomy appropriate to the intended level of sophistication. Our
analysis showed that the generated LOs are sensible, properly expressed (e.g.,
starting with an action verb), and that they largely operate at the appropriate
level of Bloom's taxonomy, respecting the different nature of the conceptual
modules (lower levels) and projects (higher levels). Our results can be
leveraged by instructors and curricular designers wishing to take advantage of
the state-of-the-art generative models to support their curricular and course
design efforts.
Related papers
- CourseAssist: Pedagogically Appropriate AI Tutor for Computer Science Education [1.052788652996288]
This poster introduces CourseAssist, a novel LLM-based tutoring system tailored for computer science education.
Unlike generic LLM systems, CourseAssist uses retrieval-augmented generation, user intent classification, and question decomposition to align AI responses with specific course materials and learning objectives.
arXiv Detail & Related papers (2024-05-01T20:43:06Z)
- ActiveAI: Introducing AI Literacy for Middle School Learners with Goal-based Scenario Learning [0.0]
The ActiveAI project addresses key challenges in AI education for grades 7-9 students.
The app incorporates a variety of learner inputs like sliders, steppers, and collectors to enhance understanding.
The project is currently in the implementation stage, leveraging the intelligent tutor design principles for app development.
arXiv Detail & Related papers (2023-08-21T11:43:43Z)
- Scaling Evidence-based Instructional Design Expertise through Large Language Models [0.0]
This paper explores leveraging Large Language Models (LLMs), specifically GPT-4, in the field of instructional design.
With a focus on scaling evidence-based instructional design expertise, our research aims to bridge the gap between theoretical educational studies and practical implementation.
We discuss the benefits and limitations of AI-driven content generation, emphasizing the necessity of human oversight in ensuring the quality of educational materials.
arXiv Detail & Related papers (2023-05-31T17:54:07Z)
- Do Large Language Models Know What They Don't Know? [74.65014158544011]
Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks.
Despite their vast knowledge, LLMs are still limited by the amount of information they can accommodate and comprehend.
This study aims to evaluate LLMs' self-knowledge by assessing their ability to identify unanswerable or unknowable questions.
arXiv Detail & Related papers (2023-05-29T15:30:13Z)
- Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-Intensive Tasks [90.11273439036455]
Large Language Models (LLMs) have shown promising performance in knowledge-intensive reasoning tasks.
We propose Knowledge-Augmented Reasoning Distillation (KARD), a novel method that fine-tunes small LMs to generate rationales from LLMs with augmented knowledge retrieved from an external knowledge base.
We empirically show that KARD significantly improves the performance of small T5 and GPT models on challenging knowledge-intensive reasoning datasets.
arXiv Detail & Related papers (2023-05-28T13:00:00Z)
- HowkGPT: Investigating the Detection of ChatGPT-generated University Student Homework through Context-Aware Perplexity Analysis [13.098764928946208]
HowkGPT is built upon a dataset of academic assignments and accompanying metadata.
It computes perplexity scores for student-authored and ChatGPT-generated responses.
It further refines its analysis by defining category-specific thresholds.
arXiv Detail & Related papers (2023-05-26T11:07:25Z)
- A Survey of Knowledge Enhanced Pre-trained Language Models [78.56931125512295]
We present a comprehensive review of Knowledge Enhanced Pre-trained Language Models (KE-PLMs).
For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG) and rule knowledge.
The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods.
arXiv Detail & Related papers (2022-11-11T04:29:02Z)
- Modelling Assessment Rubrics through Bayesian Networks: a Pragmatic Approach [59.77710485234197]
This paper presents an approach to deriving a learner model directly from an assessment rubric.
We illustrate how the approach can be applied to automate the human assessment of an activity developed for testing computational thinking skills.
arXiv Detail & Related papers (2022-09-07T10:09:12Z)
- AANG: Automating Auxiliary Learning [110.36191309793135]
We present an approach for automatically generating a suite of auxiliary objectives.
We achieve this by deconstructing existing objectives within a novel unified taxonomy, identifying connections between them, and generating new ones based on the uncovered structure.
This leads us to a principled and efficient algorithm for searching the space of generated objectives to find those most useful to a specified end-task.
arXiv Detail & Related papers (2022-05-27T16:32:28Z)
- Technology Readiness Levels for AI & ML [79.22051549519989]
Development of machine learning systems can be executed easily with modern tools, but the process is typically rushed and means-to-an-end.
Engineering systems follow well-defined processes and testing standards to streamline development for high-quality, reliable results.
We propose a proven systems engineering approach for machine learning development and deployment.
arXiv Detail & Related papers (2020-06-21T17:14:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.