Harnessing LLMs in Curricular Design: Using GPT-4 to Support Authoring
of Learning Objectives
- URL: http://arxiv.org/abs/2306.17459v1
- Date: Fri, 30 Jun 2023 08:15:18 GMT
- Authors: Pragnya Sridhar and Aidan Doyle and Arav Agarwal and Christopher
Bogart and Jaromir Savelka and Majd Sakr
- Abstract summary: We evaluated the capability of a generative pre-trained transformer (GPT-4) to automatically generate high-quality learning objectives (LOs)
LOs articulate the knowledge and skills learners are intended to acquire by engaging with a course.
We analyzed whether the generated LOs follow certain best practices, such as beginning with action verbs from Bloom's taxonomy at the intended level of sophistication.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We evaluated the capability of a generative pre-trained transformer (GPT-4)
to automatically generate high-quality learning objectives (LOs) in the context
of a practically oriented university course on Artificial Intelligence.
Discussions of opportunities (e.g., content generation, explanation) and risks
(e.g., cheating) of this emerging technology in education have intensified, but
to date there has not been a study of the models' capabilities in supporting
the course design and authoring of LOs. LOs articulate the knowledge and skills
learners are intended to acquire by engaging with a course. To be effective,
LOs must focus on what students are intended to achieve, focus on specific
cognitive processes, and be measurable. Thus, authoring high-quality LOs is a
challenging and time-consuming (i.e., expensive) effort. We evaluated 127 LOs
that were automatically generated based on a carefully crafted prompt (detailed
guidelines on high-quality LOs authoring) submitted to GPT-4 for conceptual
modules and projects of an AI Practitioner course. We analyzed whether the
generated LOs follow certain best practices, such as beginning with action
verbs from Bloom's taxonomy at the intended level of sophistication. Our
analysis showed that the generated LOs are sensible, properly expressed (e.g.,
starting with an action verb), and that they largely operate at the appropriate
level of Bloom's taxonomy, respecting the different nature of the conceptual
modules (lower levels) and projects (higher levels). Our results can be
leveraged by instructors and curricular designers wishing to take advantage of
the state-of-the-art generative models to support their curricular and course
design efforts.
Related papers
- Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset [94.13848736705575]
We introduce Facial Identity Unlearning Benchmark (FIUBench), a novel VLM unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms.
We apply a two-stage evaluation pipeline that is designed to precisely control the sources of information and their exposure levels.
Through the evaluation of four baseline VLM unlearning algorithms within FIUBench, we find that all methods remain limited in their unlearning performance.
arXiv Detail & Related papers (2024-11-05T23:26:10Z)
- A Novel Psychometrics-Based Approach to Developing Professional Competency Benchmark for Large Language Models [0.0]
We propose a comprehensive approach to benchmark development based on rigorous psychometric principles.
We make the first attempt to illustrate this approach by creating a new benchmark in the field of pedagogy and education.
We construct a novel benchmark guided by Bloom's taxonomy and rigorously designed by a consortium of education experts trained in test development.
arXiv Detail & Related papers (2024-10-29T19:32:43Z)
- GIVE: Structured Reasoning with Knowledge Graph Inspired Veracity Extrapolation [108.2008975785364]
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning framework that integrates the parametric and non-parametric memories.
Our method facilitates a more logical and step-wise reasoning approach akin to experts' problem-solving, rather than gold answer retrieval.
arXiv Detail & Related papers (2024-10-11T03:05:06Z)
- Evaluating Human Alignment and Model Faithfulness of LLM Rationale [66.75309523854476]
We study how well large language models (LLMs) explain their generations through rationales.
We show that prompting-based methods are less "faithful" than attribution-based explanations.
arXiv Detail & Related papers (2024-06-28T20:06:30Z)
- Unleashing the potential of prompt engineering in Large Language Models: a comprehensive review [1.6006550105523192]
The review explores the pivotal role of prompt engineering in unleashing the capabilities of Large Language Models (LLMs).
It examines both foundational and advanced methodologies of prompt engineering, including techniques such as self-consistency, chain-of-thought, and generated knowledge.
The review also reflects the essential role of prompt engineering in advancing AI capabilities, providing a structured framework for future research and application.
arXiv Detail & Related papers (2023-10-23T09:15:18Z)
- ActiveAI: Introducing AI Literacy for Middle School Learners with Goal-based Scenario Learning [0.0]
The ActiveAI project addresses key challenges in AI education for grades 7-9 students.
The app incorporates a variety of learner inputs like sliders, steppers, and collectors to enhance understanding.
The project is currently in the implementation stage, leveraging the intelligent tutor design principles for app development.
arXiv Detail & Related papers (2023-08-21T11:43:43Z)
- Scaling Evidence-based Instructional Design Expertise through Large Language Models [0.0]
This paper explores leveraging Large Language Models (LLMs), specifically GPT-4, in the field of instructional design.
With a focus on scaling evidence-based instructional design expertise, our research aims to bridge the gap between theoretical educational studies and practical implementation.
We discuss the benefits and limitations of AI-driven content generation, emphasizing the necessity of human oversight in ensuring the quality of educational materials.
arXiv Detail & Related papers (2023-05-31T17:54:07Z)
- Knowledge-Augmented Reasoning Distillation for Small Language Models in Knowledge-Intensive Tasks [90.11273439036455]
Large Language Models (LLMs) have shown promising performance in knowledge-intensive reasoning tasks.
We propose Knowledge-Augmented Reasoning Distillation (KARD), a novel method that fine-tunes small LMs to generate rationales from LLMs with augmented knowledge retrieved from an external knowledge base.
We empirically show that KARD significantly improves the performance of small T5 and GPT models on the challenging knowledge-intensive reasoning datasets.
arXiv Detail & Related papers (2023-05-28T13:00:00Z)
- A Survey of Knowledge Enhanced Pre-trained Language Models [78.56931125512295]
We present a comprehensive review of Knowledge Enhanced Pre-trained Language Models (KE-PLMs).
For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG) and rule knowledge.
The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods.
arXiv Detail & Related papers (2022-11-11T04:29:02Z)
- AANG: Automating Auxiliary Learning [110.36191309793135]
We present an approach for automatically generating a suite of auxiliary objectives.
We achieve this by deconstructing existing objectives within a novel unified taxonomy, identifying connections between them, and generating new ones based on the uncovered structure.
This leads us to a principled and efficient algorithm for searching the space of generated objectives to find those most useful to a specified end-task.
arXiv Detail & Related papers (2022-05-27T16:32:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.