Can We Trust AI-Generated Educational Content? Comparative Analysis of
Human and AI-Generated Learning Resources
- URL: http://arxiv.org/abs/2306.10509v2
- Date: Mon, 3 Jul 2023 06:59:27 GMT
- Title: Can We Trust AI-Generated Educational Content? Comparative Analysis of
Human and AI-Generated Learning Resources
- Authors: Paul Denny and Hassan Khosravi and Arto Hellas and Juho Leinonen and
Sami Sarsa
- Abstract summary: Large language models (LLMs) appear to offer a promising solution to the rapid creation of learning materials at scale.
We compare the quality of resources generated by an LLM with those created by students as part of a learnersourcing activity.
Our results show that the quality of AI-generated resources, as perceived by students, is equivalent to the quality of resources generated by their peers.
- Score: 4.528957284486784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As an increasing number of students move to online learning platforms that
deliver personalized learning experiences, there is a great need for the
production of high-quality educational content. Large language models (LLMs)
appear to offer a promising solution to the rapid creation of learning
materials at scale, reducing the burden on instructors. In this study, we
investigated the potential for LLMs to produce learning resources in an
introductory programming context, by comparing the quality of the resources
generated by an LLM with those created by students as part of a learnersourcing
activity. Using a blind evaluation, students rated the correctness and
helpfulness of resources generated by AI and their peers, after both were
initially provided with identical exemplars. Our results show that the quality
of AI-generated resources, as perceived by students, is equivalent to the
quality of resources generated by their peers. This suggests that AI-generated
resources may serve as viable supplementary material in certain contexts.
Resources generated by LLMs tend to closely mirror the given exemplars, whereas
student-generated resources exhibit greater variety in terms of content length
and specific syntax features used. The study highlights the need for further
research exploring different types of learning resources and a broader range of
subject areas, and understanding the long-term impact of AI-generated resources
on learning outcomes.
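The abstract reports that students perceived AI-generated and peer-generated resources as being of equivalent quality, but this listing does not describe the statistical procedure behind that claim. The sketch below is purely illustrative: it assumes hypothetical 1-5 blind-evaluation ratings for the two resource types and compares them with SciPy's Mann-Whitney U test; it is not the analysis used in the paper.

```python
# Illustrative only: the paper's actual analysis is not described in this listing.
# Hypothetical 1-5 "helpfulness" ratings from a blind evaluation.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
ai_ratings = rng.integers(3, 6, size=120)    # ratings of AI-generated resources (assumed data)
peer_ratings = rng.integers(3, 6, size=120)  # ratings of student-generated resources (assumed data)

# Compare the two rating distributions; a large p-value is consistent with
# (but does not by itself prove) the "no perceived difference" reading of the abstract.
stat, p_value = mannwhitneyu(ai_ratings, peer_ratings, alternative="two-sided")
print(f"median AI={np.median(ai_ratings)}, median peer={np.median(peer_ratings)}, "
      f"Mann-Whitney U={stat:.1f}, p={p_value:.3f}")
```

Note that failing to detect a difference is weaker than formally demonstrating equivalence; the sketch only shows the kind of comparison such a blind evaluation yields.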
Related papers
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models [71.25225058845324]
Large Language Models (LLMs) have demonstrated revolutionary abilities in language understanding and generation.
Retrieval-Augmented Generation (RAG) can offer reliable and up-to-date external knowledge.
RA-LLMs have emerged to harness external and authoritative knowledge bases, rather than relying on the model's internal knowledge; a minimal retrieve-then-generate sketch appears after this list.
arXiv Detail & Related papers (2024-05-10T02:48:45Z)
- Integrating A.I. in Higher Education: Protocol for a Pilot Study with 'SAMCares: An Adaptive Learning Hub' [0.6990493129893112]
This research aims to introduce an innovative study buddy called 'SAMCares'.
The system leverages a Large Language Model (LLM) and Retriever-Augmented Generation (RAG) to offer real-time, context-aware, and adaptive educational support.
arXiv Detail & Related papers (2024-05-01T05:39:07Z)
- Cross-Data Knowledge Graph Construction for LLM-enabled Educational Question-Answering System: A Case Study at HCMUT [2.8000537365271367]
Large language models (LLMs) have emerged as a vibrant research topic.
LLMs face challenges in remembering events, incorporating new information, and addressing domain-specific issues or hallucinations.
This article proposes a method for automatically constructing a Knowledge Graph from multiple data sources.
arXiv Detail & Related papers (2024-04-14T16:34:31Z)
- A Knowledge Plug-and-Play Test Bed for Open-domain Dialogue Generation [51.31429493814664]
We present a benchmark named multi-source Wizard of Wikipedia for evaluating multi-source dialogue knowledge selection and response generation.
We propose a new challenge, dialogue knowledge plug-and-play, which aims to test an already trained dialogue model on using new support knowledge from previously unseen sources.
arXiv Detail & Related papers (2024-03-06T06:54:02Z)
- Democratizing Reasoning Ability: Tailored Learning from Large Language Model [97.4921006089966]
We propose a tailored learning approach to distill such reasoning ability to smaller LMs.
We exploit the potential of LLM as a reasoning teacher by building an interactive multi-round learning paradigm.
To exploit the reasoning potential of the smaller LM, we propose self-reflection learning to motivate the student to learn from self-made mistakes.
arXiv Detail & Related papers (2023-10-20T07:50:10Z)
- ExpeL: LLM Agents Are Experiential Learners [60.54312035818746]
We introduce the Experiential Learning (ExpeL) agent to allow learning from agent experiences without requiring parametric updates.
Our agent autonomously gathers experiences and extracts knowledge using natural language from a collection of training tasks.
At inference, the agent recalls its extracted insights and past experiences to make informed decisions.
arXiv Detail & Related papers (2023-08-20T03:03:34Z)
- Prototyping the use of Large Language Models (LLMs) for adult learning content creation at scale [0.6628807224384127]
This paper presents an investigation into the use of Large Language Models (LLMs) in asynchronous course creation.
We developed a course prototype leveraging an LLM, implementing a robust human-in-the-loop process.
Initial findings indicate that taking this approach can indeed facilitate faster content creation without compromising on accuracy or clarity.
arXiv Detail & Related papers (2023-06-02T10:58:05Z)
- Multi-source Education Knowledge Graph Construction and Fusion for College Curricula [3.981835878719391]
We propose an automated framework for knowledge extraction, visual KG construction, and graph fusion for the major of Electronic Information.
Our objective is to enhance the learning efficiency of students and to explore new educational paradigms enabled by AI.
arXiv Detail & Related papers (2023-05-08T09:25:41Z)
- Learning To Rank Resources with GNN [7.337247167823921]
We propose a graph neural network (GNN) based approach to learning-to-rank that is capable of modeling resource-query and resource-resource relationships.
Our method outperforms the state-of-the-art by 6.4% to 42% on various performance metrics.
arXiv Detail & Related papers (2023-04-17T02:01:45Z)
- A Transfer Learning Pipeline for Educational Resource Discovery with Application in Leading Paragraph Generation [71.92338855383238]
We propose a pipeline that automates web resource discovery for novel domains.
The pipeline achieves F1 scores of 0.94 and 0.82 when evaluated on two similar but novel target domains.
This is the first study that considers various web resources for survey generation.
arXiv Detail & Related papers (2022-01-07T03:35:40Z)
- Personalized Education in the AI Era: What to Expect Next? [76.37000521334585]
The objective of personalized learning is to design an effective knowledge acquisition track that matches the learner's strengths and bypasses her weaknesses to meet her desired goal.
In recent years, the boost of artificial intelligence (AI) and machine learning (ML) has unfolded novel perspectives to enhance personalized education.
arXiv Detail & Related papers (2021-01-19T12:23:32Z)
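To make the retrieval-augmented pattern mentioned in the RAG survey entry above concrete, here is a minimal, framework-free sketch of retrieve-then-generate. The bag-of-words scorer, the toy corpus, and the prompt template are all assumptions for illustration; real RA-LLM systems typically use dense retrievers, vector stores, and an actual LLM call where the stub comment indicates.

```python
# Minimal retrieve-then-generate sketch (illustrative; not any specific system's API).
from collections import Counter

def score(query: str, doc: str) -> float:
    """Toy relevance score: bag-of-words overlap between query and document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum(min(q[w], d[w]) for w in q)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring passages for the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Splice retrieved passages into the prompt an LLM would receive."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

# Hypothetical knowledge base of course snippets.
corpus = [
    "A for loop repeats a block of code a fixed number of times.",
    "A while loop repeats while its condition remains true.",
    "Functions group reusable code behind a name.",
]
query = "How does a while loop work?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)  # In a real RA-LLM, this prompt would be passed to the language model.
```

The point the survey makes is independent of these details: grounding generation in retrieved, authoritative sources reduces reliance on the model's internal, possibly outdated knowledge.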