The Revolution Has Arrived: What the Current State of Large Language Models in Education Implies for the Future
- URL: http://arxiv.org/abs/2507.02180v1
- Date: Wed, 02 Jul 2025 22:23:26 GMT
- Title: The Revolution Has Arrived: What the Current State of Large Language Models in Education Implies for the Future
- Authors: Russell Beale
- Abstract summary: We review the domains in which large language models have been used, and discuss a variety of use cases, their successes and failures. We consider the main design challenges facing LLMs if they are to become truly helpful and effective as educational systems. We make clear that the new interaction paradigms they bring are significant and argue that this approach will become so ubiquitous that it becomes the default way in which we interact with technologies.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large Language Models have only been widely available since 2022, and yet in less than three years they have had a significant impact on approaches to education and educational technology. Here we review the domains in which they have been used, and discuss a variety of use cases, their successes and failures. We then progress to discussing how this is changing the dynamic for learners and educators, consider the main design challenges facing LLMs if they are to become truly helpful and effective as educational systems, and reflect on the learning paradigms they support. We make clear that the new interaction paradigms they bring are significant, and argue that this approach will become so ubiquitous that it becomes the default way in which we interact with technologies, revolutionising what people expect from computer systems in general. This leads us to present some specific and significant considerations for the design of future educational technology that are likely to be needed to ensure acceptance by learners and users with changing expectations.
Related papers
- Large Language Models Will Change The Way Children Think About Technology And Impact Every Interaction Paradigm [1.2691047660244332]
We review the effects of Large Language Models on education so far, and make the case that these effects are minor compared to the changes now underway. We present a small scenario and self-ethnographic study demonstrating the effects of these changes, and define five significant considerations that interactive systems designers will have to accommodate in the future.
arXiv Detail & Related papers (2025-04-18T13:01:27Z) - "Would You Want an AI Tutor?" Understanding Stakeholder Perceptions of LLM-based Systems in the Classroom [7.915714424668589]
We argue that understanding the perceptions of those directly or indirectly impacted by Large Language Models in the classroom is essential for ensuring responsible use of AI in this critical domain. We propose the Contextualized Perceptions for the Adoption of LLMs in Education (Co-PALE) framework, which can be used to systematically elicit perceptions.
arXiv Detail & Related papers (2025-02-02T16:50:08Z) - From MOOC to MAIC: Reshaping Online Teaching and Learning through LLM-driven Agents [78.15899922698631]
MAIC (Massive AI-empowered Course) is a new form of online education that leverages LLM-driven multi-agent systems to construct an AI-augmented classroom.
We conduct preliminary experiments at Tsinghua University, one of China's leading universities.
arXiv Detail & Related papers (2024-09-05T13:22:51Z) - A review on the use of large language models as virtual tutors [5.014059576916173]
Large Language Models (LLMs) have generated enormous interest across several fields and industrial sectors.
This review seeks to provide a comprehensive overview of those solutions designed specifically to generate and evaluate educational materials.
As expected, the most common role of these systems is as virtual tutors for automatic question generation.
arXiv Detail & Related papers (2024-05-20T12:33:42Z) - Bringing Generative AI to Adaptive Learning in Education [58.690250000579496]
We shed light on the intersectional studies of generative AI and adaptive learning.
We argue that this union will contribute significantly to the development of the next-stage learning format in education.
arXiv Detail & Related papers (2024-02-02T23:54:51Z) - Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges [60.62904929065257]
Large language models (LLMs) offer a possible way of resolving this issue by comprehending individual requests.
This paper reviews the recently emerged LLM research related to educational capabilities, including mathematics, writing, programming, reasoning, and knowledge-based question answering.
arXiv Detail & Related papers (2023-12-27T14:37:32Z) - A Survey of Large Language Models [81.06947636926638]
Language modeling has been widely studied for language understanding and generation in the past two decades.
Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora.
To distinguish models by parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.
arXiv Detail & Related papers (2023-03-31T17:28:46Z) - Situating Recommender Systems in Practice: Towards Inductive Learning and Incremental Updates [9.47821118140383]
We formalize both concepts and contextualize recommender systems work from the last six years.
We then discuss why and how future work should move towards inductive learning and incremental updates for recommendation model design and evaluation.
arXiv Detail & Related papers (2022-11-11T17:29:35Z) - INTERN: A New Learning Paradigm Towards General Vision [117.3343347061931]
We develop a new learning paradigm named INTERN.
By learning with supervisory signals from multiple sources in multiple stages, the model being trained will develop strong generalizability.
In most cases, our models, adapted with only 10% of the training data in the target domain, outperform the counterparts trained with the full set of data.
arXiv Detail & Related papers (2021-11-16T18:42:50Z) - Personalized Education in the AI Era: What to Expect Next? [76.37000521334585]
The objective of personalized learning is to design an effective knowledge acquisition track that matches the learner's strengths and bypasses her weaknesses to meet her desired goal.
In recent years, the boost of artificial intelligence (AI) and machine learning (ML) has unfolded novel perspectives to enhance personalized education.
arXiv Detail & Related papers (2021-01-19T12:23:32Z) - Computer-Aided Personalized Education [15.811740322935476]
The number of US students taking introductory courses has grown three-fold in the past decade.
Massive open online courses (MOOCs) have been promoted as a way to ease this strain.
Personalized education relying on computational tools can address this challenge.
arXiv Detail & Related papers (2020-07-07T18:00:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.