Large Language Models Will Change The Way Children Think About Technology And Impact Every Interaction Paradigm
- URL: http://arxiv.org/abs/2504.13667v1
- Date: Fri, 18 Apr 2025 13:01:27 GMT
- Title: Large Language Models Will Change The Way Children Think About Technology And Impact Every Interaction Paradigm
- Authors: Russell Beale
- Abstract summary: We review the effects of Large Language Models on education so far, and make the case that these effects are minor compared to the upcoming changes that are occurring. We present a small scenario and self-ethnographic study demonstrating the effects of these changes, and define five significant considerations that interactive systems designers will have to accommodate in the future.
- Score: 1.2691047660244332
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents a hopeful perspective on the potentially dramatic impacts of Large Language Models on how children learn and how they will expect to interact with technology. We review the effects of LLMs on education so far, and make the case that these effects are minor compared to the upcoming changes that are occurring. We present a small scenario and self-ethnographic study demonstrating the effects of these changes, and define five significant considerations that interactive systems designers will have to accommodate in the future.
Related papers
- The Revolution Has Arrived: What the Current State of Large Language Models in Education Implies for the Future [1.2691047660244332]
We review the domains in which large language models have been used, and discuss a variety of use cases, their successes and failures. We consider the main design challenges facing LLMs if they are to become truly helpful and effective as educational systems. We make clear that the new interaction paradigms they bring are significant and argue that this approach will become so ubiquitous it will become the default way in which we interact with technologies.
arXiv Detail & Related papers (2025-07-02T22:23:26Z)
- Don't Get Too Excited -- Eliciting Emotions in LLMs [1.8399318639816038]
This paper investigates the challenges of affect control in large language models (LLMs). We evaluate state-of-the-art open-weight LLMs to assess their affective expressive range. We quantify the models' capacity to express a wide spectrum of emotions and how they fluctuate during interactions.
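A minimal sketch of how such an affective-range probe might be set up, assuming a placeholder `query_model` call and a hand-rolled keyword lexicon rather than the authors' actual evaluation pipeline:

```python
# Hypothetical sketch: probe an open-weight LLM's affective expressive range by
# asking it to answer the same prompt in different target emotions, then scoring
# each reply with a naive keyword lexicon. `query_model` is a placeholder for
# whatever inference API is available; the paper's measures are more elaborate.

AFFECT_LEXICON = {
    "joy": {"glad", "wonderful", "delighted", "excited", "love"},
    "anger": {"furious", "unacceptable", "outraged", "annoyed"},
    "sadness": {"sorry", "unfortunately", "miss", "regret", "sad"},
}

def affect_score(reply: str, emotion: str) -> float:
    """Fraction of lexicon words for `emotion` that appear in the reply."""
    words = set(reply.lower().split())
    lexicon = AFFECT_LEXICON[emotion]
    return len(words & lexicon) / len(lexicon)

def query_model(prompt: str) -> str:
    # Placeholder: swap in a real call to a local open-weight model.
    return "I'm delighted to help, this is wonderful news!"

def expressive_range(question: str) -> dict[str, float]:
    """Ask for the same answer in each target emotion and score the result."""
    scores = {}
    for emotion in AFFECT_LEXICON:
        prompt = f"Answer in a strongly {emotion} tone: {question}"
        scores[emotion] = affect_score(query_model(prompt), emotion)
    return scores

if __name__ == "__main__":
    print(expressive_range("The library will close early today."))
```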
arXiv Detail & Related papers (2025-03-04T10:06:41Z)
- DASKT: A Dynamic Affect Simulation Method for Knowledge Tracing [51.665582274736785]
Knowledge Tracing (KT) predicts students' future performance from their historical learning interactions, and understanding students' affective states can enhance the effectiveness of KT. We propose Dynamic Affect Simulation Knowledge Tracing (DASKT) to explore the impact of various student affective states on their knowledge states. Our research highlights a promising avenue for future studies, focusing on achieving high interpretability and accuracy.
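As a rough illustration of the general idea (not the DASKT architecture itself), a knowledge-tracing update can be extended with an affect signal; the weights and feature names below are assumptions for the sketch:

```python
import math

# Toy sketch of affect-aware knowledge tracing: predict the probability that a
# student answers the next item correctly from (a) a running skill estimate and
# (b) a simulated affect state in [-1, 1]. Illustrative only; not the DASKT model.

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def predict_correct(skill: float, affect: float,
                    w_skill: float = 2.0, w_affect: float = 0.8,
                    bias: float = -0.5) -> float:
    """P(correct) rises with estimated skill and with positive affect."""
    return sigmoid(w_skill * skill + w_affect * affect + bias)

def update_skill(skill: float, correct: bool, lr: float = 0.1) -> float:
    """Nudge the skill estimate toward the observed outcome."""
    return skill + lr * ((1.0 if correct else 0.0) - skill)

# One simulated practice sequence: (was the answer correct, affect state).
history = [(True, 0.4), (True, 0.6), (False, -0.7), (True, 0.2)]
skill = 0.3
for correct, affect in history:
    print(f"P(next correct) = {predict_correct(skill, affect):.2f}")
    skill = update_skill(skill, correct)
```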
arXiv Detail & Related papers (2025-01-18T10:02:10Z)
- The Impact of Large Language Models in Academia: from Writing to Speaking [42.1505375956748]
We examined and compared the words used in writing and speaking based on more than 30,000 papers and 1,000 presentations from machine learning conferences.
Our results show that LLM-style words such as "significant" have been used more frequently in abstracts and oral presentations.
The impact on speaking is beginning to emerge and is likely to grow in the future, calling attention to the implicit influence and ripple effect of LLMs on human society.
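A simplified sketch of that kind of measurement: count the relative frequency of "LLM-style" words in a corpus of abstracts versus a corpus of talk transcripts. The word list and input texts here are illustrative placeholders, not the corpus used in the paper.

```python
import re
from collections import Counter

# Illustrative sketch: compare how often "LLM-style" words appear in written
# abstracts versus spoken presentation transcripts.

LLM_STYLE_WORDS = {"significant", "delve", "crucial", "pivotal", "notably"}

def rate_per_1000(texts: list[str]) -> float:
    """Occurrences of LLM-style words per 1,000 tokens across all texts."""
    tokens = [t for text in texts for t in re.findall(r"[a-z']+", text.lower())]
    counts = Counter(tokens)
    hits = sum(counts[w] for w in LLM_STYLE_WORDS)
    return 1000.0 * hits / max(len(tokens), 1)

abstracts = ["We report a significant and pivotal improvement ..."]
transcripts = ["so yeah the results were pretty good overall ..."]

print(f"writing:  {rate_per_1000(abstracts):.2f} per 1k tokens")
print(f"speaking: {rate_per_1000(transcripts):.2f} per 1k tokens")
```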
arXiv Detail & Related papers (2024-09-20T17:54:16Z)
- Modulating Language Model Experiences through Frictions [56.17593192325438]
Over-consumption of language model outputs risks propagating unchecked errors in the short term and damaging human capabilities for critical thinking in the long term.
We propose selective frictions for language model experiences, inspired by behavioral science interventions, to dampen misuse.
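One way to picture a "selective friction" is a small gate that withholds a model answer in high-stakes contexts until the user has made their own attempt. The sketch below is a hypothetical illustration of that pattern with a keyword stub for risk classification; the interventions the paper studies are richer and grounded in behavioural science.

```python
# Hypothetical sketch of a "selective friction" applied to LLM answers.

HIGH_STAKES_KEYWORDS = {"medical", "legal", "homework", "exam"}

def is_high_stakes(query: str) -> bool:
    return any(word in query.lower() for word in HIGH_STAKES_KEYWORDS)

def answer_with_friction(query: str, llm_answer: str, user_attempt: str = "") -> str:
    if not is_high_stakes(query):
        return llm_answer  # no friction for low-stakes queries
    if len(user_attempt.split()) < 5:
        return "Friction: please write a short attempt of your own first."
    return f"Your attempt: {user_attempt}\nModel answer: {llm_answer}"

print(answer_with_friction("Help with my exam question on osmosis",
                           "Osmosis is the diffusion of water ..."))
print(answer_with_friction("Help with my exam question on osmosis",
                           "Osmosis is the diffusion of water ...",
                           user_attempt="I think water moves to the saltier side"))
```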
arXiv Detail & Related papers (2024-06-24T16:31:11Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
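The core loop described here, in which one model proposes instructional materials, a judge model scores them, and the best-scoring variant is kept, can be sketched as follows. The `generate_worksheet` and `judge_score` functions are placeholders for real LM calls, not the paper's prompts or reward details.

```python
import random

# Sketch of LM-as-judge instruction optimisation: a generator proposes candidate
# worksheets, a judge assigns each a predicted learning-gain score, and the
# highest-scoring candidate is kept. Both model calls are stubbed out.

def generate_worksheet(topic: str, seed: int) -> str:
    random.seed(seed)
    style = random.choice(["step-by-step", "worked-example", "quiz-first"])
    return f"[{style}] worksheet on {topic}"

def judge_score(worksheet: str) -> float:
    # Placeholder reward: a real system would prompt a judge LM to rate how
    # much a student would learn from this worksheet.
    return random.uniform(0.0, 1.0)

def optimise(topic: str, n_candidates: int = 8) -> tuple[str, float]:
    candidates = [generate_worksheet(topic, seed) for seed in range(n_candidates)]
    scored = [(judge_score(w), w) for w in candidates]
    best_score, best = max(scored)
    return best, best_score

if __name__ == "__main__":
    worksheet, score = optimise("fractions")
    print(f"best worksheet: {worksheet} (judge score {score:.2f})")
```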
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
- Bringing Generative AI to Adaptive Learning in Education [58.690250000579496]
We shed light on the intersectional studies of generative AI and adaptive learning.
We argue that this union will contribute significantly to the development of the next-stage learning format in education.
arXiv Detail & Related papers (2024-02-02T23:54:51Z)
- SINC: Self-Supervised In-Context Learning for Vision-Language Tasks [64.44336003123102]
We propose a framework to enable in-context learning in large language models.
A meta-model can learn on self-supervised prompts consisting of tailored demonstrations.
Experiments show that SINC outperforms gradient-based methods in various vision-language tasks.
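Very loosely, the ingredient being described is a prompt built from self-constructed demonstrations rather than human-labelled ones. The toy sketch below builds such demonstrations from unlabelled captions by masking a word and asking a model to restore it; this is an assumed stand-in for the paper's actual self-supervised tasks, and SINC's meta-model and vision-language setup are considerably more involved.

```python
import random

# Toy sketch of self-supervised in-context demonstrations: turn unlabelled
# captions into (input, target) pairs by masking one word, then pack several
# pairs into a prompt for a downstream model.

def make_demo(caption: str, rng: random.Random) -> tuple[str, str]:
    words = caption.split()
    idx = rng.randrange(len(words))
    target = words[idx]
    words[idx] = "[MASK]"
    return " ".join(words), target

def build_prompt(captions: list[str], query: str, seed: int = 0) -> str:
    rng = random.Random(seed)
    demos = [make_demo(c, rng) for c in captions]
    lines = [f"Input: {x}\nOutput: {y}" for x, y in demos]
    lines.append(f"Input: {query}\nOutput:")
    return "\n\n".join(lines)

captions = ["a dog catching a red frisbee", "two children reading a book"]
print(build_prompt(captions, "a cat sleeping on a [MASK] chair"))
```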
arXiv Detail & Related papers (2023-07-15T08:33:08Z)
- Opportunities and Challenges in Neural Dialog Tutoring [54.07241332881601]
We rigorously analyze various generative language models on two dialog tutoring datasets for language learning.
We find that although current approaches can model tutoring in constrained learning scenarios, they perform poorly in less constrained scenarios.
Our human quality evaluation shows that both models and ground-truth annotations exhibit low performance in terms of equitable tutoring.
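A bare-bones version of this kind of comparison scores generated tutor turns against ground-truth tutor turns with a simple token-overlap metric, as in the sketch below; the metric and example data are placeholders, and the paper's automatic and human analyses are far more thorough.

```python
from collections import Counter

# Minimal sketch: score generated tutor responses against reference tutor
# responses with token-level F1.

def token_f1(prediction: str, reference: str) -> float:
    pred = Counter(prediction.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((pred & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

pairs = [
    ("Try sounding out the first syllable.", "Can you sound out the first part?"),
    ("The answer is B.", "What do you think happens if we add the two numbers?"),
]
scores = [token_f1(pred, ref) for pred, ref in pairs]
print(f"mean token F1: {sum(scores) / len(scores):.2f}")
```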
arXiv Detail & Related papers (2023-01-24T11:00:17Z)
- Value Cards: An Educational Toolkit for Teaching Social Impacts of Machine Learning through Deliberation [32.74513588794863]
Value Cards is an educational toolkit to inform students and practitioners of the social impacts of different machine learning models via deliberation.
Our results suggest that the use of the Value Cards toolkit can improve students' understanding of both the technical definitions and trade-offs of performance metrics.
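The trade-offs the toolkit asks students to deliberate over can be made concrete with a small calculation, for instance comparing accuracy against false-positive rate on toy predictions (the numbers below are illustrative only, not from the paper):

```python
# Illustrative sketch of the metric trade-offs behind Value Cards-style
# deliberation: two toy models on the same data can differ in accuracy versus
# false-positive rate, so "better" depends on which value a group prioritises.

def metrics(y_true: list[int], y_pred: list[int]) -> dict[str, float]:
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "false_positive_rate": fp / max(fp + tn, 1),
    }

labels  = [1, 0, 1, 0, 0, 1, 0, 0]
model_a = [1, 0, 1, 1, 0, 1, 1, 0]   # catches every positive, more false alarms
model_b = [1, 0, 0, 0, 0, 1, 0, 0]   # cautious: no false alarms, misses one positive

print("model A:", metrics(labels, model_a))
print("model B:", metrics(labels, model_b))
```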
arXiv Detail & Related papers (2020-10-22T03:27:19Z)
- Visual Interest Prediction with Attentive Multi-Task Transfer Learning [6.177155931162925]
We propose a neural network model based on transfer learning and an attention mechanism to predict visual interest and affective dimensions in digital photos.
Evaluation of our model on the benchmark dataset shows large improvement over current state-of-the-art systems.
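A hedged sketch of the general architecture pattern being described (features from a pretrained backbone, attention pooling over image regions, and multiple task heads sharing the pooled representation), written in PyTorch. The layer sizes and head definitions are assumptions for illustration, not the paper's actual model.

```python
import torch
import torch.nn as nn

# Assumed sketch: attention pooling over backbone features feeding two task
# heads (visual interest and affective dimensions) that share the representation.

class AttentiveMultiTask(nn.Module):
    def __init__(self, feat_dim: int = 512, n_affect_dims: int = 3):
        super().__init__()
        self.attn = nn.Linear(feat_dim, 1)           # attention score per region
        self.interest_head = nn.Linear(feat_dim, 1)  # visual interest score
        self.affect_head = nn.Linear(feat_dim, n_affect_dims)  # e.g. valence/arousal/dominance

    def forward(self, region_feats: torch.Tensor):
        # region_feats: (batch, n_regions, feat_dim) from a pretrained CNN backbone
        weights = torch.softmax(self.attn(region_feats), dim=1)  # (batch, n_regions, 1)
        pooled = (weights * region_feats).sum(dim=1)             # (batch, feat_dim)
        return self.interest_head(pooled), self.affect_head(pooled)

feats = torch.randn(4, 49, 512)   # e.g. a 7x7 grid of backbone features
interest, affect = AttentiveMultiTask()(feats)
print(interest.shape, affect.shape)  # torch.Size([4, 1]) torch.Size([4, 3])
```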
arXiv Detail & Related papers (2020-05-26T14:49:34Z)