INSIGHT: Bridging the Student-Teacher Gap in Times of Large Language Models
- URL: http://arxiv.org/abs/2504.17677v1
- Date: Thu, 24 Apr 2025 15:47:20 GMT
- Authors: Jarne Thys, Sebe Vanbrabant, Davy Vanacken, Gustavo Rovelo Ruiz
- Abstract summary: INSIGHT is a proof of concept that combines various AI tools to assist teaching staff and students in solving exercises. Students' questions to an LLM are analyzed by extracting keywords, which are used to dynamically build an FAQ.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rise of AI, especially Large Language Models, presents challenges and opportunities to integrate such technology into the classroom. AI has the potential to revolutionize education by helping teaching staff with various tasks, such as personalizing their teaching methods, but it also raises concerns, for example, about the degradation of student-teacher interactions and user privacy. This paper introduces INSIGHT, a proof of concept to combine various AI tools to assist teaching staff and students in the process of solving exercises. INSIGHT has a modular design that allows it to be integrated into various higher education courses. We analyze students' questions to an LLM by extracting keywords, which we use to dynamically build an FAQ from students' questions and provide new insights for the teaching staff to use for more personalized face-to-face support. Future work could build upon INSIGHT by using the collected data to provide adaptive learning and adjust content based on student progress and learning styles to offer a more interactive and inclusive learning experience.
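The abstract describes extracting keywords from students' LLM questions and grouping them into a dynamically built FAQ. The paper does not publish its pipeline, so the following is only a minimal sketch of that idea under stated assumptions: a naive stopword-based keyword extractor and a frequency threshold (`min_count`) for promoting a keyword to an FAQ topic are both illustrative choices, not the authors' method.

```python
from collections import Counter, defaultdict

# Illustrative stopword list; a real system would use a proper NLP toolkit.
STOPWORDS = {"how", "do", "i", "the", "a", "an", "in", "to", "of", "is",
             "what", "my", "why", "does", "can", "for", "on"}

def extract_keywords(question: str) -> set[str]:
    """Naive keyword extraction: lowercase, strip punctuation, drop stopwords."""
    tokens = [w.strip("?.,!").lower() for w in question.split()]
    return {w for w in tokens if w and w not in STOPWORDS}

def build_faq(questions: list[str], min_count: int = 2) -> dict[str, list[str]]:
    """Group questions under any keyword that occurs in at least min_count questions."""
    keyword_counts = Counter()
    per_question = []
    for q in questions:
        kws = extract_keywords(q)
        per_question.append((q, kws))
        keyword_counts.update(kws)
    faq = defaultdict(list)
    for q, kws in per_question:
        for kw in kws:
            if keyword_counts[kw] >= min_count:
                faq[kw].append(q)
    return dict(faq)

questions = [
    "How do I fix a segmentation fault in my pointer code?",
    "Why does my pointer dereference crash?",
    "What is a segmentation fault?",
]
faq = build_faq(questions)
# Recurring keywords ("segmentation", "fault", "pointer") become FAQ topics
# that teaching staff can review for personalized face-to-face support.
```

The threshold keeps one-off questions out of the FAQ while surfacing themes that several students struggle with, which matches the abstract's goal of giving teaching staff aggregate insight rather than raw chat logs.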
Related papers
- Inclusive Education with AI: Supporting Special Needs and Tackling Language Barriers [0.0]
AI offers innovative tools to help educators create more inclusive learning environments. The paper discusses AI-driven language assistance tools that enable real-time translation and communication in multilingual classrooms, and explores AI-powered assistive technologies that personalize learning for students with disabilities.
arXiv Detail & Related papers (2025-04-19T00:41:58Z)
- Embracing AI in Education: Understanding the Surge in Large Language Model Use by Secondary Students [53.20318273452059]
Large language models (LLMs) like OpenAI's ChatGPT have opened up new avenues in education.
Despite school restrictions, our survey of over 300 middle and high school students revealed that a remarkable 70% of students have utilized LLMs.
We propose a few ideas to address such issues, including subject-specific models, personalized learning, and AI classrooms.
arXiv Detail & Related papers (2024-11-27T19:19:34Z)
- How Do Students Interact with an LLM-powered Virtual Teaching Assistant in Different Educational Settings? [3.9134031118910264]
Jill Watson, a virtual teaching assistant powered by LLMs, answers student questions and engages them in extended conversations on courseware provided by the instructors.
In this paper, we analyze student interactions with Jill across multiple courses and colleges.
We find that, by supporting a wide range of cognitive demands, Jill encourages students to engage in sophisticated, higher-order cognitive questions.
arXiv Detail & Related papers (2024-07-15T01:22:50Z)
- Personality-aware Student Simulation for Conversational Intelligent Tutoring Systems [34.760230622675365]
Intelligent Tutoring Systems (ITSs) can provide a personalized and self-paced learning experience.
The emergence of large language models (LLMs) further enables better human-machine interaction.
LLMs can produce diverse student responses according to the given language ability and personality traits.
arXiv Detail & Related papers (2024-04-10T06:03:13Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves on SFT with a significant performance gain.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Adapting Large Language Models for Education: Foundational Capabilities, Potentials, and Challenges [60.62904929065257]
Large language models (LLMs) offer the possibility of resolving this issue by comprehending individual requests.
This paper reviews the recently emerged LLM research related to educational capabilities, including mathematics, writing, programming, reasoning, and knowledge-based question answering.
arXiv Detail & Related papers (2023-12-27T14:37:32Z)
- Understanding Teacher Perspectives and Experiences after Deployment of AI Literacy Curriculum in Middle-school Classrooms [12.35885897302579]
We investigate the experiences of seven teachers following their implementation of modules from the MIT RAICA curriculum.
Our analysis suggests that the AI modules expanded our teachers' knowledge in the field.
Our teachers advocated for better external support when navigating technological resources.
arXiv Detail & Related papers (2023-12-08T05:36:16Z) - UKP-SQuARE: An Interactive Tool for Teaching Question Answering [61.93372227117229]
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course.
We introduce UKP-SQuARE as a platform for QA education.
Students can run, compare, and analyze various QA models from different perspectives.
arXiv Detail & Related papers (2023-05-31T11:29:04Z)
- Reinforcement Learning Tutor Better Supported Lower Performers in a Math Task [32.6507926764587]
Reinforcement learning could be a key tool to reduce the development cost and improve the effectiveness of intelligent tutoring software.
We show that deep reinforcement learning can be used to provide adaptive pedagogical support to students learning about the concept of volume.
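The cited paper trains a deep reinforcement learning policy on real student data; as a much-simplified illustration of the underlying idea (a policy learning which form of pedagogical support works best), here is an epsilon-greedy bandit sketch. The action names, the simulated student, and the success rates are all illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)

# Assumed toy environment: a simulated student who benefits more from
# "guided-hint" support than from being shown the solution outright.
SUPPORT_ACTIONS = ["show-solution", "guided-hint"]
TRUE_SUCCESS_RATE = {"show-solution": 0.3, "guided-hint": 0.7}

def simulated_outcome(action: str) -> float:
    """Return 1.0 if the simulated student answers correctly, else 0.0."""
    return 1.0 if random.random() < TRUE_SUCCESS_RATE[action] else 0.0

def train_bandit(steps: int = 2000, epsilon: float = 0.1) -> dict[str, float]:
    """Epsilon-greedy: mostly exploit the best-known action, sometimes explore."""
    value = {a: 0.0 for a in SUPPORT_ACTIONS}
    count = {a: 0 for a in SUPPORT_ACTIONS}
    for _ in range(steps):
        if random.random() < epsilon:
            action = random.choice(SUPPORT_ACTIONS)  # explore
        else:
            action = max(SUPPORT_ACTIONS, key=value.__getitem__)  # exploit
        reward = simulated_outcome(action)
        count[action] += 1
        # Incremental average of observed rewards per action.
        value[action] += (reward - value[action]) / count[action]
    return value

values = train_bandit()
best = max(values, key=values.get)
# After training, the policy prefers the support action with the higher
# observed success rate for this simulated student.
```

A contextual or deep RL method, as in the paper, additionally conditions the choice of support on the student's state, which is what allows it to help lower performers specifically.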
arXiv Detail & Related papers (2023-04-11T02:11:24Z)
- Opportunities and Challenges in Neural Dialog Tutoring [54.07241332881601]
We rigorously analyze various generative language models on two dialog tutoring datasets for language learning.
We find that although current approaches can model tutoring in constrained learning scenarios, they perform poorly in less constrained scenarios.
Our human quality evaluation shows that both models and ground-truth annotations exhibit low performance in terms of equitable tutoring.
arXiv Detail & Related papers (2023-01-24T11:00:17Z)
- Neural Multi-Task Learning for Teacher Question Detection in Online Classrooms [50.19997675066203]
We build an end-to-end neural framework that automatically detects questions from teachers' audio recordings.
By incorporating multi-task learning techniques, we are able to strengthen the understanding of semantic relations among different types of questions.
arXiv Detail & Related papers (2020-05-16T02:17:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.