MOON: Assisting Students in Completing Educational Notebook Scenarios
- URL: http://arxiv.org/abs/2309.16201v1
- Date: Thu, 28 Sep 2023 06:49:30 GMT
- Title: MOON: Assisting Students in Completing Educational Notebook Scenarios
- Authors: Christophe Casseau (LaBRI), Jean-Rémy Falleri (LaBRI, IUF), Thomas Degueule (LaBRI), Xavier Blanc (LaBRI)
- Abstract summary: Notebooks come with many attractive features, such as the ability to combine textual explanations, multimedia content, and executable code.
However, the notebooks' flexible execution model can quickly become an issue when students do not follow the teacher's intended execution order.
We present a novel approach, MOON, designed to remedy this problem.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Jupyter notebooks are increasingly being adopted by teachers to deliver
interactive practical sessions to their students. Notebooks come with many
attractive features, such as the ability to combine textual explanations,
multimedia content, and executable code alongside a flexible execution model
which encourages experimentation and exploration. However, this execution model
can quickly become an issue when students do not follow the teacher's intended
execution order, leading to errors or misleading results that hinder their
learning. To counter this adverse effect, teachers usually write detailed
instructions about how students are expected to use the notebooks. Yet, the use
of digital media is known to decrease reading efficiency and compliance with
written instructions, resulting in frequent notebook misuse and students
getting lost during practical sessions. In this article, we present a novel
approach, MOON, designed to remedy this problem. The central idea is to provide
teachers with a language that enables them to formalize the expected usage of
their notebooks in the form of a script and to interpret this script to guide
students with visual indications in real time while they interact with the
notebooks. We evaluate our approach using a randomized controlled experiment
involving 21 students, which shows that MOON helps students comply better with
the intended scenario without hindering their ability to progress. Our
follow-up user study shows that about 75% of the surveyed students perceived
MOON as rather useful or very useful.
Related papers
- Representational Alignment Supports Effective Machine Teaching [81.19197059407121]
We integrate insights from machine teaching and pragmatic communication with the literature on representational alignment.
We design a supervised learning environment that disentangles representational alignment from teacher accuracy.
arXiv Detail & Related papers (2024-06-06T17:48:24Z)
- Toward In-Context Teaching: Adapting Examples to Students' Misconceptions [54.82965010592045]
We introduce a suite of models and evaluation methods we call AdapT.
AToM is a new probabilistic model for adaptive teaching that jointly infers students' past beliefs and optimizes for the correctness of future beliefs.
Our results highlight both the difficulty of the adaptive teaching task and the potential of learned adaptive models for solving it.
arXiv Detail & Related papers (2024-05-07T17:05:27Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA yields a significant performance gain over standard SFT.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Measuring Five Accountable Talk Moves to Improve Instruction at Scale [1.4549461207028445]
We fine-tune models to identify five instructional talk moves inspired by accountable talk theory.
We correlate the instructors' use of each talk move with indicators of student engagement and satisfaction.
These results corroborate previous research on the effectiveness of accountable talk moves.
arXiv Detail & Related papers (2023-11-02T03:04:50Z)
- CodeHelp: Using Large Language Models with Guardrails for Scalable Support in Programming Classes [2.5949084781328744]
Large language models (LLMs) have emerged recently and show great promise for providing on-demand help at a large scale.
We introduce CodeHelp, a novel LLM-powered tool designed with guardrails to provide on-demand assistance to programming students without directly revealing solutions.
Our findings suggest that CodeHelp is well-received by students who especially value its availability and help with resolving errors, and that for instructors it is easy to deploy and complements, rather than replaces, the support that they provide to students.
arXiv Detail & Related papers (2023-08-14T03:52:24Z)
- Large Language Model-based System to Provide Immediate Feedback to Students in Flipped Classroom Preparation Learning [0.0]
This study aimed to solve challenges in the flipped classroom model, such as ensuring that students are emotionally engaged and motivated to learn.
Students often have questions about the content of lecture videos while preparing for flipped classrooms, but it is difficult for teachers to answer them immediately.
The proposed system was developed using the ChatGPT API on a video-watching support system for preparation learning that is being used in real practice.
arXiv Detail & Related papers (2023-07-21T06:59:53Z)
- Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization [84.86241161706911]
We show that teacher LLMs can indeed intervene on student reasoning to improve their performance.
We also demonstrate that in multi-turn interactions, teacher explanations generalize and students learn from explained data.
We verify that misaligned teachers can lower student performance to random chance by intentionally misleading them.
arXiv Detail & Related papers (2023-06-15T17:27:20Z)
- MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems [74.73881579517055]
We propose a framework to generate such dialogues by pairing human teachers with a Large Language Model prompted to represent common student errors.
We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues.
arXiv Detail & Related papers (2023-05-23T21:44:56Z)
- A literature survey on student feedback assessment tools and their usage in sentiment analysis [0.0]
We evaluate the effectiveness of various in-class feedback assessment methods such as Kahoot!, Mentimeter, Padlet, and polling.
We propose a sentiment analysis model for extracting the explicit suggestions from the students' qualitative feedback comments.
arXiv Detail & Related papers (2021-09-09T06:56:30Z)
- Annotation Curricula to Implicitly Train Non-Expert Annotators [56.67768938052715]
Voluntary studies often require annotators to familiarize themselves with the task, its annotation scheme, and the data domain.
This can be overwhelming at first, mentally taxing, and can induce errors into the resulting annotations.
We propose annotation curricula, a novel approach to implicitly train annotators.
arXiv Detail & Related papers (2021-06-04T09:48:28Z)
- The Wits Intelligent Teaching System: Detecting Student Engagement During Lectures Using Convolutional Neural Networks [0.30458514384586394]
The Wits Intelligent Teaching System (WITS) aims to assist lecturers with real-time feedback regarding student affect.
A CNN based on AlexNet is successfully trained and significantly outperforms a Support Vector Machine approach.
arXiv Detail & Related papers (2021-05-28T12:59:37Z)