Monitor++?: Multiple versus Single Laboratory Monitors in Early
Programming Education
- URL: http://arxiv.org/abs/2108.07729v1
- Date: Fri, 13 Aug 2021 14:56:04 GMT
- Title: Monitor++?: Multiple versus Single Laboratory Monitors in Early
Programming Education
- Authors: Matthew Stephan
- Abstract summary: This paper presents an empirical study of an introductory-level programming course with students using multiple monitors.
It compares their performance and self-reported experiences with those of students using a single monitor.
- Score: 4.797216015572358
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: CONTRIBUTION: This paper presents an empirical study of an
introductory-level programming course in which students used multiple
monitors, and compares their performance and self-reported experiences with
those of students using a single monitor. BACKGROUND: Professional-level
programming in many technological fields often employs multiple-monitor
stations; however, some education laboratories employ single-monitor
stations. This is unrepresentative of what students will encounter in
practice and experiential learning. RESEARCH QUESTIONS: This study aims to
answer three research questions, covering the experiential observations of
the students, the performance of students using one monitor versus those
using two monitors, and the ways in which students employed the multiple
monitors. METHODOLOGY: Half of the students in the study had access to
multiple monitors; this was the only difference between the two study
groups. The study contrasts grade medians and conducts a median-test
evaluation. Additionally, an experience survey gathered Likert-scale
ratings, and open-ended feedback questions enabled textual analysis.
Limitations of the study include the small sample size (86 students) and
the lack of control over participant composition. FINDINGS: Students rated
their experience with the intervention very favorably. Overall, the
multiple-monitor group showed a slight performance improvement, with most
of the improvement in the software-design and graphics assignments. The
performance increase was statistically significant on the
interfaces-and-hierarchies labs. Students used the multiple monitors in
different ways, including displaying reference guides, assignment
specifications, and more.
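The abstract names the statistical analysis (contrasting grade medians via a median test) without showing it. As a rough illustration only, the sketch below runs Mood's median test on two hypothetical grade samples using SciPy; the grade values, group sizes, and the use of scipy.stats.median_test are assumptions for illustration, not the study's actual data or analysis code.

```python
# Minimal, hypothetical sketch of a median-test comparison like the one the
# METHODOLOGY section describes (Mood's median test on grade data).
# The grades below are made up and do not come from the study.
import numpy as np
from scipy.stats import median_test

# Hypothetical lab grades (0-100) for the two groups.
single_monitor = np.array([72, 85, 90, 64, 78, 88, 70, 95, 81, 76])
multi_monitor = np.array([80, 88, 92, 70, 84, 91, 75, 97, 86, 82])

# Mood's median test: do the two samples share a common population median?
stat, p_value, grand_median, contingency = median_test(single_monitor, multi_monitor)

print(f"grand median: {grand_median}")
print(f"chi-square statistic: {stat:.3f}, p-value: {p_value:.3f}")
if p_value < 0.05:
    print("Median difference is statistically significant at alpha = 0.05")
else:
    print("No statistically significant difference in medians")
```

A median test makes no normality assumption about the grade distributions, which is a common reason to compare medians rather than means for course grades; Likert-scale survey ratings could be compared the same way.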
Related papers
- Further Evidence on a Controversial Topic about Human-Based Experiments: Professionals vs. Students [3.358019319437577]
We compare 62 students and 42 software professionals on a bug-fixing task on the same Java program.
Considering the differences between the two groups of participants, the gathered data show that the students outperformed the professionals in fixing bugs.
arXiv Detail & Related papers (2025-06-13T09:05:36Z)
- WIP: Exploring the Value of a Debugging Cheat Sheet and Mini Lecture in Improving Undergraduate Debugging Skills and Mindset [0.0]
This work-in-progress research paper explores the efficacy of a small-scale microelectronics debug education intervention utilizing a quasi-experimental design.
Students in the experimental group were faster by an average of 1:43 and had a 7 percent higher success rate than the control group.
arXiv Detail & Related papers (2025-06-12T22:19:50Z)
- CLGT: A Graph Transformer for Student Performance Prediction in Collaborative Learning [6.140954034246379]
We present an extended graph transformer framework for collaborative learning (CLGT) for evaluating and predicting the performance of students.
The experimental results confirm that the proposed CLGT outperforms the baseline models in prediction performance on real-world datasets.
arXiv Detail & Related papers (2023-07-30T09:54:30Z)
- AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn [25.510696745075688]
We propose a multi-modal AI assistant, AssistGPT, with an interleaved code and language reasoning approach called Plan, Execute, Inspect, and Learn.
The Planner uses natural language to plan which tool in the Executor should be invoked next, based on the current reasoning progress.
We conducted experiments on A-OKVQA and NExT-QA benchmarks, achieving state-of-the-art results.
arXiv Detail & Related papers (2023-06-14T17:12:56Z)
- Using EEG Signals to Assess Workload during Memory Retrieval in a Real-world Scenario [3.9763527484363292]
This study investigated the associations between memory workload and EEG during participants' typical office tasks.
We used EEG band power, mutual information, and coherence as features to train machine learning models to classify high versus low memory workload states.
arXiv Detail & Related papers (2023-05-14T02:01:54Z)
- Unified Demonstration Retriever for In-Context Learning [56.06473069923567]
Unified Demonstration Retriever (UDR) is a single model to retrieve demonstrations for a wide range of tasks.
We propose a multi-task list-wise ranking training framework, with an iterative mining strategy to find high-quality candidates.
Experiments on 30+ tasks across 13 task families and multiple data domains show that UDR significantly outperforms baselines.
arXiv Detail & Related papers (2023-05-07T16:07:11Z)
- Dynamic Contrastive Distillation for Image-Text Retrieval [90.05345397400144]
We present a novel plug-in dynamic contrastive distillation (DCD) framework to compress image-text retrieval models.
We successfully apply our proposed DCD strategy to two state-of-the-art vision-language pretrained models, i.e. ViLT and METER.
Experiments on MS-COCO and Flickr30K benchmarks show the effectiveness and efficiency of our DCD framework.
arXiv Detail & Related papers (2022-07-04T14:08:59Z) - Benchmarking Safety Monitors for Image Classifiers with Machine Learning [0.0]
Highly accurate machine learning (ML) image classifiers cannot guarantee that they will not fail during operation.
The use of fault tolerance mechanisms such as safety monitors is a promising direction to keep the system in a safe state.
This paper aims at establishing a baseline framework for benchmarking monitors for ML image classifiers.
arXiv Detail & Related papers (2021-10-04T07:52:23Z)
- StudyMe: A New Mobile App for User-Centric N-of-1 Trials [68.8204255655161]
N-of-1 trials are multi-crossover self-experiments that allow individuals to systematically evaluate the effect of interventions on their personal health goals.
We present StudyMe, an open-source mobile application that is freely available from https://play.google.com/store/apps/details?id=health.studyu.me.
arXiv Detail & Related papers (2021-07-31T20:43:36Z)
- Do Different Tracking Tasks Require Different Appearance Models? [118.02175542476367]
We present UniTrack, a unified tracking solution to address five different tasks within the same framework.
UniTrack consists of a single and task-agnostic appearance model, which can be learned in a supervised or self-supervised fashion.
We show how most tracking tasks can be solved within this framework, and that the same appearance model can be used to obtain performance that is competitive against specialised methods for all the five tasks considered.
arXiv Detail & Related papers (2021-07-05T17:40:17Z)
- Learning to Track Instances without Video Annotations [85.9865889886669]
We introduce a novel semi-supervised framework by learning instance tracking networks with only a labeled image dataset and unlabeled video sequences.
We show that even when only trained with images, the learned feature representation is robust to instance appearance variations.
In addition, we integrate this module into single-stage instance segmentation and pose estimation frameworks.
arXiv Detail & Related papers (2021-04-01T06:47:41Z)
- Multi-modal Visual Tracking: Review and Experimental Comparison [85.20414397784937]
We summarize the multi-modal tracking algorithms, especially visible-depth (RGB-D) tracking and visible-thermal (RGB-T) tracking.
We conduct experiments to analyze the effectiveness of trackers on five datasets.
arXiv Detail & Related papers (2020-12-08T02:39:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.