ICLF: An Immersive Code Learning Framework based on Git for Teaching and Evaluating Student Programming Projects
- URL: http://arxiv.org/abs/2601.14814v1
- Date: Wed, 21 Jan 2026 09:39:17 GMT
- Title: ICLF: An Immersive Code Learning Framework based on Git for Teaching and Evaluating Student Programming Projects
- Authors: Pierre Schaus, Guillaume Derval, Augustin Delecluse
- Abstract summary: The Immersive Code Learning Framework (ICLF) is a scalable Git-based organizational pipeline for managing and evaluating student programming projects. Students begin with an existing code base, a practice that is crucial for mirroring real-world software development. Students are invited collaborators on private forks of this intermediate repository, possibly updated throughout the semester whenever the teacher changes the parent repository.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Programming projects are essential in computer science education for bridging theory with practice and introducing students to tools like Git, IDEs, and debuggers. However, designing and evaluating these projects (especially in MOOCs) can be challenging. We propose the Immersive Code Learning Framework (ICLF), a scalable Git-based organizational pipeline for managing and evaluating student programming projects. Students begin with an existing code base, a practice that is crucial for mirroring real-world software development. Students then iteratively complete tasks that pass predefined tests. The instructor only manages a hidden parent repository containing solutions, which is used to generate an intermediate public repository with these solutions removed via a templating system. Students are invited collaborators on private forks of this intermediate repository, possibly updated throughout the semester whenever the teacher changes the parent repository. This approach reduces grading platform dependency, supports automated feedback, and allows the project to evolve without disrupting student work. Successfully tested over several years, including in an edX MOOC, this organizational pipeline provides transparent evaluation, plagiarism detection, and continuous progress tracking for each student.
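The solution-stripping step described in the abstract can be sketched as a minimal templating pass. This is a hypothetical illustration only: the abstract does not specify ICLF's actual templating system, and the `BEGIN SOLUTION`/`END SOLUTION` marker names below are assumptions for the sake of the example. Code between markers in the hidden parent repository is replaced with a placeholder before the public intermediate repository is generated:

```python
# Hypothetical solution-stripping pass for an ICLF-style pipeline.
# Marker strings are illustrative; the real templating syntax may differ.
BEGIN = "# BEGIN SOLUTION"
END = "# END SOLUTION"


def strip_solutions(source: str) -> str:
    """Remove solution blocks, leaving a TODO placeholder for students."""
    out, hiding = [], False
    for line in source.splitlines():
        if BEGIN in line:
            # Start of a solution block: emit a placeholder instead.
            hiding = True
            out.append(line.replace(BEGIN, "# TODO: implement this part"))
        elif END in line:
            # End of the block: resume copying, drop the marker itself.
            hiding = False
        elif not hiding:
            out.append(line)
    return "\n".join(out)
```

Run over every tracked file of the hidden parent repository, a pass like this would yield the public intermediate repository that students fork; re-running it after the instructor edits the parent repository supports the mid-semester updates the abstract mentions.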
Related papers
- ABC-Bench: Benchmarking Agentic Backend Coding in Real-World Development [72.4729759618632]
We introduce ABC-Bench, a benchmark to evaluate agentic backend coding within a realistic, executable workflow. We curated 224 practical tasks spanning 8 languages and 19 frameworks from open-source repositories. Our evaluation reveals that even state-of-the-art models struggle to deliver reliable performance on these holistic tasks.
arXiv Detail & Related papers (2026-01-16T08:23:52Z)
- SwingArena: Competitive Programming Arena for Long-context GitHub Issue Solving [90.32201622392137]
We present SwingArena, a competitive evaluation framework for Large Language Models (LLMs). Unlike traditional static benchmarks, SwingArena models the collaborative process of software development by pairing LLMs as submitters, who generate patches, and reviewers, who create test cases and verify the patches through continuous integration (CI) pipelines.
arXiv Detail & Related papers (2025-05-29T18:28:02Z)
- LLM Contribution Summarization in Software Projects [0.0]
This paper addresses the need for an automated and objective approach to evaluate individual contributions within team projects. We present a tool that leverages a large language model (LLM) to automatically summarize code contributions extracted from version control repositories. The tool was assessed over two semesters during a three-week, full-time software development sprint involving 65 students.
arXiv Detail & Related papers (2025-05-23T10:26:43Z)
- Paper2Code: Automating Code Generation from Scientific Papers in Machine Learning [70.04746094652653]
We introduce PaperCoder, a framework that transforms machine learning papers into functional code repositories. PaperCoder operates in three stages; in the planning stage, it designs the system architecture with diagrams, identifies file dependencies, and generates configuration files. We then evaluate PaperCoder on generating code implementations from machine learning papers based on both model-based and human evaluations.
arXiv Detail & Related papers (2025-04-24T01:57:01Z) - Innovating the software engineering class through multi-team development [0.0]
This paper presents a new approach to teaching undergraduate software engineering. The students are grouped into multiple software teams, each focusing on a different aspect of the app. Using an Agile development approach, the teams incrementally add to the code base and demonstrate features as the application evolves.
arXiv Detail & Related papers (2025-02-04T18:54:43Z) - RepoGraph: Enhancing AI Software Engineering with Repository-level Code Graph [63.87660059104077]
We present RepoGraph, a plug-in module that manages a repository-level structure for modern AI software engineering solutions. RepoGraph substantially boosts the performance of all systems, leading to a new state-of-the-art among open-source frameworks.
arXiv Detail & Related papers (2024-10-03T05:45:26Z) - GitSEED: A Git-backed Automated Assessment Tool for Software Engineering and Programming Education [0.0]
This paper introduces GitSEED, a language-agnostic automated assessment tool designed for Programming Education and Software Engineering (SE).
Using GitSEED, students in Computer Science (CS) and SE can master the fundamentals of git while receiving personalized feedback on their programming assignments and projects.
Our experiments assess GitSEED's efficacy via comprehensive user evaluation, examining the impact of feedback mechanisms and features on student learning outcomes.
arXiv Detail & Related papers (2024-09-11T15:50:42Z) - WIP: A Unit Testing Framework for Self-Guided Personalized Online Robotics Learning [3.613641107321095]
This paper focuses on creating a system for unit testing while integrating it into the course workflow.
In line with the framework's personalized, student-centered approach, this method makes it easier for students to revise and debug their programming work. The course workflow, updated to include unit tests, will strengthen the learning environment and make it more interactive, so that students can learn how to program robots in a self-guided fashion.
arXiv Detail & Related papers (2024-05-18T00:56:46Z) - Prompting Large Language Models to Tackle the Full Software Development Lifecycle: A Case Study [72.24266814625685]
We explore the performance of large language models (LLMs) across the entire software development lifecycle with DevEval. DevEval features four programming languages, multiple domains, high-quality data collection, and carefully designed and verified metrics for each task. Empirical studies show that current LLMs, including GPT-4, fail to solve the challenges presented within DevEval.
arXiv Detail & Related papers (2024-03-13T15:13:44Z) - SWE-bench: Can Language Models Resolve Real-World GitHub Issues? [80.52201658231895]
SWE-bench is an evaluation framework consisting of 2,294 software engineering problems drawn from real GitHub issues and corresponding pull requests across 12 popular Python repositories.
We show that both state-of-the-art proprietary models and our fine-tuned model SWE-Llama can resolve only the simplest issues.
arXiv Detail & Related papers (2023-10-10T16:47:29Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few examples provided by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z) - Student Teamwork on Programming Projects: What can GitHub logs show us? [3.764846583322767]
We collected GitHub logs from two programming projects in two offerings of a CS2 Java programming course for computer science majors.
Students worked in pairs for both projects (one optional, the other mandatory) in each year.
We can identify the students' teamwork style automatically from their submission logs.
arXiv Detail & Related papers (2020-08-25T20:41:52Z)