Automated Computer Program Evaluation and Projects -- Our Experiences
- URL: http://arxiv.org/abs/2404.04521v1
- Date: Sat, 6 Apr 2024 06:42:58 GMT
- Title: Automated Computer Program Evaluation and Projects -- Our Experiences
- Authors: Bama Srinivasan, Mala Nehru, Ranjani Parthasarathi, Saswati Mukherjee, Jeena A Thankachan
- Abstract summary: We describe the details of how we set up the tools and customized them for computer science courses.
Based on our experiences, we also provide a few insights on using these tools for effective learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a few approaches to automating computer programming and project submission tasks that we have been following for the last six years and have found to be successful. The approaches include using CodeRunner with Learning Management System (LMS) integration for programming practice and evaluation, and Git (GitHub) for project submissions and automatic code evaluation. In this paper, we describe the details of how we set up the tools and customized them for computer science courses. Based on our experiences, we also provide a few insights on using these tools for effective learning.
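The abstract describes the CodeRunner and GitHub setups in prose rather than as configuration. As a rough, hypothetical illustration of the Git-based automatic evaluation idea, the sketch below shows a minimal Python autograder that a CI job (for instance, a GitHub Actions step) could run on each push of a student repository: it executes an instructor-provided unittest suite against the submission and writes a small JSON grade report. The repository layout, file names, and report format are assumptions made for this example, not details taken from the paper.

```python
#!/usr/bin/env python3
"""Minimal autograder sketch: run a test suite against a student's
submission and emit a machine-readable grade report.

Hypothetical repository layout (not from the paper):
  student-repo/
    solution.py        # student code
    tests/test_*.py    # instructor-provided unit tests
The script is meant to be invoked by a CI job on every push, so the
evaluation result is produced automatically.
"""
import json
import subprocess
import sys
from pathlib import Path

REPO_ROOT = Path(__file__).resolve().parent
TESTS_DIR = REPO_ROOT / "tests"          # assumed location of instructor tests
REPORT_FILE = REPO_ROOT / "grade.json"   # artifact picked up by the CI job


def run_tests() -> subprocess.CompletedProcess:
    """Run the instructor's unittest suite in a subprocess."""
    return subprocess.run(
        [sys.executable, "-m", "unittest", "discover", "-s", str(TESTS_DIR), "-v"],
        capture_output=True,
        text=True,
    )


def main() -> int:
    result = run_tests()
    passed = result.returncode == 0
    report = {
        "passed": passed,
        "return_code": result.returncode,
        # Keep the tail of the test log so students can see which cases failed.
        "log_tail": result.stderr.splitlines()[-20:],
    }
    REPORT_FILE.write_text(json.dumps(report, indent=2))
    print("PASS" if passed else "FAIL")
    # A non-zero exit marks the CI check as failed, which is the automatic evaluation signal.
    return 0 if passed else 1


if __name__ == "__main__":
    sys.exit(main())
```

CodeRunner works at a finer grain inside the LMS: each question carries instructor-written test cases, and the student's code is run against them in a sandbox on every submission.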
Related papers
- On the Opportunities of Large Language Models for Programming Process Data [6.023152721616896]
We discuss opportunities of using large language models for analyzing programming process data.
To complement our discussion, we outline a case study where we have leveraged LLMs for automatically summarizing the programming process.
arXiv Detail & Related papers (2024-11-01T07:20:01Z)
- GitSEED: A Git-backed Automated Assessment Tool for Software Engineering and Programming Education [0.0]
This paper introduces GitSEED, a language-agnostic automated assessment tool designed for Programming Education and Software Engineering (SE).
Using GitSEED, students in Computer Science (CS) and SE can master the fundamentals of git while receiving personalized feedback on their programming assignments and projects.
Our experiments assess GitSEED's efficacy via comprehensive user evaluation, examining the impact of feedback mechanisms and features on student learning outcomes.
arXiv Detail & Related papers (2024-09-11T15:50:42Z)
- Spider2-V: How Far Are Multimodal Agents From Automating Data Science and Engineering Workflows? [73.81908518992161]
We introduce Spider2-V, the first multimodal agent benchmark focusing on professional data science and engineering.
Spider2-V features real-world tasks in authentic computer environments and incorporates 20 enterprise-level professional applications.
These tasks evaluate the ability of a multimodal agent to perform data-related tasks by writing code and managing the GUI in enterprise data software systems.
arXiv Detail & Related papers (2024-07-15T17:54:37Z)
- Automatic Programming: Large Language Models and Beyond [48.34544922560503]
We study concerns around code quality, security and related issues of programmer responsibility.
We discuss how advances in software engineering can enable automatic programming.
We conclude with a forward-looking view, focusing on the programming environment of the near future.
arXiv Detail & Related papers (2024-05-03T16:19:24Z)
- Charting a Path to Efficient Onboarding: The Role of Software Visualization [49.1574468325115]
The present study aims to explore the familiarity of managers, leaders, and developers with software visualization tools.
This approach incorporated quantitative and qualitative analyses of data collected from practitioners using questionnaires and semi-structured interviews.
arXiv Detail & Related papers (2024-01-17T21:30:45Z)
- Intelligent Tutoring System: Experience of Linking Software Engineering and Programming Teaching [11.732008724228798]
Existing systems that handle automated grading primarily focus on the automation of test case executions.
We have built an intelligent tutoring system that has the capability of providing automated feedback and grading.
arXiv Detail & Related papers (2023-10-09T07:28:41Z)
- Building an Effective Automated Assessment System for C/C++ Introductory Programming Courses in ODL Environment [0.0]
Traditional ways of assessing students' work are becoming insufficient in terms of both time and effort.
In a distance education environment, such assessments become even more challenging because of the hefty remuneration involved in hiring a large number of tutors.
We identify different components that we believe are necessary in building an effective automated assessment system.
arXiv Detail & Related papers (2022-05-24T09:20:43Z)
- Criação e aplicação de ferramenta para auxiliar no ensino de algoritmos e programação de computadores (Creation and Application of a Tool to Assist in Teaching Algorithms and Computer Programming) [0.0]
This work reports the development of a teaching tool created during the monitoring (teaching assistantship) program of the Algorithms and Computer Programming course at the University of Fortaleza.
The tool combines the knowledge found in textbooks with language closer to the students, using video lessons and proposed exercises, with all of the content available on the internet.
arXiv Detail & Related papers (2022-03-31T09:48:49Z)
- Lifelong Learning Metrics [63.8376359764052]
The DARPA Lifelong Learning Machines (L2M) program seeks to yield advances in artificial intelligence (AI) systems.
This document outlines a formalism for constructing and characterizing the performance of agents performing lifelong learning scenarios.
arXiv Detail & Related papers (2022-01-20T16:29:14Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few instructor-provided examples (a toy sketch of this prototype-based idea appears after this list).
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Machine Learning for Software Engineering: A Systematic Mapping [73.30245214374027]
The software development industry is rapidly adopting machine learning for transitioning modern day software systems towards highly intelligent and self-learning systems.
No comprehensive study exists that explores the current state-of-the-art on the adoption of machine learning across software engineering life cycle stages.
This study introduces a machine learning for software engineering (MLSE) taxonomy classifying the state-of-the-art machine learning techniques according to their applicability to various software engineering life cycle stages.
arXiv Detail & Related papers (2020-05-27T11:56:56Z)
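The few-shot framing in the ProtoTransformer entry above can be pictured with a toy, prototype-style classifier: embed a handful of instructor-labeled examples per feedback class, average them into class prototypes, and give a new submission the feedback label of its nearest prototype. This is only a minimal sketch of the general idea; the embeddings, labels, and distance choice below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def build_prototypes(embeddings, labels):
    """Average the support-set embeddings of each feedback class into a prototype."""
    prototypes = {}
    for label in set(labels):
        members = np.array([e for e, l in zip(embeddings, labels) if l == label])
        prototypes[label] = members.mean(axis=0)
    return prototypes


def classify(query_embedding, prototypes):
    """Assign the feedback label of the nearest prototype (Euclidean distance)."""
    return min(prototypes, key=lambda label: np.linalg.norm(query_embedding - prototypes[label]))


# Toy usage: 2-D "embeddings" stand in for encoder outputs over student code.
support_embeddings = [np.array([0.9, 0.1]), np.array([1.1, 0.0]),   # "off-by-one error"
                      np.array([0.0, 1.0]), np.array([0.1, 0.9])]   # "missing base case"
support_labels = ["off-by-one error", "off-by-one error",
                  "missing base case", "missing base case"]

prototypes = build_prototypes(support_embeddings, support_labels)
print(classify(np.array([0.05, 0.95]), prototypes))  # -> "missing base case"
```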
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.