Fuzzy Intelligent System for Student Software Project Evaluation
- URL: http://arxiv.org/abs/2405.00453v1
- Date: Wed, 1 May 2024 11:12:22 GMT
- Title: Fuzzy Intelligent System for Student Software Project Evaluation
- Authors: Anna Ogorodova, Pakizar Shamoi, Aron Karatayev
- Abstract summary: This paper introduces a fuzzy intelligent system designed to evaluate academic software projects.
The system processes the input criteria to produce a quantifiable measure of project success.
Our approach standardizes project evaluations and helps to reduce the subjective bias in manual grading.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Developing software projects allows students to put knowledge into practice and gain teamwork skills. However, assessing student performance in project-oriented courses poses significant challenges, particularly as the size of classes increases. The current paper introduces a fuzzy intelligent system designed to evaluate academic software projects, using an object-oriented programming and design course as an example. To establish evaluation criteria, we first conducted a survey of student project teams (n=31) and faculty (n=3) to identify key parameters and their applicable ranges. The criteria - clean code, use of inheritance, and functionality - were selected as essential for assessing the quality of academic software projects. These criteria were then represented as fuzzy variables with corresponding fuzzy sets. Collaborating with three experts, including one professor and two course instructors, we defined a set of fuzzy rules for a fuzzy inference system. This system processes the input criteria to produce a quantifiable measure of project success. The system demonstrated promising results in automating the evaluation of projects. Our approach standardizes project evaluations and helps to reduce the subjective bias in manual grading.
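The pipeline described in the abstract - representing each criterion as a fuzzy variable, applying an expert-defined rule base, and defuzzifying into a single success score - can be sketched as a minimal Mamdani-style inference in pure Python. The specific membership functions, scales (0-10 inputs, 0-100 output), and rules below are illustrative assumptions, not the fuzzy sets and expert rules defined in the paper:

```python
# Minimal Mamdani-style fuzzy inference sketch for project evaluation.
# Input criteria follow the paper (clean code, use of inheritance,
# functionality); the ranges, fuzzy sets, and rules are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a to a peak at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Assumed fuzzy sets on a 0-10 scale for each input criterion.
LOW  = lambda x: tri(x, -1, 0, 5)
MED  = lambda x: tri(x, 0, 5, 10)
HIGH = lambda x: tri(x, 5, 10, 11)

# Assumed output fuzzy sets for "project success" on a 0-100 scale.
POOR = lambda y: tri(y, -1, 0, 50)
OK   = lambda y: tri(y, 0, 50, 100)
GOOD = lambda y: tri(y, 50, 100, 101)

def evaluate(clean_code, inheritance, functionality):
    """Fire the rule base (min = AND, max = OR), aggregate the clipped
    output sets with max, and defuzzify by centroid."""
    # Illustrative rules; the paper's rule base was built with course experts.
    rules = [
        (min(HIGH(clean_code), HIGH(inheritance), HIGH(functionality)), GOOD),
        (min(MED(clean_code), MED(functionality)), OK),
        (max(LOW(clean_code), LOW(functionality)), POOR),
    ]
    num = den = 0.0
    steps = 200  # discretize the output universe for the centroid
    for i in range(steps + 1):
        y = 100.0 * i / steps
        mu = max(min(weight, out_set(y)) for weight, out_set in rules)
        num += mu * y
        den += mu
    return num / den if den else 0.0
```

For example, `evaluate(9, 9, 9)` yields a success score above 50 while `evaluate(1, 1, 1)` falls below it; in practice the rule base and set shapes would be tuned with instructors, as the paper does.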
Related papers
- How fair are we? From conceptualization to automated assessment of fairness definitions [6.741000368514124]
MODNESS is a model-driven approach for user-defined fairness concepts in software systems.
It generates the source code to implement fair assessment based on these custom definitions.
Our findings reveal that most of the current approaches do not support user-defined fairness concepts.
arXiv Detail & Related papers (2024-04-15T16:46:17Z) - Identifying Student Profiles Within Online Judge Systems Using Explainable Artificial Intelligence [6.638206014723678]
Online Judge (OJ) systems are typically considered within programming-related courses as they yield fast and objective assessments of the code developed by the students.
This work aims to tackle this limitation by considering the further exploitation of the information gathered by the OJ and automatically inferring feedback for both the student and the instructor.
arXiv Detail & Related papers (2024-01-29T12:11:30Z) - Hierarchical Programmatic Reinforcement Learning via Learning to Compose Programs [58.94569213396991]
We propose a hierarchical programmatic reinforcement learning framework to produce program policies.
By learning to compose programs, our proposed framework can produce program policies that describe out-of-distributionally complex behaviors.
The experimental results in the Karel domain show that our proposed framework outperforms baselines.
arXiv Detail & Related papers (2023-01-30T14:50:46Z) - Reinforcement Learning with Success Induced Task Prioritization [68.8204255655161]
We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning.
The algorithm selects the order of tasks that provide the fastest learning for agents.
We demonstrate that SITP matches or surpasses the results of other curriculum design methods.
arXiv Detail & Related papers (2022-12-30T12:32:43Z) - A Multicriteria Evaluation for Data-Driven Programming Feedback Systems: Accuracy, Effectiveness, Fallibility, and Students' Response [7.167352606079407]
Data-driven programming feedback systems can help novices to program in the absence of a human tutor.
Prior evaluations showed that these systems improve learning in terms of test scores, or task completion efficiency.
Less-studied aspects include the inherent fallibility of current state-of-the-art systems, students' programming behavior in response to correct/incorrect feedback, and effective/ineffective system components.
arXiv Detail & Related papers (2022-07-27T00:29:32Z) - Learning Program Semantics with Code Representations: An Empirical Study [22.953964699210296]
Program semantics learning is the core and fundamental for various code intelligent tasks.
We categorize current mainstream code representation techniques into four categories.
We evaluate its performance on three diverse and popular code intelligent tasks.
arXiv Detail & Related papers (2022-03-22T14:51:44Z) - Lifelong Learning Metrics [63.8376359764052]
The DARPA Lifelong Learning Machines (L2M) program seeks to yield advances in artificial intelligence (AI) systems.
This document outlines a formalism for constructing and characterizing the performance of agents performing lifelong learning scenarios.
arXiv Detail & Related papers (2022-01-20T16:29:14Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z) - Automatic Assessment of the Design Quality of Python Programs with Personalized Feedback [0.0]
We propose a neural network model to assess the design of a program and provide personalized feedback to guide students on how to make corrections.
The model's effectiveness is evaluated on a corpus of student programs written in Python.
Students who participated in the study improved their program design scores by 19.58%.
arXiv Detail & Related papers (2021-06-02T18:04:53Z) - CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning [138.40338621974954]
CausalWorld is a benchmark for causal structure and transfer learning in a robotic manipulation environment.
Tasks consist of constructing 3D shapes from a given set of blocks - inspired by how children learn to build complex structures.
arXiv Detail & Related papers (2020-10-08T23:01:13Z) - Overview of the TREC 2019 Fair Ranking Track [65.15263872493799]
The goal of the TREC Fair Ranking track was to develop a benchmark for evaluating retrieval systems in terms of fairness to different content providers.
This paper presents an overview of the track, including the task definition, descriptions of the data and the annotation process.
arXiv Detail & Related papers (2020-03-25T21:34:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.