Automated Content Grading Using Machine Learning
- URL: http://arxiv.org/abs/2004.04300v1
- Date: Wed, 8 Apr 2020 23:46:24 GMT
- Title: Automated Content Grading Using Machine Learning
- Authors: Rahul Kr Chauhan, Ravinder Saharan, Siddhartha Singh, Priti Sharma
- Abstract summary: This research project is a preliminary experiment in automating the grading of theoretical exam answers written by students in technical courses.
We show how machine learning algorithms can be used to automatically examine and grade theoretical content in exam answer papers.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Grading examination papers is a hectic, labor- and time-intensive task
that is often subject to inefficiency and bias. This research project is a
preliminary experiment in automating the grading of theoretical exam answers
written by students in technical courses, which have so far continued to be
graded by humans. In this paper, we show how machine learning algorithms can be
used to automatically examine and grade theoretical content in exam answer
papers. Bag-of-words representations, their vectors and centroids, and a few
semantic and lexical text features are used overall. Machine learning models
have been implemented on datasets built manually from exams taken by graduating
students enrolled in technical courses, and the models are compared to show the
effectiveness of each.
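To make the bag-of-words and centroid idea concrete, below is a minimal sketch of how a student answer could be scored against model answers. The library choice (scikit-learn), the reference answers, and the grade thresholds are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch of the bag-of-words + centroid scoring idea described
# above. The library choice (scikit-learn), the reference answers, and the
# grade thresholds are illustrative assumptions, not details from the paper.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical model answers for one exam question.
reference_answers = [
    "A binary search tree keeps its keys ordered, so lookups take O(log n) time.",
    "Binary search trees store smaller keys on the left and larger keys on the "
    "right, giving logarithmic search on balanced trees.",
]
student_answer = "A BST orders its keys, so searching is logarithmic when balanced."

# Build bag-of-words vectors over the references plus the student answer.
vectorizer = CountVectorizer()
matrix = vectorizer.fit_transform(reference_answers + [student_answer])
ref_vectors, student_vector = matrix[:-1], matrix[-1]

# The centroid of the reference-answer vectors stands in for the "ideal" answer.
centroid = np.asarray(ref_vectors.mean(axis=0))

# Score the student answer by cosine similarity to that centroid.
similarity = cosine_similarity(student_vector, centroid)[0, 0]

# Map the similarity to a grade band (thresholds invented for illustration).
grade = "A" if similarity > 0.7 else "B" if similarity > 0.4 else "C"
print(f"similarity={similarity:.2f}, grade={grade}")
```

In practice, the paper also incorporates semantic and lexical text features and compares several trained models; this sketch isolates only the vector-centroid similarity step.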
Related papers
- Automatic Generation of Behavioral Test Cases For Natural Language Processing Using Clustering and Prompting [6.938766764201549]
This paper introduces an automated approach to developing test cases by exploiting the power of large language models and statistical techniques.
We analyze the behavioral test profiles across four different classification algorithms and discuss the limitations and strengths of those models.
arXiv Detail & Related papers (2024-07-31T21:12:21Z)
- Creating a Trajectory for Code Writing: Algorithmic Reasoning Tasks [0.923607423080658]
This paper describes the instruments and the machine learning models used to validate them.
We use data collected in an introductory programming course during the penultimate week of the semester.
Preliminary research suggests that ART-type instruments can be combined with specific machine learning models to act as an effective learning trajectory.
arXiv Detail & Related papers (2024-04-03T05:07:01Z)
- Towards Machine Unlearning Benchmarks: Forgetting the Personal Identities in Facial Recognition Systems [4.985768723667418]
We propose a machine unlearning setting that aims to unlearn specific instances containing personal privacy (identity) while maintaining the original task of a given model.
Specifically, we propose two machine unlearning benchmark datasets, MUFAC and MUCAC, which are highly useful for evaluating the performance and robustness of machine unlearning algorithms.
arXiv Detail & Related papers (2023-11-03T21:00:32Z)
- Computer Aided Design and Grading for an Electronic Functional Programming Exam [0.0]
We introduce an algorithm for checking Proof Puzzles based on finding correct sequences of proof lines, which improves fairness compared to an existing edit-distance-based algorithm.
A higher-level language and an open-source tool for specifying regular expressions make creating complex regular expressions less error-prone.
We evaluate the resulting e-exam by analyzing the degree of automation in the grading process, asking students for their opinion, and critically reviewing our own experiences.
arXiv Detail & Related papers (2023-08-14T07:08:09Z)
- ALBench: A Framework for Evaluating Active Learning in Object Detection [102.81795062493536]
This paper contributes an active learning benchmark framework named ALBench for evaluating active learning in object detection.
Developed on an automatic deep model training system, the ALBench framework is easy to use, compatible with different active learning algorithms, and ensures the same training and testing protocols.
arXiv Detail & Related papers (2022-07-27T07:46:23Z)
- Automated Graph Machine Learning: Approaches, Libraries, Benchmarks and Directions [58.220137936626315]
This paper extensively discusses automated graph machine learning approaches.
We introduce AutoGL, the world's first dedicated open-source library for automated graph machine learning.
We also describe a tailored benchmark that supports unified, reproducible, and efficient evaluations.
arXiv Detail & Related papers (2022-01-04T18:31:31Z)
- Toward Educator-focused Automated Scoring Systems for Reading and Writing [0.0]
This paper addresses the challenges of data and label availability, authentic and extended writing, domain scoring, prompt and source variety, and transfer learning.
It employs techniques that preserve essay length as an important feature without increasing model training costs.
arXiv Detail & Related papers (2021-12-22T15:44:30Z)
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? [93.91375268580806]
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human in the loop, the canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods for a partially automated disinformation detection system.
arXiv Detail & Related papers (2021-11-09T13:30:34Z)
- Automated Machine Learning on Graphs: A Survey [81.21692888288658]
This paper is the first systematic and comprehensive review of automated machine learning on graphs.
We focus on hyperparameter optimization (HPO) and neural architecture search (NAS) for graph machine learning.
In the end, we share our insights on future research directions for automated graph machine learning.
arXiv Detail & Related papers (2021-03-01T04:20:33Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We do a systematic study on the effects of the most common issues of real-world datasets on the deep active learning process.
We derive two techniques that can speed up the active learning loop: partial uncertainty sampling and larger query sizes.
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
- ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension [53.037401638264235]
We present an evaluation server, ORB, that reports performance on seven diverse reading comprehension datasets.
The evaluation server places no restrictions on how models are trained, so it is a suitable test bed for exploring training paradigms and representation learning.
arXiv Detail & Related papers (2019-12-29T07:27:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.