Can AI Assistance Aid in the Grading of Handwritten Answer Sheets?
- URL: http://arxiv.org/abs/2408.12870v1
- Date: Fri, 23 Aug 2024 07:00:25 GMT
- Title: Can AI Assistance Aid in the Grading of Handwritten Answer Sheets?
- Authors: Pritam Sil, Parag Chaudhuri, Bhaskaran Raman
- Abstract summary: This work introduces an AI-assisted grading pipeline.
The pipeline first applies text detection to automatically locate the question regions in a question paper PDF.
It then uses SOTA text detection methods to highlight important keywords in the handwritten answer regions of scanned answer sheets, assisting the grader.
- Score: 2.025468874117372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With recent advancements in artificial intelligence (AI), there has been growing interest in using state-of-the-art (SOTA) AI solutions to provide assistance in grading handwritten answer sheets. While a few commercial products exist, the question of whether AI assistance can actually reduce grading effort and time has not yet been carefully considered in the published literature. This work introduces an AI-assisted grading pipeline. The pipeline first uses text detection to automatically locate the question regions present in a question paper PDF. Next, it uses SOTA text detection methods to highlight important keywords present in the handwritten answer regions of scanned answer sheets to assist in the grading process. We then evaluate a prototype implementation of the AI-assisted grading pipeline deployed on an existing e-learning management platform. The evaluation covers 5 different real-life examinations across 4 different courses at a reputed institute, comprising a total of 42 questions, 17 graders, and 468 submissions. We log and analyze the grading time for each handwritten answer with and without AI assistance. Our evaluations show that, on average, graders take 31% less time to grade a single response and 33% less time to grade a full answer sheet when using AI assistance.
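The abstract describes the pipeline only at this level of detail; as a rough illustration, the sketch below shows what such a two-stage pipeline could look like in Python, assuming PyMuPDF (fitz) for locating question labels in the question-paper PDF and EasyOCR as a stand-in for the SOTA text detector. The function names, the "Q1"/"Q2" label convention, and the keyword list are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an AI-assisted grading pipeline of this kind (illustrative only).
# Assumptions (not from the paper): PyMuPDF for the question paper, EasyOCR as the
# text detector, and a per-question keyword list supplied by the instructor.
import fitz     # PyMuPDF: pip install pymupdf
import easyocr  # pip install easyocr

def detect_question_regions(pdf_path):
    """Naively locate question labels (Q1, Q2, ...) in the question-paper PDF."""
    doc = fitz.open(pdf_path)
    regions = []
    for page_no, page in enumerate(doc):
        for q in range(1, 50):
            for rect in page.search_for(f"Q{q}"):
                regions.append({"question": q, "page": page_no,
                                "bbox": (rect.x0, rect.y0, rect.x1, rect.y1)})
    return regions

def highlight_keywords(answer_image, keywords, reader=None):
    """Run text detection/recognition on a scanned answer and flag keyword hits."""
    reader = reader or easyocr.Reader(["en"])
    hits = []
    for bbox, text, confidence in reader.readtext(answer_image):
        if any(k.lower() in text.lower() for k in keywords):
            hits.append({"text": text, "bbox": bbox, "confidence": confidence})
    return hits

if __name__ == "__main__":
    print(detect_question_regions("question_paper.pdf"))
    print(highlight_keywords("answer_page_1.png", keywords=["entropy", "gradient"]))
```

A grading interface could then overlay the returned keyword boxes on the scanned answer so the grader can spot relevant terms at a glance, which is the kind of assistance the paper measures.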
Related papers
- Evaluating GPT-4 at Grading Handwritten Solutions in Math Exams [48.99818550820575]
We leverage state-of-the-art multi-modal AI models, in particular GPT-4o, to automatically grade handwritten responses to college-level math exams.
Using real student responses to questions in a probability theory exam, we evaluate GPT-4o's alignment with ground-truth scores from human graders using various prompting techniques.
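The summary does not reproduce the prompting techniques; purely as an illustration, grading a scanned solution with GPT-4o through the OpenAI Python SDK could look like the sketch below, where the rubric text, mark scheme, and file name are placeholder assumptions.

```python
# Illustrative only: one way to ask GPT-4o to grade a scanned handwritten solution.
# The rubric, max marks, and file name are placeholders, not from the paper.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def grade_handwritten_solution(image_path, rubric, max_marks):
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Grade this handwritten solution out of {max_marks} marks "
                         f"using the rubric below. Reply with the score and a one-line "
                         f"justification.\nRubric: {rubric}"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(grade_handwritten_solution("solution_q3.png",
                                  "Award marks for correct use of Bayes' rule.", 5))
```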
arXiv Detail & Related papers (2024-11-07T22:51:47Z) - A Multi-Year Grey Literature Review on AI-assisted Test Automation [46.97326049485643]
Test Automation (TA) techniques are crucial for quality assurance in software engineering.
TA techniques face limitations such as high test suite maintenance costs and the need for extensive programming skills.
Artificial Intelligence (AI) offers new opportunities to address these issues through automation and improved practices.
arXiv Detail & Related papers (2024-08-12T15:26:36Z) - Could ChatGPT get an Engineering Degree? Evaluating Higher Education Vulnerability to AI Assistants [175.9723801486487]
We evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer assessment questions.
GPT-4 answers an average of 65.8% of questions correctly, and can produce the correct answer under at least one prompting strategy for 85.1% of questions.
Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
arXiv Detail & Related papers (2024-08-07T12:11:49Z) - Artificial intelligence to automate the systematic review of scientific literature [0.0]
We present a survey of AI techniques proposed in the last 15 years to help researchers conduct systematic analyses of scientific literature.
We describe the tasks currently supported, the types of algorithms applied, and available tools proposed in 34 primary studies.
arXiv Detail & Related papers (2024-01-13T19:12:49Z) - Automatic Prompt Optimization with "Gradient Descent" and Beam Search [64.08364384823645]
Large Language Models (LLMs) have shown impressive performance as general purpose agents, but their abilities remain highly dependent on prompts.
We propose a simple and nonparametric solution to this problem, Automatic Prompt Optimization (APO).
APO uses minibatches of data to form natural language "gradients" that criticize the current prompt.
The gradients are then "propagated" into the prompt by editing the prompt in the opposite semantic direction of the gradient.
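As a rough sketch of the idea (not the paper's implementation), one APO-style step could look like the following, where `llm` is a hypothetical text-in/text-out LLM call and the beam search over candidate prompts is omitted.

```python
# Highly simplified APO-style step (illustrative, not the paper's implementation).
# `llm` is a hypothetical stand-in for any text-in/text-out LLM call.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def apo_step(current_prompt, minibatch):
    # 1. Collect minibatch examples where the current prompt fails.
    errors = [(x, y) for x, y in minibatch
              if llm(f"{current_prompt}\nInput: {x}") != y]
    if not errors:
        return current_prompt
    # 2. Ask the LLM for a natural-language "gradient": a critique of the prompt.
    gradient = llm(
        "The prompt below made mistakes on the listed examples. "
        f"Explain what is wrong with it.\nPrompt: {current_prompt}\nErrors: {errors}"
    )
    # 3. "Propagate" the gradient: edit the prompt against the critique.
    return llm(
        "Rewrite the prompt so that it fixes the critique while keeping its intent.\n"
        f"Prompt: {current_prompt}\nCritique: {gradient}"
    )
```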
arXiv Detail & Related papers (2023-05-04T15:15:22Z) - Automated Reading Passage Generation with OpenAI's Large Language Model [0.0]
This paper utilizes OpenAI's latest transformer-based language model, GPT-3, to generate reading passages.
Existing reading passages were used in carefully engineered prompts to ensure the AI-generated text has similar content and structure to a fourth-grade reading passage.
All AI-generated passages, along with original passages were evaluated by human judges according to their coherence, appropriateness to fourth graders, and readability.
arXiv Detail & Related papers (2023-04-10T14:30:39Z) - Collaboration with Conversational AI Assistants for UX Evaluation: Questions and How to Ask them (Voice vs. Text) [18.884080068561843]
We conducted a Wizard-of-Oz design probe study with 20 participants who interacted with simulated AI assistants via text or voice.
We found that participants asked for five categories of information: user actions, user mental model, help from the AI assistant, product and task information, and user demographics.
The text assistant was perceived as significantly more efficient, but both were rated equally in satisfaction and trust.
arXiv Detail & Related papers (2023-03-07T03:59:14Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - Effidit: Your AI Writing Assistant [60.588370965898534]
Effidit is a digital writing assistant that helps users write higher-quality text more efficiently using artificial intelligence (AI) technologies.
In Effidit, we significantly expand the capabilities of a writing assistant by providing functions in five categories: text completion, error checking, text polishing, keywords to sentences (K2S), and cloud input methods (cloud IME).
arXiv Detail & Related papers (2022-08-03T02:24:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.