Cognitively Aided Zero-Shot Automatic Essay Grading
- URL: http://arxiv.org/abs/2102.11258v1
- Date: Mon, 22 Feb 2021 18:41:59 GMT
- Title: Cognitively Aided Zero-Shot Automatic Essay Grading
- Authors: Sandeep Mathias, Rudra Murthy, Diptesh Kanojia, and Pushpak Bhattacharyya
- Abstract summary: We describe a solution to the problem of zero-shot automatic essay grading that uses cognitive information in the form of gaze behaviour.
Our experiments show that using gaze behaviour improves the performance of AEG systems by an average of almost 5 percentage points of QWK, especially when the system scores an essay written in response to a new, unseen prompt.
- Score: 25.772899595946416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic essay grading (AEG) is a process in which machines assign a grade
to an essay written in response to a topic, called the prompt. In zero-shot AEG,
we train a system to grade essays written in response to a new prompt that was
not present in our training data. In this paper, we describe a solution to the
problem of zero-shot automatic essay grading using cognitive information in the
form of gaze behaviour. Our experiments show that using gaze behaviour improves
the performance of AEG systems by an average of almost 5 percentage points of
QWK, especially when the system scores an essay written in response to a new,
unseen prompt.
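For reference, QWK (quadratic weighted kappa) measures agreement between
system-assigned and human-assigned ordinal scores. A minimal sketch of
computing it with scikit-learn's cohen_kappa_score follows; the grades below
are toy values for illustration, not data from the paper.

from sklearn.metrics import cohen_kappa_score

# Toy gold grades and system predictions (illustrative values only,
# not results from the paper).
human_scores = [2, 3, 4, 4, 1, 3]
system_scores = [2, 3, 3, 4, 2, 3]

# weights="quadratic" penalises large disagreements more heavily than
# small ones, which suits ordinal essay grades.
qwk = cohen_kappa_score(human_scores, system_scores, weights="quadratic")
print(f"QWK: {qwk:.3f}")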
Related papers
- Hey AI Can You Grade My Essay?: Automatic Essay Grading [1.03590082373586]
We introduce a new model that outperforms the state-of-the-art models in the field of automatic essay grading (AEG).
The model uses collaborative and transfer learning: one network checks the grammatical and structural features of the essay's sentences, while another scores the overall idea presented in the essay.
Our proposed model achieves the highest accuracy, 85.50%.
arXiv Detail & Related papers (2024-10-12T01:17:55Z)
- Transformer-based Joint Modelling for Automatic Essay Scoring and Off-Topic Detection [3.609048819576875]
We propose an unsupervised technique that jointly scores essays and detects off-topic essays.
Our proposed method outperforms the baseline we created and earlier conventional methods on two essay-scoring datasets.
arXiv Detail & Related papers (2024-03-24T21:44:14Z)
- Review of feedback in Automated Essay Scoring [6.445605125467574]
The first automated essay scoring system was developed 50 years ago.
This paper reviews research on feedback in automated essay scoring, including different feedback types and essay traits.
arXiv Detail & Related papers (2023-07-09T11:04:13Z)
- Prompt- and Trait Relation-aware Cross-prompt Essay Trait Scoring [3.6825890616838066]
Automated essay scoring (AES) aims to score essays written for a given prompt, which defines the writing topic.
Most existing AES systems assume that test essays are written for the same prompts seen in training, and they assign only a holistic score.
We propose a robust model: a prompt- and trait relation-aware cross-prompt essay trait scorer.
arXiv Detail & Related papers (2023-05-26T11:11:19Z)
- TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement learning (TEMPERA).
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves an average 5.33x improvement in sample efficiency compared to traditional fine-tuning methods.
arXiv Detail & Related papers (2022-11-21T22:38:20Z)
- Many Hands Make Light Work: Using Essay Traits to Automatically Score Essays [41.851075178681015]
We describe a way to score essays holistically using a multi-task learning (MTL) approach.
We compare our results with a single-task learning (STL) approach, using both LSTMs and BiLSTMs.
We find that the MTL-based BiLSTM system gives the best results for scoring the essay holistically, while also performing well on scoring the essay traits (a minimal sketch of this shared-encoder, multi-head design appears after this list).
arXiv Detail & Related papers (2021-02-01T11:31:09Z)
- My Teacher Thinks The World Is Flat! Interpreting Automatic Essay Scoring Mechanism [71.34160809068996]
Recent work shows that automated scoring systems are vulnerable even to common-sense adversarial samples.
We utilize recent advances in interpretability to find the extent to which features such as coherence, content and relevance are important for automated scoring mechanisms.
We also find that since the models are not semantically grounded with world knowledge and common sense, adding false facts such as "the world is flat" actually increases the score instead of decreasing it.
arXiv Detail & Related papers (2020-12-27T06:19:20Z)
- Get It Scored Using AutoSAS -- An Automated System for Scoring Short Answers [63.835172924290326]
We present a fast, scalable, and accurate approach to automated Short Answer Scoring (SAS).
We propose and explain the design and development of a system for SAS, namely AutoSAS.
AutoSAS shows state-of-the-art performance, improving results by over 8% on some question prompts.
arXiv Detail & Related papers (2020-12-21T10:47:30Z)
- Knowledge Distillation for Improved Accuracy in Spoken Question Answering [63.72278693825945]
We devise a training strategy to perform knowledge distillation from spoken documents and their written counterparts.
Our work takes a step towards distilling knowledge from the language model as a supervision signal.
Experiments demonstrate that our approach outperforms several state-of-the-art language models on the Spoken-SQuAD dataset.
arXiv Detail & Related papers (2020-10-21T15:18:01Z)
- Evaluation Toolkit For Robustness Testing Of Automatic Essay Scoring Systems [64.4896118325552]
We evaluate the current state-of-the-art AES models using a model adversarial evaluation scheme and associated metrics.
We find that AES models are highly overstable: even heavy modifications (as much as 25% of the text) with content unrelated to the topic of the questions do not decrease the scores produced by the models (a sketch of such an overstability probe appears after this list).
arXiv Detail & Related papers (2020-07-14T03:49:43Z)
- Improving Readability for Automatic Speech Recognition Transcription [50.86019112545596]
We propose a novel NLP task called ASR post-processing for readability (APR).
APR aims to transform the noisy ASR output into a readable text for humans and downstream tasks while maintaining the semantic meaning of the speaker.
We compare fine-tuned models based on several open-sourced and adapted pre-trained models with the traditional pipeline method.
arXiv Detail & Related papers (2020-04-09T09:26:42Z)
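As referenced in the "Many Hands Make Light Work" entry above, here is a
minimal PyTorch sketch of a multi-task essay scorer: a shared BiLSTM encoder
feeding one holistic head plus one head per trait. Hidden sizes, pooling, and
the trait names are assumptions for illustration, not the paper's actual
configuration.

import torch
import torch.nn as nn

class MTLEssayScorer(nn.Module):
    # Shared BiLSTM encoder with a holistic head and per-trait heads.
    # A sketch of the multi-task idea only, under assumed sizes/traits.
    def __init__(self, vocab_size, emb_dim=100, hidden=128,
                 traits=("content", "organization", "word_choice")):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.holistic_head = nn.Linear(2 * hidden, 1)
        self.trait_heads = nn.ModuleDict(
            {t: nn.Linear(2 * hidden, 1) for t in traits})

    def forward(self, token_ids):
        emb = self.embed(token_ids)        # (batch, seq, emb_dim)
        out, _ = self.encoder(emb)         # (batch, seq, 2*hidden)
        pooled = out.mean(dim=1)           # mean-pool over tokens
        holistic = self.holistic_head(pooled).squeeze(-1)
        traits = {t: head(pooled).squeeze(-1)
                  for t, head in self.trait_heads.items()}
        return holistic, traits

# Toy usage: a batch of 2 essays, 50 token ids each.
model = MTLEssayScorer(vocab_size=5000)
holistic_score, trait_scores = model(torch.randint(1, 5000, (2, 50)))

Training would sum a regression loss (e.g. MSE) over the holistic and trait
outputs, which is what lets the shared encoder benefit from every scoring
signal at once.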
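Similarly, the overstability finding in the "Evaluation Toolkit" entry can be
probed with a few lines of Python. The sketch below replaces a fraction of an
essay's sentences with off-topic content and compares scores before and after;
score_essay is a hypothetical stand-in for any trained AES model, not an API
from the toolkit.

import random

def overstability_probe(essay, off_topic_sentences, score_essay,
                        fraction=0.25):
    # Replace `fraction` of the essay's sentences with off-topic
    # content; a robust scorer's output should drop noticeably.
    # score_essay is a hypothetical callable: str -> float.
    sentences = essay.split(". ")
    n_swap = max(1, int(len(sentences) * fraction))
    for i in random.sample(range(len(sentences)), n_swap):
        sentences[i] = random.choice(off_topic_sentences)
    perturbed = ". ".join(sentences)
    return score_essay(essay), score_essay(perturbed)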