Text similarity analysis for evaluation of descriptive answers
- URL: http://arxiv.org/abs/2105.02935v1
- Date: Thu, 6 May 2021 20:19:58 GMT
- Title: Text similarity analysis for evaluation of descriptive answers
- Authors: Vedant Bahel and Achamma Thomas
- Abstract summary: This paper proposes a text-analysis-based automated approach for evaluating descriptive answers in an examination.
In this architecture, the examiner creates a sample answer sheet for the given set of questions.
Using text summarization, text semantics, and keyword summarization, the final score for each answer is calculated.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Keeping in mind the need for intelligent systems in the educational
sector, this paper proposes a text-analysis-based automated approach for the
automatic evaluation of descriptive answers in an examination. In particular,
the research focuses on applying Natural Language Processing and Data Mining
concepts to a computer-aided examination evaluation system. The paper presents
an architecture for fair evaluation of answer sheets. In this architecture,
the examiner creates a sample answer sheet for the given set of questions.
Using text summarization, text semantics, and keyword summarization, the final
score for each answer is calculated. The text similarity model is based on the
Siamese Manhattan LSTM (MaLSTM). The results of this research were compared to
manually graded assignments and other existing systems. The approach was found
to be efficient enough to be implemented in an institution or a university.
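The scoring architecture described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the keyword sets, the weights `w_semantic`/`w_keyword`, and the toy encoding vectors are all hypothetical, and the MaLSTM similarity is reduced to its published form exp(-||h1 - h2||_1) over precomputed sentence encodings (in the real model, a shared-weight LSTM produces these from word embeddings).

```python
import math

def malstm_similarity(h1, h2):
    """Siamese Manhattan LSTM similarity: exp(-||h1 - h2||_1), in (0, 1]."""
    l1 = sum(abs(a - b) for a, b in zip(h1, h2))
    return math.exp(-l1)

def keyword_overlap(reference_keywords, answer_keywords):
    """Jaccard overlap of keyword sets (a stand-in for keyword summarization)."""
    ref, ans = set(reference_keywords), set(answer_keywords)
    return len(ref & ans) / len(ref | ans) if ref | ans else 0.0

def final_score(ref_encoding, ans_encoding, ref_keywords, ans_keywords,
                w_semantic=0.7, w_keyword=0.3, max_marks=10):
    """Hypothetical weighted combination of semantic and keyword scores."""
    semantic = malstm_similarity(ref_encoding, ans_encoding)
    keywords = keyword_overlap(ref_keywords, ans_keywords)
    return max_marks * (w_semantic * semantic + w_keyword * keywords)

# Toy encodings standing in for the final LSTM hidden states of the two
# Siamese branches (examiner's model answer vs. student answer).
ref_enc = [0.2, -0.5, 0.9, 0.1]
ans_enc = [0.25, -0.4, 0.8, 0.0]
score = final_score(ref_enc, ans_enc,
                    {"photosynthesis", "chlorophyll", "glucose"},
                    {"photosynthesis", "glucose", "sunlight"})
print(round(score, 2))  # → 6.43
```

The weighted sum is one plausible way to combine the two signals; the paper itself does not publish its exact combination rule, so treat the weights as placeholders to be tuned against manually graded answers.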
Related papers
- A Comparative Study of Quality Evaluation Methods for Text Summarization [0.5512295869673147]
This paper proposes a novel method based on large language models (LLMs) for evaluating text summarization.
Our results show that LLM evaluation aligns closely with human evaluation, while widely used automatic metrics such as ROUGE-2, BERTScore, and SummaC do not, and also lack consistency.
arXiv Detail & Related papers (2024-06-30T16:12:37Z)
- Automatic assessment of text-based responses in post-secondary education: A systematic review [0.0]
There is immense potential to automate rapid assessment and feedback of text-based responses in education.
To understand how text-based automatic assessment systems have been developed and applied in education in recent years, three research questions are considered.
This systematic review provides an overview of recent educational applications of text-based assessment systems.
arXiv Detail & Related papers (2023-08-30T17:16:45Z)
- Large Language Models are Diverse Role-Players for Summarization Evaluation [82.31575622685902]
A document summary's quality can be assessed by human annotators on various criteria, both objective ones like grammar and correctness, and subjective ones like informativeness, succinctness, and appeal.
Most automatic evaluation metrics, such as BLEU/ROUGE, may not adequately capture these dimensions.
We propose a new LLM-based evaluation framework that provides comprehensive assessment by comparing generated text and reference text from both objective and subjective aspects.
arXiv Detail & Related papers (2023-03-27T10:40:59Z)
- ProtSi: Prototypical Siamese Network with Data Augmentation for Few-Shot Subjective Answer Evaluation [0.8959391124399926]
ProtSi Network is a semi-supervised architecture that, for the first time, applies few-shot learning to subjective answer evaluation.
We employ an unsupervised diverse paraphrasing model, ProtAugment, to prevent overfitting and enable effective few-shot text classification.
arXiv Detail & Related papers (2022-11-17T19:33:35Z)
- Textual Entailment Recognition with Semantic Features from Empirical Text Representation [60.31047947815282]
A text entails a hypothesis if and only if the truth of the hypothesis follows from the text.
In this paper, we propose a novel approach to identifying the textual entailment relationship between text and hypothesis.
We employ an element-wise Manhattan distance vector-based feature that can identify the semantic entailment relationship between the text-hypothesis pair.
arXiv Detail & Related papers (2022-10-18T10:03:51Z)
- Suggesting Relevant Questions for a Query Using Statistical Natural Language Processing Technique [0.0]
Suggesting similar questions for a user query has many applications ranging from reducing search time of users on e-commerce websites, training of employees in companies to holistic learning for students.
The use of Natural Language Processing techniques for suggesting similar questions is prevalent in existing architectures.
arXiv Detail & Related papers (2022-04-26T04:30:16Z)
- Get It Scored Using AutoSAS -- An Automated System for Scoring Short Answers [63.835172924290326]
We present a fast, scalable, and accurate approach to automated Short Answer Scoring (SAS).
We propose and explain the design and development of a system for SAS, namely AutoSAS.
AutoSAS shows state-of-the-art performance, improving results by over 8% on some question prompts.
arXiv Detail & Related papers (2020-12-21T10:47:30Z)
- Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: a sentence encoder (level one), an intra-review encoder (level two), and an inter-review encoder (level three).
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
arXiv Detail & Related papers (2020-11-02T08:07:50Z)
- A computational model implementing subjectivity with the 'Room Theory'. The case of detecting Emotion from Text [68.8204255655161]
This work introduces a new method to consider subjectivity and general context dependency in text analysis.
By using similarity measure between words, we are able to extract the relative relevance of the elements in the benchmark.
This method could be applied to all the cases where evaluating subjectivity is relevant to understand the relative value or meaning of a text.
arXiv Detail & Related papers (2020-05-12T21:26:04Z)
- Word Embedding-based Text Processing for Comprehensive Summarization and Distinct Information Extraction [1.552282932199974]
We propose two automated text processing frameworks specifically designed to analyze online reviews.
The first framework summarizes the reviews dataset by extracting essential sentences.
The second framework is based on a question-answering neural network model trained to extract answers to multiple different questions.
arXiv Detail & Related papers (2020-04-21T02:43:31Z)
- ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension [53.037401638264235]
We present an evaluation server, ORB, that reports performance on seven diverse reading comprehension datasets.
The evaluation server places no restrictions on how models are trained, so it is a suitable test bed for exploring training paradigms and representation learning.
arXiv Detail & Related papers (2019-12-29T07:27:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.