Deep Learning for Opinion Mining and Topic Classification of Course
Reviews
- URL: http://arxiv.org/abs/2304.03394v2
- Date: Fri, 16 Jun 2023 14:15:10 GMT
- Title: Deep Learning for Opinion Mining and Topic Classification of Course
Reviews
- Authors: Anna Koufakou
- Abstract summary: We collected and pre-processed a large number of course reviews publicly available online.
We applied machine learning techniques with the goal of gaining insight into student sentiments and topics.
For sentiment polarity, the top model was RoBERTa with 95.5% accuracy and 84.7% F1-macro, while for topic classification, an SVM was the top with 79.8% accuracy and 80.6% F1-macro.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Student opinions for a course are important to educators and administrators,
regardless of the type of course or institution. Reading and manually
analyzing open-ended feedback becomes infeasible for massive volumes of
comments at the institution level or in online forums. In this paper, we collected and
pre-processed a large number of course reviews publicly available online. We
applied machine learning techniques with the goal of gaining insight into student
sentiments and topics. Specifically, we utilized current Natural Language
Processing (NLP) techniques, such as word embeddings and deep neural networks,
and state-of-the-art BERT (Bidirectional Encoder Representations from
Transformers), RoBERTa (Robustly optimized BERT approach) and XLNet
(Generalized Autoregressive Pretraining). We performed extensive
experimentation to compare these techniques against traditional approaches. This
comparative study demonstrates how to apply modern machine learning approaches
for sentiment polarity extraction and topic-based classification utilizing
course feedback. For sentiment polarity, the top model was RoBERTa with 95.5%
accuracy and 84.7% F1-macro, while for topic classification, an SVM (Support
Vector Machine) was the top classifier with 79.8% accuracy and 80.6% F1-macro.
We also provided an in-depth exploration of the effect of certain
hyperparameters on model performance and discussed our observations. These
findings can be used by institutions and course providers as a guide for
analyzing their own course feedback using NLP models towards self-evaluation
and improvement.
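The paper reports results rather than code, so the following is a minimal sketch of the kind of transformer fine-tuning pipeline it describes, using the Hugging Face transformers library. The file name course_reviews.csv, the three-way polarity labels, and all hyperparameter values are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch: fine-tune RoBERTa for 3-class sentiment polarity on
# course reviews. Dataset path, label scheme, and hyperparameters are
# assumptions for illustration only.
import pandas as pd
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

reviews = pd.read_csv("course_reviews.csv")  # assumed columns: text, label in {0,1,2}
dataset = Dataset.from_pandas(reviews).train_test_split(test_size=0.2)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=3)

def tokenize(batch):
    # Truncation length is a typical choice, not the paper's setting.
    return tokenizer(batch["text"], truncation=True, max_length=256)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="roberta-sentiment",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)
trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"],
                  eval_dataset=dataset["test"],
                  tokenizer=tokenizer)
trainer.train()
```

The SVM that won the topic task is a traditional baseline commonly paired with TF-IDF features; a comparable hedged sketch with scikit-learn (the topic column and feature settings are again assumptions) might look like this:

```python
# Hypothetical sketch: linear SVM over TF-IDF features for topic
# classification of course reviews; all settings are illustrative.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

reviews = pd.read_csv("course_reviews.csv")  # assumed columns: text, topic
X_train, X_test, y_train, y_test = train_test_split(
    reviews["text"], reviews["topic"], test_size=0.2, random_state=42)

svm = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                    LinearSVC(C=1.0))
svm.fit(X_train, y_train)

pred = svm.predict(X_test)
print(f"accuracy={accuracy_score(y_test, pred):.3f}  "
      f"F1-macro={f1_score(y_test, pred, average='macro'):.3f}")
```

Choices such as the maximum sequence length, learning rate, and TF-IDF n-gram range are typical of the hyperparameters whose effect an exploration like the paper's would vary.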
Related papers
- BERT-Based Approach for Automating Course Articulation Matrix Construction with Explainable AI [1.4214002697449326]
Course Outcome (CO) and Program Outcome (PO)/Program-Specific Outcome (PSO) alignment is a crucial task for ensuring curriculum coherence and assessing educational effectiveness.
This work demonstrates the potential of transfer learning with BERT-based models for the automated generation of the Course Articulation Matrix (CAM).
Our system achieves accuracy, precision, recall, and F1-score values of 98.66%, 98.67%, 98.66%, and 98.66%, respectively.
arXiv Detail & Related papers (2024-11-21T16:02:39Z) - Automated Assessment of Encouragement and Warmth in Classrooms Leveraging Multimodal Emotional Features and ChatGPT [7.273857543125784]
Our work explores a multimodal approach to automatically estimating encouragement and warmth in classrooms.
We employed facial and speech emotion recognition with sentiment analysis to extract interpretable features from video, audio, and transcript data.
We demonstrated our approach on the GTI dataset, comprising 367 16-minute video segments from 92 authentic lesson recordings.
arXiv Detail & Related papers (2024-04-01T16:58:09Z) - Machine Unlearning of Pre-trained Large Language Models [17.40601262379265]
This study investigates the concept of the 'right to be forgotten' within the context of large language models (LLMs).
We explore machine unlearning as a pivotal solution, with a focus on pre-trained models.
arXiv Detail & Related papers (2024-02-23T07:43:26Z) - Generative Judge for Evaluating Alignment [84.09815387884753]
We propose a generative judge with 13B parameters, Auto-J, designed to address these challenges.
Our model is trained on user queries and LLM-generated responses drawn from a wide range of real-world scenarios.
Experimentally, Auto-J outperforms a series of strong competitors, including both open-source and closed-source models.
arXiv Detail & Related papers (2023-10-09T07:27:15Z) - Utilizing Natural Language Processing for Automated Assessment of
Classroom Discussion [0.7087237546722617]
In this work, we experimented with various modern natural language processing (NLP) techniques to automatically generate rubric scores for individual dimensions of classroom text discussion quality.
Despite the limited amount of data, our work shows encouraging results in some of the rubrics while suggesting that there is room for improvement in the others.
arXiv Detail & Related papers (2023-06-21T16:45:24Z) - ASPEST: Bridging the Gap Between Active Learning and Selective
Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain. (A toy sketch contrasting selective prediction and active learning appears after this list.)
arXiv Detail & Related papers (2023-04-07T23:51:07Z) - Investigating Fairness Disparities in Peer Review: A Language Model
Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study of fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - Better Language Model with Hypernym Class Prediction [101.8517004687825]
Class-based language models (LMs) have long been used to address context sparsity in $n$-gram LMs.
In this study, we revisit this approach in the context of neural LMs.
arXiv Detail & Related papers (2022-03-21T01:16:44Z) - ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback on student code for a new programming question from just a few examples provided by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z) - Hierarchical Bi-Directional Self-Attention Networks for Paper Review
Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: a sentence encoder (level one), an intra-review encoder (level two), and an inter-review encoder (level three).
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
arXiv Detail & Related papers (2020-11-02T08:07:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.