Large Scale Analysis of Open MOOC Reviews to Support Learners' Course
Selection
- URL: http://arxiv.org/abs/2201.06967v1
- Date: Tue, 11 Jan 2022 10:24:49 GMT
- Title: Large Scale Analysis of Open MOOC Reviews to Support Learners' Course
Selection
- Authors: Manuel J. Gomez, Mario Calderón, Victor Sánchez, Félix J. García Clemente, José A. Ruipérez-Valiente
- Abstract summary: We analyze 2.4 million reviews (which is the largest MOOC reviews dataset used until now) from five different platforms.
Results show that numeric ratings are clearly biased (63% of them are 5-star ratings).
We expect our study to shed some light on the area and promote a more transparent approach in online education reviews.
- Score: 17.376856503445826
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The recent pandemic has changed the way we see education. It is not
surprising that children and college students are not the only ones using
online education. Millions of adults have signed up for online classes and
courses in recent years, and MOOC providers, such as Coursera or edX, are
reporting millions of new users signing up on their platforms. However,
students do face some challenges when choosing courses. Although online review
systems are standard across many verticals, no standardized or fully
decentralized review system exists in the MOOC ecosystem. In this vein, we
believe there is an opportunity to leverage the available open MOOC reviews to
build simpler and more transparent reviewing systems, allowing users to truly
identify the best courses out there. Specifically, in our research we analyze
2.4 million reviews (the largest MOOC review dataset used to date) from five
different platforms in order to determine the following: (1) whether numeric
ratings provide discriminant information to learners, (2) whether NLP-driven
sentiment analysis on textual reviews could provide valuable information to
learners, (3) whether we can leverage NLP-driven topic-finding techniques to
infer themes that could be important for learners, and (4) whether we can use
these models to effectively characterize MOOCs based on the open reviews.
Results show that numeric ratings are clearly biased (63% of them are 5-star
ratings), and topic modeling reveals some interesting topics related to course
advertisements, real-world applicability, and the difficulty of the different
courses. We expect our study to shed some light on the area and promote a more
transparent approach to online education reviews, which are becoming more and
more popular as we enter the post-pandemic era.
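The two quantitative ideas in the abstract can be illustrated with a minimal sketch. The data, lexicon, and scoring function below are hypothetical stand-ins, not the authors' actual pipeline (the paper analyzes 2.4 million real reviews with NLP models): the sketch computes the share of 5-star ratings to quantify rating bias, and scores review text with a tiny word-polarity lexicon as a toy proxy for sentiment analysis.

```python
from collections import Counter

# Toy review dataset: (star rating, review text). Hypothetical examples only.
reviews = [
    (5, "great course, very clear and useful"),
    (5, "excellent content"),
    (5, "loved it"),
    (4, "good but too difficult at times"),
    (1, "boring, felt like an advertisement"),
]

# (1) Rating bias: share of 5-star ratings in the sample.
ratings = Counter(stars for stars, _ in reviews)
five_star_share = ratings[5] / len(reviews)

# (2) Lexicon-based sentiment as a toy proxy for NLP-driven analysis.
# The lexicon is illustrative, not a real sentiment resource.
LEXICON = {"great": 1, "excellent": 1, "useful": 1, "good": 1,
           "loved": 1, "clear": 1, "boring": -1, "difficult": -1,
           "advertisement": -1}

def sentiment(text: str) -> int:
    """Sum word polarities; > 0 reads positive, < 0 negative."""
    return sum(LEXICON.get(w.strip(",.!"), 0) for w in text.lower().split())

scores = [sentiment(text) for _, text in reviews]
print(five_star_share)  # 0.6 on this toy sample
print(scores)
```

In the real study the 5-star share is 63%, so a raw average rating discriminates poorly between courses; text-derived signals like the sentiment score above carry the complementary information the paper mines at scale.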
Related papers
- Grading Massive Open Online Courses Using Large Language Models [3.0936354370614607]
Massive open online courses (MOOCs) offer free education globally to anyone with a computer and internet access.
Peer grading, often guided by a straightforward rubric, is the method of choice.
We explore the feasibility of using large language models (LLMs) to replace peer grading in MOOCs.
arXiv Detail & Related papers (2024-06-16T23:42:11Z)
- Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models [61.45529177682614]
We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models.
We show that models give substantively different answers when not forced.
We distill these findings into recommendations and open challenges in evaluating values and opinions in LLMs.
arXiv Detail & Related papers (2024-02-26T18:00:49Z)
- Recommended Guidelines for Effective MOOCs based on a Multiple-Case Study [3.62672718853196]
Massive Open Online Courses (MOOCs) appeared in 2008 and grew considerably in the past decade.
This paper analyzes data from 7 successful MOOCs that have attracted over 150,000 students in the past years.
The analysis led to the proposal of a set of guidelines to help instructors in designing more effective MOOCs.
arXiv Detail & Related papers (2022-04-07T12:41:50Z)
- Reviews in motion: a large scale, longitudinal study of review recommendations on Yelp [24.34131115451651]
We focus on "reclassification," wherein a platform changes its filtering decision for a review.
We compile over 12.5M reviews--more than 2M unique--across over 10k businesses.
Our data suggests demographic disparities in reclassifications, with more changes in lower density and low-middle income areas.
arXiv Detail & Related papers (2022-02-18T03:27:53Z)
- Polarity in the Classroom: A Case Study Leveraging Peer Sentiment Toward Scalable Assessment [4.588028371034406]
Accurately grading open-ended assignments in large or massive open online courses (MOOCs) is non-trivial.
In this work, we detail the process by which we create our domain-dependent lexicon and aspect-informed review form.
We end by analyzing validity and discussing conclusions from our corpus of over 6800 peer reviews from nine courses.
arXiv Detail & Related papers (2021-08-02T15:45:11Z)
- Exploring Bayesian Deep Learning for Urgent Instructor Intervention Need in MOOC Forums [58.221459787471254]
Massive Open Online Courses (MOOCs) have become a popular choice for e-learning thanks to their great flexibility.
Due to large numbers of learners and their diverse backgrounds, it is taxing to offer real-time support.
With the large volume of posts and high workloads for MOOC instructors, it is unlikely that the instructors can identify all learners requiring intervention.
This paper explores for the first time Bayesian deep learning on learner-based text posts with two methods: Monte Carlo Dropout and Variational Inference.
arXiv Detail & Related papers (2021-04-26T15:12:13Z)
- Linking open-source code commits and MOOC grades to evaluate massive online open peer review [0.0]
We link data from public code repositories on GitHub and course grades for a large massive open online course to study the dynamics of massive-scale peer review.
We find three distinct repeated peer-review submissions and use these to study how grades change in response to changes in code submissions.
Our exploration also leads to an important observation: massive-scale peer-review scores are highly variable, increase on average with repeated submissions, and changes in scores are not closely tied to the code changes that form the basis for the re-submissions.
arXiv Detail & Related papers (2021-04-15T18:27:01Z)
- Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z)
- Are Top School Students More Critical of Their Professors? Mining Comments on RateMyProfessor.com [83.2634062100579]
Student reviews and comments on RateMyProfessor.com reflect realistic learning experiences of students.
Our study proves that student reviews and comments contain crucial information and can serve as essential references for enrollment in courses and universities.
arXiv Detail & Related papers (2021-01-23T20:01:36Z)
- Revealing the Hidden Patterns: A Comparative Study on Profiling Subpopulations of MOOC Students [61.58283466715385]
Massive Open Online Courses (MOOCs) exhibit a remarkable heterogeneity of students.
The advent of complex "big data" from MOOC platforms is a challenging yet rewarding opportunity to deeply understand how students are engaged in MOOCs.
We report on clustering analysis of student activities and comparative analysis on both behavioral patterns and demographical patterns between student subpopulations in the MOOC.
arXiv Detail & Related papers (2020-08-12T10:38:50Z)
- Attentional Graph Convolutional Networks for Knowledge Concept Recommendation in MOOCs in a Heterogeneous View [72.98388321383989]
Massive open online courses (MOOCs) provide a large-scale, open-access learning opportunity for students to grasp knowledge.
To attract students' interest, MOOC providers apply recommendation systems to recommend courses to students.
We propose an end-to-end graph neural network-based approach called Attentional Heterogeneous Graph Convolutional Deep Knowledge Recommender (ACKRec) for knowledge concept recommendation in MOOCs.
arXiv Detail & Related papers (2020-06-23T18:28:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.