Learning Outcomes, Assessment, and Evaluation in Educational Recommender Systems: A Systematic Review
- URL: http://arxiv.org/abs/2407.09500v1
- Date: Wed, 12 Jun 2024 21:53:46 GMT
- Title: Learning Outcomes, Assessment, and Evaluation in Educational Recommender Systems: A Systematic Review
- Authors: Nursultan Askarbekuly, Ivan Luković
- Abstract summary: We analyse how learning is measured and optimized in Educational Recommender Systems (ERS)
Rating-based relevance is the most popular target metric, while less than half of the papers optimize learning-based metrics.
Only a third of the papers used outcome-based assessment to measure the pedagogical effect of recommendations.
- Abstract: In this paper, we analyse how learning is measured and optimized in Educational Recommender Systems (ERS). In particular, we examine the target metrics and evaluation methods used in the existing ERS research, with a particular focus on the pedagogical effect of recommendations. While conducting this systematic literature review (SLR), we identified 1395 potentially relevant papers, then filtered them through the inclusion and exclusion criteria, and finally selected and analyzed 28 relevant papers. Rating-based relevance is the most popular target metric, while less than half of the papers optimize learning-based metrics. Only a third of the papers used outcome-based assessment to measure the pedagogical effect of recommendations, mostly within a formal university course. This indicates a gap in ERS research with respect to assessing the pedagogical effect of recommendations at scale and in informal education settings.
Related papers
- Revisiting Reciprocal Recommender Systems: Metrics, Formulation, and Method [60.364834418531366]
We propose five new evaluation metrics that comprehensively and accurately assess the performance of RRS.
We reformulate RRS from a causal perspective, framing recommendations as bilateral interventions.
We introduce a reranking strategy to maximize matching outcomes, as measured by the proposed metrics.
arXiv Detail & Related papers (2024-08-19T07:21:02Z) - A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - A Comprehensive Survey of Evaluation Techniques for Recommendation Systems [0.0]
This paper introduces a comprehensive suite of metrics, each tailored to capture a distinct aspect of system performance.
We identify the strengths and limitations of current evaluation practices and highlight the nuanced trade-offs that emerge when optimizing recommendation systems across different metrics.
arXiv Detail & Related papers (2023-12-26T11:57:01Z) - Impression-Aware Recommender Systems [57.38537491535016]
Novel data sources bring new opportunities to improve the quality of recommender systems.
Researchers may use impressions to refine user preferences and overcome the current limitations in recommender systems research.
We present a systematic literature review on recommender systems using impressions.
arXiv Detail & Related papers (2023-08-15T16:16:02Z) - Evaluating the Predictive Performance of Positive-Unlabelled Classifiers: a brief critical review and practical recommendations for improvement [77.34726150561087]
Positive-Unlabelled (PU) learning is a growing area of machine learning.
This paper critically reviews the main PU learning evaluation approaches and the choice of predictive accuracy measures in 51 articles proposing PU classifiers.
arXiv Detail & Related papers (2022-06-06T08:31:49Z) - Measuring "Why" in Recommender Systems: a Comprehensive Survey on the Evaluation of Explainable Recommendation [87.82664566721917]
This survey is based on more than 100 papers from top-tier conferences such as IJCAI, AAAI, TheWebConf, RecSys, UMAP, and IUI.
arXiv Detail & Related papers (2022-02-14T02:58:55Z) - Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast it as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z) - Academics evaluating academics: a methodology to inform the review process on top of open citations [1.911678487931003]
We explore whether citation-based metrics, calculated using only open citations, can yield insights into how human peer review is conducted in research assessment exercises.
We propose to use a series of machine learning models to replicate the decisions of the committees of the research assessment exercises.
arXiv Detail & Related papers (2021-06-10T13:09:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.