Learning Outcomes, Assessment, and Evaluation in Educational Recommender Systems: A Systematic Review
- URL: http://arxiv.org/abs/2407.09500v1
- Date: Wed, 12 Jun 2024 21:53:46 GMT
- Title: Learning Outcomes, Assessment, and Evaluation in Educational Recommender Systems: A Systematic Review
- Authors: Nursultan Askarbekuly, Ivan Luković
- Abstract summary: We analyse how learning is measured and optimized in Educational Recommender Systems (ERS)
Rating-based relevance is the most popular target metric, while fewer than half of the papers optimize learning-based metrics.
Only a third of the papers used outcome-based assessment to measure the pedagogical effect of recommendations.
- Abstract: In this paper, we analyse how learning is measured and optimized in Educational Recommender Systems (ERS). In particular, we examine the target metrics and evaluation methods used in the existing ERS research, with a particular focus on the pedagogical effect of recommendations. While conducting this systematic literature review (SLR), we identified 1395 potentially relevant papers, filtered them through the inclusion and exclusion criteria, and finally selected and analyzed 28 relevant papers. Rating-based relevance is the most popular target metric, while fewer than half of the papers optimize learning-based metrics. Only a third of the papers used outcome-based assessment to measure the pedagogical effect of recommendations, mostly within a formal university course. This indicates a gap in ERS research with respect to assessing the pedagogical effect of recommendations at scale and in informal education settings.
Related papers
- Benchmark for Evaluation and Analysis of Citation Recommendation Models [0.0]
We develop a benchmark specifically designed to analyze and compare citation recommendation models.
This benchmark will evaluate the performance of models on different features of the citation context.
This will enable meaningful comparisons and help identify promising approaches for further research and development in the field.
arXiv Detail & Related papers (2024-12-10T18:01:33Z)
- Revisiting Reciprocal Recommender Systems: Metrics, Formulation, and Method [60.364834418531366]
We propose five new evaluation metrics that comprehensively and accurately assess the performance of RRS.
We formulate RRS from a causal perspective, modeling recommendations as bilateral interventions.
We introduce a reranking strategy to maximize matching outcomes, as measured by the proposed metrics.
arXiv Detail & Related papers (2024-08-19T07:21:02Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [55.33653554387953]
Pattern Analysis and Machine Intelligence (PAMI) has led to numerous literature reviews aimed at collecting fragmented information.
This paper presents a thorough analysis of these literature reviews within the PAMI field.
We try to address three core research questions: (1) What are the prevalent structural and statistical characteristics of PAMI literature reviews; (2) What strategies can researchers employ to efficiently navigate the growing corpus of reviews; and (3) What are the advantages and limitations of AI-generated reviews compared to human-authored ones.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- A Comprehensive Survey of Evaluation Techniques for Recommendation Systems [0.0]
This paper introduces a comprehensive suite of metrics, each tailored to capture a distinct aspect of system performance.
We identify the strengths and limitations of current evaluation practices and highlight the nuanced trade-offs that emerge when optimizing recommendation systems across different metrics.
arXiv Detail & Related papers (2023-12-26T11:57:01Z)
- Tag-Aware Document Representation for Research Paper Recommendation [68.8204255655161]
We propose a hybrid approach that leverages deep semantic representation of research papers based on social tags assigned by users.
The proposed model is effective in recommending research papers even when the rating data is very sparse.
arXiv Detail & Related papers (2022-09-08T09:13:07Z)
- Evaluating the Predictive Performance of Positive-Unlabelled Classifiers: a brief critical review and practical recommendations for improvement [77.34726150561087]
Positive-Unlabelled (PU) learning is a growing area of machine learning.
This paper critically reviews the main PU learning evaluation approaches and the choice of predictive accuracy measures in 51 articles proposing PU classifiers.
arXiv Detail & Related papers (2022-06-06T08:31:49Z)
- Measuring "Why" in Recommender Systems: a Comprehensive Survey on the Evaluation of Explainable Recommendation [87.82664566721917]
This survey is based on more than 100 papers from top-tier conferences like IJCAI, AAAI, TheWebConf, Recsys, UMAP, and IUI.
arXiv Detail & Related papers (2022-02-14T02:58:55Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast the task as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Academics evaluating academics: a methodology to inform the review process on top of open citations [1.911678487931003]
We explore whether citation-based metrics, calculated considering only open citations, provide data that can yield insights into how human peer review is conducted in research assessment exercises.
We propose to use a series of machine learning models to replicate the decisions of the committees of the research assessment exercises.
arXiv Detail & Related papers (2021-06-10T13:09:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.