Have Learning Analytics Dashboards Lived Up to the Hype? A Systematic
Review of Impact on Students' Achievement, Motivation, Participation and
Attitude
- URL: http://arxiv.org/abs/2312.15042v1
- Date: Fri, 22 Dec 2023 20:12:52 GMT
- Authors: Rogers Kaliisa, Kamila Misiejuk, Sonsoles López-Pernas, Mohammad
Khalil, Mohammed Saqr
- Abstract summary: There is no evidence to support the conclusion that learning analytics dashboards (LADs) have lived up to the promise of improving academic achievement.
LADs showed a relatively substantial impact on student participation.
To advance the research line for LADs, researchers should use rigorous assessment methods and establish clear standards for evaluating learning constructs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While learning analytics dashboards (LADs) are the most common form
of LA intervention, there is limited evidence regarding their impact on students'
learning outcomes. This systematic review synthesizes the findings of 38
research studies to investigate the impact of LADs on students' learning
outcomes, encompassing achievement, participation, motivation, and attitudes.
As it currently stands, there is no evidence to support the conclusion that LADs
have lived up to the promise of improving academic achievement. Most studies
reported negligible or small effects, with limited evidence from well-powered
controlled experiments. Many studies merely compared users and non-users of
LADs, confounding the dashboard effect with student engagement levels.
Similarly, the impact of LADs on motivation and attitudes appeared modest, with
only a few exceptions demonstrating significant effects. Small sample sizes in
these studies highlight the need for larger-scale investigations to validate
these findings. Notably, LADs showed a relatively substantial impact on student
participation. Several studies reported medium to large effect sizes,
suggesting that LADs can promote engagement and interaction in online learning
environments. However, methodological shortcomings, such as reliance on
traditional evaluation methods, self-selection bias, the assumption that access
equates to usage, and a lack of standardized assessment tools, emerged as
recurring issues. To advance the research line for LADs, researchers should use
rigorous assessment methods and establish clear standards for evaluating
learning constructs. Such efforts will advance our understanding of the
potential of LADs to enhance learning outcomes and provide valuable insights
for educators and researchers alike.
Related papers
- PICA: A Data-driven Synthesis of Peer Instruction and Continuous Assessment (arXiv, 2024-07-24)
  This work combines peer instruction (PI) and continuous assessment (CA) in a deliberate and novel way, pairing students for a PI session in which they collaborate on a CA task. The motivation for this data-driven collaborative learning is to improve student learning, communication, and engagement.
- Bayesian Causal Forests for Longitudinal Data: Assessing the Impact of Part-Time Work on Growth in High School Mathematics Achievement (arXiv, 2024-07-16)
  The authors introduce a longitudinal extension of Bayesian Causal Forests that flexibly identifies both individual growth in mathematical ability and the effects of participation in part-time work. Results reveal a negative impact of part-time work for most students, but hint at potential benefits for students with an initially low sense of school belonging.
- Evaluating Interventional Reasoning Capabilities of Large Language Models (arXiv, 2024-04-08)
  Empirical analyses evaluate whether large language models (LLMs) can accurately update their knowledge of a data-generating process in response to an intervention. Benchmarks spanning diverse causal graphs (e.g., confounding, mediation) and variable types enable a study of intervention-based reasoning.
- Deep Active Learning: A Reality Check (arXiv, 2024-03-21)
  No single-model method decisively outperforms entropy-based active learning. The evaluation extends to other tasks, exploring the effectiveness of active learning in combination with semi-supervised learning.
- Evaluating and Optimizing Educational Content with Large Language Model Judgments (arXiv, 2024-03-05)
  Language models (LMs) are used as educational experts to assess the impact of various instructions on learning outcomes. In the proposed instruction optimization approach, one LM generates instructional materials using the judgments of another LM as a reward function. Human teachers' evaluations of the LM-generated worksheets show significant alignment between the LM judgments and teacher preferences.
- Causal Discovery and Counterfactual Explanations for Personalized Student Learning (arXiv, 2023-09-18)
  The study's main contribution is using causal discovery to identify causal predictors of student performance, revealing relationships such as the influence of earlier test grades and mathematical ability on final performance. A major challenge remains: the real-time implementation and validation of counterfactual recommendations.
- Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting (arXiv, 2023-09-13)
  Sociodemographic prompting is a technique that steers the output of prompt-based models towards answers that humans with specific sociodemographic profiles would give. Sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
- Susceptibility to Influence of Large Language Models (arXiv, 2023-03-10)
  Two studies tested the hypothesis that a large language model (LLM) can be used to model psychological change following exposure to influential input. The first study tested a generic mode of influence, the Illusory Truth Effect (ITE); the second concerned a specific mode of influence, populist framing of news to increase its persuasion and political mobilization.
- The Challenges of Assessing and Evaluating the Students at Distance (arXiv, 2021-01-30)
  The COVID-19 pandemic strongly affected higher education institutions through the closure of classroom teaching activities. This short essay explores the challenges posed to Portuguese higher education institutions and analyzes the challenges posed to evaluation models.
- Social Engagement versus Learning Engagement -- An Exploratory Study of FutureLearn Learners (arXiv, 2020-08-11)
  Massive Open Online Courses (MOOCs) continue to see increasing enrolment, but only a small percentage of enrolees complete them. This study examines how learners interact with peers, along with their study progression, on the less explored FutureLearn platform, which employs a social constructivist approach and promotes collaborative learning.
- Sentiment Analysis Based on Deep Learning: A Comparative Study (arXiv, 2020-06-05)
  The study of public opinion can provide valuable information, but the efficiency and accuracy of sentiment analysis are hindered by challenges in natural language processing. This paper reviews the latest studies that employ deep learning to solve sentiment analysis problems.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.