FATE in MMLA: A Student-Centred Exploration of Fairness, Accountability,
Transparency, and Ethics in Multimodal Learning Analytics
- URL: http://arxiv.org/abs/2402.19071v1
- Date: Thu, 29 Feb 2024 11:52:06 GMT
- Authors: Yueqiao Jin, Vanessa Echeverria, Lixiang Yan, Linxuan Zhao, Riordan
Alfredo, Yi-Shan Tsai, Dragan Gašević, Roberto Martinez-Maldonado
- Abstract summary: This study assessed students' perceived fairness, accountability, transparency, and ethics (FATE) with MMLA visualisations.
Findings highlighted the significance of accurate and comprehensive data representation to ensure visualisation fairness.
Students also emphasised the importance of ethical considerations, highlighting a pressing need for the LA and MMLA community to actively investigate and address FATE issues.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multimodal Learning Analytics (MMLA) integrates novel sensing technologies
and artificial intelligence algorithms, providing opportunities to enhance
student reflection during complex, collaborative learning experiences. Although
recent advancements in MMLA have shown its capability to generate insights into
diverse learning behaviours across various learning settings, little research
has been conducted to evaluate these systems in authentic learning contexts,
particularly regarding students' perceived fairness, accountability,
transparency, and ethics (FATE). Understanding these perceptions is essential
to using MMLA effectively without introducing ethical complications or
negatively affecting how students learn. This study aimed to address this gap
by assessing the FATE of MMLA in an authentic, collaborative learning context.
We conducted semi-structured interviews with 14 undergraduate students who used
MMLA visualisations for post-activity reflection. The findings highlighted the
significance of accurate and comprehensive data representation to ensure
visualisation fairness, the need for different levels of data access to foster
accountability, the imperative of measuring and cultivating transparency with
students, and the necessity of transforming informed consent from dichotomous
to continuous and measurable scales. While students value the benefits of MMLA,
they also emphasise the importance of ethical considerations, highlighting a
pressing need for the LA and MMLA community to investigate and address FATE
issues actively.
Related papers
- Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset [94.13848736705575]
We introduce Facial Identity Unlearning Benchmark (FIUBench), a novel VLM unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms.
We apply a two-stage evaluation pipeline that is designed to precisely control the sources of information and their exposure levels.
Through the evaluation of four baseline VLM unlearning algorithms within FIUBench, we find that all methods remain limited in their unlearning performance.
arXiv Detail & Related papers (2024-11-05T23:26:10Z) - Exploring Knowledge Tracing in Tutor-Student Dialogues [53.52699766206808]
We present a first attempt at performing knowledge tracing (KT) in tutor-student dialogues.
We propose methods to identify the knowledge components/skills involved in each dialogue turn.
We then apply a range of KT methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
arXiv Detail & Related papers (2024-09-24T22:31:39Z) - Exploring Engagement and Perceived Learning Outcomes in an Immersive Flipped Learning Context [0.195804735329484]
The aim of this study was to explore the benefits and challenges of the immersive flipped learning approach in relation to students' online engagement and perceived learning outcomes.
The study revealed high levels of student engagement and perceived learning outcomes, although it also identified areas needing improvement.
The findings of this study can serve as a valuable resource for educators seeking to design engaging and effective remote learning experiences.
arXiv Detail & Related papers (2024-09-19T11:38:48Z) - I don't trust you (anymore)! -- The effect of students' LLM use on Lecturer-Student-Trust in Higher Education [0.0]
The availability of Large Language Models (LLMs) in platforms like OpenAI's ChatGPT has led to their rapid adoption among university students.
This study addresses the research question: How does the use of LLMs by students impact Informational and Procedural Justice, influencing Team Trust and Expected Team Performance?
Our findings indicate that lecturers are less concerned about the fairness of LLM use per se but are more focused on the transparency of student utilization.
arXiv Detail & Related papers (2024-06-21T05:35:57Z) - Enhancing Trust in LLMs: Algorithms for Comparing and Interpreting LLMs [1.0878040851638]
This paper surveys evaluation techniques to enhance the trustworthiness and understanding of Large Language Models (LLMs).
Key evaluation metrics include Perplexity Measurement, NLP metrics (BLEU, ROUGE, METEOR, BERTScore, GLEU, Word Error Rate, Character Error Rate), Zero-Shot and Few-Shot Learning Performance, Transfer Learning Evaluation, Adversarial Testing, and Fairness and Bias Evaluation.
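Of the metrics listed above, perplexity is the most self-contained: it is the exponential of the negative mean per-token log-likelihood. As a generic illustration (not code from the surveyed paper), it can be computed like this:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp(-mean token log-likelihood).
    Lower values mean the model is less 'surprised' by the text."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# A model assigning each of 4 tokens probability 0.25 has perplexity 4.
lp = [math.log(0.25)] * 4
print(perplexity(lp))  # ≈ 4.0
```

The other listed metrics (BLEU, ROUGE, BERTScore, Word Error Rate) compare model output against references and are typically computed with dedicated evaluation libraries rather than by hand.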
arXiv Detail & Related papers (2024-06-04T03:54:53Z) - Student Data Paradox and Curious Case of Single Student-Tutor Model: Regressive Side Effects of Training LLMs for Personalized Learning [25.90420385230675]
The pursuit of personalized education has led to the integration of Large Language Models (LLMs) in developing intelligent tutoring systems.
Our research uncovers a fundamental challenge in this approach: the "Student Data Paradox".
This paradox emerges when LLMs, trained on student data to understand learner behavior, inadvertently compromise their own factual knowledge and reasoning abilities.
arXiv Detail & Related papers (2024-04-23T15:57:55Z) - C-ICL: Contrastive In-context Learning for Information Extraction [54.39470114243744]
c-ICL is a novel few-shot technique that leverages both correct and incorrect sample constructions to create in-context learning demonstrations.
Our experiments on various datasets indicate that c-ICL outperforms previous few-shot in-context learning methods.
arXiv Detail & Related papers (2024-02-17T11:28:08Z) - Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z) - Taking the Next Step with Generative Artificial Intelligence: The Transformative Role of Multimodal Large Language Models in Science Education [13.87944568193996]
Multimodal Large Language Models (MLLMs) are capable of processing multimodal data including text, sound, and visual inputs.
This paper explores the transformative role of MLLMs in central aspects of science education by presenting exemplary innovative learning scenarios.
arXiv Detail & Related papers (2024-01-01T18:11:43Z) - MAML is a Noisy Contrastive Learner [72.04430033118426]
Model-agnostic meta-learning (MAML) is one of the most popular and widely-adopted meta-learning algorithms nowadays.
We provide a new perspective to the working mechanism of MAML and discover that: MAML is analogous to a meta-learner using a supervised contrastive objective function.
We propose a simple but effective technique, zeroing trick, to alleviate such interference.
arXiv Detail & Related papers (2021-06-29T12:52:26Z) - Which Mutual-Information Representation Learning Objectives are
Sufficient for Control? [80.2534918595143]
Mutual information provides an appealing formalism for learning representations of data.
This paper formalizes the sufficiency of a state representation for learning and representing the optimal policy.
Surprisingly, we find that two of these objectives can yield insufficient representations given mild and common assumptions on the structure of the MDP.
arXiv Detail & Related papers (2021-06-14T10:12:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.