Making Sense of Moodle Log Data
- URL: http://arxiv.org/abs/2106.11071v4
- Date: Tue, 14 Jun 2022 19:55:22 GMT
- Title: Making Sense of Moodle Log Data
- Authors: Daniela Rotelli, Anna Monreale
- Abstract summary: The risk of training machine learning algorithms on biased datasets is ever present.
This paper tries to focus on these issues showing some examples of learning log data extracted from Moodle.
- Score: 2.66512000865131
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Research is constantly engaged in finding more productive and powerful ways
to support quality learning and teaching. However, although researchers and
data scientists try to analyse educational data most transparently and
responsibly, the risk of training machine learning algorithms on biased
datasets is ever present and may lead to misinterpretations of student
behaviour. This can happen when there is only a partial understanding of how
learning log data are generated. Moreover, the pursuit of an ever friendlier
user experience moves more and more Learning Management System functionality
from the server to the client, which tends to reduce meaningful logging as a
side effect. This paper focuses on these issues by showing examples of
learning log data extracted from Moodle and the possible misinterpretations
they conceal, with the aim of opening a debate on data understanding and data
knowledge loss.
Related papers
- Imitation Learning Inputting Image Feature to Each Layer of Neural Network [1.6574413179773757]
Imitation learning enables robots to learn and replicate human behavior from training data.
Recent advances in machine learning enable end-to-end learning approaches that directly process high-dimensional observation data, such as images.
This paper presents a useful method to address this challenge, which amplifies the influence of data with a relatively low correlation to the output.
arXiv Detail & Related papers (2024-01-18T02:44:18Z)
- Robust Machine Learning by Transforming and Augmenting Imperfect Training Data [6.928276018602774]
This thesis explores several data sensitivities of modern machine learning.
We first discuss how to prevent ML from codifying prior human discrimination measured in the training data.
We then discuss the problem of learning from data containing spurious features, which provide predictive fidelity during training but are unreliable upon deployment.
arXiv Detail & Related papers (2023-12-19T20:49:28Z) - GraphGuard: Detecting and Counteracting Training Data Misuse in Graph
Neural Networks [69.97213941893351]
The emergence of Graph Neural Networks (GNNs) in graph data analysis has raised critical concerns about data misuse during model training.
Existing methodologies address either data misuse detection or mitigation, and are primarily designed for local GNN models.
This paper introduces a pioneering approach called GraphGuard, to tackle these challenges.
arXiv Detail & Related papers (2023-12-13T02:59:37Z) - Privacy-Preserving Graph Machine Learning from Data to Computation: A
Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z)
- Reinforcement Learning from Passive Data via Latent Intentions [86.4969514480008]
We show that passive data can still be used to learn features that accelerate downstream RL.
Our approach learns from passive data by modeling intentions.
Our experiments demonstrate the ability to learn from many forms of passive data, including cross-embodiment video data and YouTube videos.
arXiv Detail & Related papers (2023-04-10T17:59:05Z)
- Learning from Few Examples: A Summary of Approaches to Few-Shot Learning [3.6930948691311016]
Few-Shot Learning refers to the problem of learning the underlying pattern in the data just from a few training samples.
Deep learning solutions suffer from data hunger and extensively high computation time and resources.
Few-shot learning, which can drastically reduce the turnaround time of building machine learning applications, emerges as a low-cost solution.
arXiv Detail & Related papers (2022-03-07T23:15:21Z)
- Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z)
- A Reflection on Learning from Data: Epistemology Issues and Limitations [1.8047694351309205]
This paper reflects on some issues and some limitations of the knowledge discovered in data.
The paper sheds some light on the shortcomings of using generic mathematical theories to describe the process.
It further highlights the need for theories specialized in learning from data.
arXiv Detail & Related papers (2021-07-28T11:05:34Z)
- Laplacian Denoising Autoencoder [114.21219514831343]
We propose to learn data representations with a novel type of denoising autoencoder.
The noisy input data is generated by corrupting latent clean data in the gradient domain.
Experiments on several visual benchmarks demonstrate that better representations can be learned with the proposed approach.
arXiv Detail & Related papers (2020-03-30T16:52:39Z)
- Mining Implicit Entity Preference from User-Item Interaction Data for Knowledge Graph Completion via Adversarial Learning [82.46332224556257]
We propose a novel adversarial learning approach by leveraging user interaction data for the Knowledge Graph Completion task.
Our generator is isolated from user interaction data, and serves to improve the performance of the discriminator.
To discover implicit entity preferences of users, we design an elaborate collaborative learning algorithm based on graph neural networks.
arXiv Detail & Related papers (2020-03-28T05:47:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.