Understanding Self-Directed Learning in an Online Laboratory
- URL: http://arxiv.org/abs/2206.02742v1
- Date: Mon, 6 Jun 2022 16:55:50 GMT
- Title: Understanding Self-Directed Learning in an Online Laboratory
- Authors: Sungeun An, Spencer Rugaber, Jennifer Hammock, Ashok K. Goel
- Abstract summary: In this study, we could observe only the modeling behaviors and outcomes; the learning goals and outcomes were unknown.
We used machine learning techniques to analyze the modeling behaviors of 315 learners and 822 conceptual models they generated.
- Score: 6.193838300896449
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe a study on the use of an online laboratory for self-directed
learning by constructing and simulating conceptual models of ecological
systems. In this study, we could observe only the modeling behaviors and
outcomes; the learning goals and outcomes were unknown. We used machine
learning techniques to analyze the modeling behaviors of 315 learners and 822
conceptual models they generated. We derive three main conclusions from the
results. First, learners manifest three types of modeling behaviors:
observation (simulation focused), construction (construction focused), and full
exploration (model construction, evaluation and revision). Second, while
observation was the most common behavior among all learners, construction
without evaluation was more common for less engaged learners and full
exploration occurred mostly for more engaged learners. Third, learners who
explored the full cycle of model construction, evaluation and revision
generated models of higher quality. These modeling behaviors provide insights
into self-directed learning at large.
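The abstract does not name the specific machine learning techniques used. As a hedged illustration only, the sketch below clusters per-learner action counts into three groups with k-means; the feature names, counts, and algorithm choice are assumptions for illustration, not the paper's published pipeline.

```python
# Minimal sketch, assuming k-means over per-learner action counts; the paper
# does not specify its exact technique, so all names here are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-learner counts of logged actions in the online laboratory:
# [model constructions, simulation runs, model revisions]
actions = np.array([
    [12,  1, 0],   # construction focused, little evaluation
    [ 3, 15, 0],   # observation focused (mostly simulation)
    [ 9, 11, 6],   # full exploration: construct, evaluate, revise
    [14,  0, 1],
    [ 2, 18, 1],
    [10, 12, 8],
])

# Standardize so no single action type dominates the distance metric.
features = StandardScaler().fit_transform(actions)

# Partition learners into three clusters, mirroring the three behavior types
# reported in the abstract (observation, construction, full exploration).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(features)

for counts, label in zip(actions, labels):
    print(counts, "-> cluster", label)
```

In practice, cluster labels would be mapped to behavior types by inspecting the cluster centroids and then related to model quality, in line with the abstract's third conclusion.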
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z) - Learning-based Models for Vulnerability Detection: An Extensive Study [3.1317409221921144]
We extensively and comprehensively investigate two types of state-of-the-art learning-based approaches.
We experimentally demonstrate the superiority of sequence-based models and the limited capabilities of graph-based models.
arXiv Detail & Related papers (2024-08-14T13:01:30Z) - Deep Generative Models in Robotics: A Survey on Learning from Multimodal Demonstrations [52.11801730860999]
In recent years, the robot learning community has shown increasing interest in using deep generative models to capture the complexity of large datasets.
We present the different types of models that the community has explored, such as energy-based models, diffusion models, action value maps, or generative adversarial networks.
We also present the different types of applications in which deep generative models have been used, from grasp generation to trajectory generation or cost learning.
arXiv Detail & Related papers (2024-08-08T11:34:31Z) - iNNspector: Visual, Interactive Deep Model Debugging [8.997568393450768]
We propose a conceptual framework structuring the data space of deep learning experiments.
Our framework captures design dimensions and proposes mechanisms to make this data explorable and tractable.
We present the iNNspector system, which enables tracking of deep learning experiments and provides interactive visualizations of the data.
arXiv Detail & Related papers (2024-07-25T12:48:41Z) - RIGL: A Unified Reciprocal Approach for Tracing the Independent and Group Learning Processes [22.379764500005503]
We propose RIGL, a unified Reciprocal model to trace knowledge states at both the individual and group levels.
In this paper, we introduce a time frame-aware reciprocal embedding module to concurrently model both student and group response interactions.
We design a relation-guided temporal attentive network, comprised of dynamic graph modeling coupled with a temporal self-attention mechanism.
arXiv Detail & Related papers (2024-06-18T10:16:18Z) - Enhancing Generative Class Incremental Learning Performance with Model Forgetting Approach [50.36650300087987]
This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing the forgetting mechanism.
We find that integrating the forgetting mechanism significantly enhances the models' performance in acquiring new knowledge.
arXiv Detail & Related papers (2024-03-27T05:10:38Z) - Learning by Self-Explaining [23.420673675343266]
We introduce a novel workflow in the context of image classification, termed Learning by Self-Explaining (LSX).
LSX utilizes aspects of self-refining AI and human-guided explanatory machine learning.
Our results indicate improvements via Learning by Self-Explaining on several levels.
arXiv Detail & Related papers (2023-09-15T13:41:57Z) - Predicting the long-term collective behaviour of fish pairs with deep learning [52.83927369492564]
This study introduces a deep learning model to assess social interactions in the fish species Hemigrammus rhodostomus.
We compare the results of our deep learning approach to experiments and to the results of a state-of-the-art analytical model.
We demonstrate that machine learning models of social interactions can directly compete with their analytical counterparts in subtle experimental observables.
arXiv Detail & Related papers (2023-02-14T05:25:03Z) - Minimal Value-Equivalent Partial Models for Scalable and Robust Planning in Lifelong Reinforcement Learning [56.50123642237106]
Common practice in model-based reinforcement learning is to learn models that model every aspect of the agent's environment.
We argue that such models are not particularly well-suited for performing scalable and robust planning in lifelong reinforcement learning scenarios.
We propose new kinds of models that only model the relevant aspects of the environment, which we call "minimal value-equivalent partial models".
arXiv Detail & Related papers (2023-01-24T16:40:01Z) - Predicting student performance using sequence classification with time-based windows [1.5836913530330787]
We show that accurate predictive models can be built based on sequential patterns derived from students' behavioral data.
We present a methodology for capturing temporal aspects in behavioral data and analyze its influence on the predictive performance of the models.
Our improved sequence classification technique predicts student performance with high accuracy, reaching 90 percent for course-specific models (a generic time-window sketch follows this list).
arXiv Detail & Related papers (2022-08-16T13:46:39Z) - Learning abstract structure for drawing by efficient motor program induction [52.13961975752941]
We develop a naturalistic drawing task to study how humans rapidly acquire structured prior knowledge.
We show that people spontaneously learn abstract drawing procedures that support generalization.
We propose a model of how learners can discover these reusable drawing programs.
arXiv Detail & Related papers (2020-08-08T13:31:14Z)
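The time-based-window methodology in the student-performance entry above is described only at a high level; the sketch below is one generic way to realize it. The event names, window length, and classifier are assumptions, not the cited paper's implementation.

```python
# Generic illustration of time-windowed sequence features for predicting
# student performance; event names, window length, and the classifier are
# assumptions rather than the cited paper's code.
from collections import Counter
from sklearn.linear_model import LogisticRegression

EVENTS = ["view_lecture", "submit_quiz", "post_forum"]  # hypothetical event types
WINDOW_DAYS = 7   # one window per week
N_WINDOWS = 4     # four weekly windows per student

def windowed_counts(log):
    """log: list of (day, event) pairs for one student -> flat feature vector."""
    features = []
    for w in range(N_WINDOWS):
        start, end = w * WINDOW_DAYS, (w + 1) * WINDOW_DAYS
        counts = Counter(event for day, event in log if start <= day < end)
        features.extend(counts.get(event, 0) for event in EVENTS)
    return features

# Toy logs for two students and pass/fail labels.
logs = [
    [(1, "view_lecture"), (2, "submit_quiz"), (9, "view_lecture"), (20, "post_forum")],
    [(3, "view_lecture"), (25, "view_lecture")],
]
labels = [1, 0]

X = [windowed_counts(log) for log in logs]
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```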
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.