Combining Automatic Coding and Instructor Input to Generate ENA
Visualizations for Asynchronous Online Discussion
- URL: http://arxiv.org/abs/2308.13549v1
- Date: Tue, 22 Aug 2023 20:42:18 GMT
- Title: Combining Automatic Coding and Instructor Input to Generate ENA
Visualizations for Asynchronous Online Discussion
- Authors: Marcia Moraes, Sadaf Ghaffari, Yanye Luther, and James Folkestad
- Abstract summary: We present an approach that uses Latent Dirichlet Allocation (LDA) and the instructor's keywords to automatically extract codes from a relatively small dataset.
We use the generated codes to build an Epistemic Network Analysis (ENA) model and compare this model with a previous ENA model built by human coders.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Asynchronous online discussions are a fundamental tool for
facilitating social interaction in hybrid and online courses. However,
instructors lack the tools to accomplish the overwhelming task of evaluating
asynchronous online discussion activities. In this paper we present an
approach that uses Latent Dirichlet Allocation (LDA) and the instructor's
keywords to automatically extract codes from a relatively small dataset. We
use the generated codes to build an Epistemic Network Analysis (ENA) model
and compare this model with a previous ENA model built by human coders. The
results show no statistically significant difference between the two models.
We present an analysis of these models and discuss the potential use of ENA
as a visualization to help instructors evaluate asynchronous online
discussions.
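To make the pipeline concrete: the sketch below fits an LDA model to a handful of discussion posts and assigns each post a code when its dominant topic's top words overlap the instructor's keywords. This is a minimal sketch assuming scikit-learn; the posts, keyword lists, and topic-to-code mapping are illustrative, not the paper's actual implementation.

```python
# Minimal sketch of LDA-based automatic coding (not the paper's exact pipeline).
# Assumes scikit-learn; posts, instructor keywords, and sizes are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "Retrieval practice helped me remember the material before the quiz",
    "Spacing my study sessions made the concepts stick much better",
    "I crammed the night before and forgot most of it",
]
# Hypothetical instructor keywords, one set per code.
codebook = {
    "retrieval_practice": {"retrieval", "practice", "quiz", "remember"},
    "spaced_practice": {"spacing", "spaced", "sessions", "distributed"},
}

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(doc_term)  # rows: posts, cols: topic weights

vocab = vectorizer.get_feature_names_out()
top_words = [
    {vocab[i] for i in topic.argsort()[-10:]}  # 10 highest-weight words per topic
    for topic in lda.components_
]

# Map each topic to the code whose keywords it overlaps most, then code each
# post with the code of its dominant topic.
topic_code = [
    max(codebook, key=lambda c: len(codebook[c] & words)) for words in top_words
]
for post, weights in zip(posts, doc_topic):
    print(topic_code[weights.argmax()], "<-", post)
```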
Related papers
- OnDiscuss: An Epistemic Network Analysis Learning Analytics Visualization Tool for Evaluating Asynchronous Online Discussions [0.49998148477760973]
OnDiscuss is a learning analytics visualization tool for instructors that utilizes text mining algorithms and Epistemic Network Analysis (ENA).
Text mining is used to generate an initial codebook for the instructor as well as automatically code the data.
This tool allows instructors to edit their codebook and then dynamically view the resulting ENA networks for the entire class and individual students.
arXiv Detail & Related papers (2024-08-19T21:23:11Z)
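The workflow OnDiscuss describes, an editable keyword codebook used to automatically code posts, can be approximated with simple keyword matching. A minimal sketch, assuming the codebook is a plain dict of keyword sets; OnDiscuss's actual data structures and algorithms are not shown here.

```python
# Minimal sketch of coding posts against an editable keyword codebook,
# producing the binary code matrix that ENA tools typically consume.
# The codebook structure and keywords are assumptions, not OnDiscuss's API.
import re

codebook = {
    "elaboration": {"because", "example", "explain"},
    "testing_effect": {"quiz", "test", "recall"},
}

def code_post(post: str, codebook: dict[str, set[str]]) -> dict[str, int]:
    tokens = set(re.findall(r"[a-z']+", post.lower()))
    return {code: int(bool(tokens & kws)) for code, kws in codebook.items()}

post = "I recall more when I quiz myself, because testing forces retrieval."
print(code_post(post, codebook))
# {'elaboration': 1, 'testing_effect': 1}

# "Editing the codebook" is just updating the dict and re-coding:
codebook["elaboration"].add("forces")
```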
- RAVEN: In-Context Learning with Retrieval-Augmented Encoder-Decoder Language Models [57.12888828853409]
RAVEN is a model that combines retrieval-augmented masked language modeling and prefix language modeling.
Fusion-in-Context Learning enables the model to leverage more in-context examples without requiring additional training.
Our work underscores the potential of retrieval-augmented encoder-decoder language models for in-context learning.
arXiv Detail & Related papers (2023-08-15T17:59:18Z)
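RAVEN's training objectives are not reproduced here, but the combination of retrieval and in-context examples can be sketched as prompt assembly. A toy sketch with a bag-of-words retriever; the retriever, corpus, and prompt format are assumptions, not RAVEN's implementation.

```python
# Toy sketch of retrieval-augmented in-context prompting (not RAVEN itself):
# retrieve passages for the query, prepend a few worked examples, and hand
# the combined prompt to any language model.
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    q = set(query.lower().split())
    return sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))[:k]

corpus = [
    "ENA models discourse as networks of co-occurring codes.",
    "LDA discovers latent topics in a document collection.",
    "Koopman operators linearize nonlinear dynamical systems.",
]
examples = [("What does LDA model?", "Latent topics.")]  # in-context examples

query = "What does ENA model?"
prompt = "\n".join(
    [f"Context: {p}" for p in retrieve(query, corpus)]
    + [f"Q: {q}\nA: {a}" for q, a in examples]
    + [f"Q: {query}\nA:"]
)
print(prompt)  # feed this to an encoder-decoder LM of your choice
```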
- DinoSR: Self-Distillation and Online Clustering for Self-supervised Speech Representation Learning [140.96990096377127]
We introduce self-distillation and online clustering for self-supervised speech representation learning (DinoSR).
DinoSR first extracts contextualized embeddings from the input audio with a teacher network, then runs an online clustering system on the embeddings to yield a machine-discovered phone inventory, and finally uses the discretized tokens to guide a student network.
We show that DinoSR surpasses previous state-of-the-art performance in several downstream tasks, and provide a detailed analysis of the model and the learned discrete units.
arXiv Detail & Related papers (2023-05-17T07:23:46Z)
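The loop DinoSR describes (teacher embeddings, online clustering, discrete targets for the student) can be caricatured in a few lines of numpy. In this minimal sketch a streaming k-means update stands in for the paper's online clustering, and the cluster IDs are the targets a student network would be trained to predict; all sizes are illustrative.

```python
# Minimal sketch of DinoSR's discretization step (not the actual model):
# cluster teacher embeddings online and use cluster IDs as student targets.
import numpy as np

rng = np.random.default_rng(0)
centroids = rng.normal(size=(8, 16))   # 8 "phone-like" units, dim-16 embeddings
counts = np.ones(8)

def assign_and_update(frame_emb: np.ndarray) -> int:
    """Assign a teacher frame embedding to its nearest centroid and nudge
    that centroid toward the frame (streaming k-means update)."""
    k = int(np.argmin(np.linalg.norm(centroids - frame_emb, axis=1)))
    counts[k] += 1
    centroids[k] += (frame_emb - centroids[k]) / counts[k]
    return k  # discrete token the student learns to predict

teacher_frames = rng.normal(size=(100, 16))  # stand-in for teacher outputs
tokens = [assign_and_update(f) for f in teacher_frames]
print(tokens[:10])  # machine-discovered unit sequence guiding the student
```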
- EmbedDistill: A Geometric Knowledge Distillation for Information Retrieval [83.79667141681418]
Large neural models (such as Transformers) achieve state-of-the-art performance for information retrieval (IR).
We propose a novel distillation approach that leverages the relative geometry among queries and documents learned by the large teacher model.
We show that our approach successfully distills from both dual-encoder (DE) and cross-encoder (CE) teacher models to 1/10th size asymmetric students that can retain 95-97% of the teacher performance.
arXiv Detail & Related papers (2023-01-27T22:04:37Z)
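The core idea, matching the teacher's query-document geometry rather than only its scores, can be written as a small loss function. A hedged PyTorch sketch; EmbedDistill's exact losses differ, and the shapes and weights here are illustrative.

```python
# Sketch of geometry-aware distillation for dual encoders (illustrative only):
# the student matches teacher query-document similarities *and* stays close
# to the teacher's embedding geometry, not just its final scores.
import torch
import torch.nn.functional as F

def distill_loss(sq, sd, tq, td, alpha=0.5):
    """sq/sd: student query/doc embeddings, tq/td: teacher's. [batch, dim]
    Assumes matching dims; an asymmetric student would project first."""
    score_student = sq @ sd.T            # student query-doc similarity matrix
    score_teacher = tq @ td.T            # teacher query-doc similarity matrix
    score_match = F.mse_loss(score_student, score_teacher)
    embed_match = F.mse_loss(sq, tq) + F.mse_loss(sd, td)  # geometric term
    return score_match + alpha * embed_match

sq = torch.randn(4, 32, requires_grad=True)
sd = torch.randn(4, 32, requires_grad=True)
tq, td = torch.randn(4, 32), torch.randn(4, 32)
loss = distill_loss(sq, sd, tq, td)
loss.backward()  # gradients flow to the student embeddings only
```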
- Leveraging Pre-Trained Language Models to Streamline Natural Language Interaction for Self-Tracking [25.28975864365579]
We propose a novel NLP task for self-tracking that extracts close- and open-ended information from a retrospective activity log.
The framework augments the prompt with synthetic samples, turning the task into 10-shot learning and addressing the cold-start problem of bootstrapping a new tracking topic.
arXiv Detail & Related papers (2022-05-31T01:58:04Z)
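The cold-start remedy, padding the prompt with synthetic samples until it carries ten shots, amounts to prompt assembly. A minimal sketch; the log format, field names, and synthetic examples are invented for illustration and are not the paper's schema.

```python
# Sketch of augmenting a few real examples with synthetic ones so the prompt
# always carries 10 shots (illustrative; not the paper's actual templates).
real = [("Ran 5km along the river, felt great",
         {"activity": "running", "distance": "5km"})]

def synthesize(n: int) -> list[tuple[str, dict]]:
    acts = ["cycling", "swimming", "yoga", "hiking"]
    return [(f"Did {a} for {10 * (i + 1)} minutes",
             {"activity": a, "duration": f"{10 * (i + 1)}min"})
            for i, a in enumerate(acts * 3)][:n]

shots = real + synthesize(10 - len(real))
prompt = "\n\n".join(f"Log: {t}\nExtract: {y}" for t, y in shots)
prompt += "\n\nLog: Evening walk around the block\nExtract:"
print(prompt)  # hand to a pre-trained LM for 10-shot extraction
```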
- Learning Dual Dynamic Representations on Time-Sliced User-Item Interaction Graphs for Sequential Recommendation [62.30552176649873]
We devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe).
To better model the user-item interactions for characterizing the dynamics from both sides, the proposed model builds a global user-item interaction graph for each time slice.
To enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices.
arXiv Detail & Related papers (2021-09-24T07:44:27Z)
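The time-sliced construction can be sketched directly: bucket interactions by slice and keep one user-item edge list per slice; the auxiliary task then predicts slice t+1 from slice t. A minimal sketch with plain dicts; the data and slice width are invented, and a real implementation would use a graph library and learned embeddings.

```python
# Sketch of building one user-item interaction graph per time slice
# (data and slice width are illustrative; DRL-SRe uses learned GNN embeddings).
from collections import defaultdict

interactions = [  # (user, item, timestamp)
    ("u1", "i1", 10), ("u2", "i1", 12), ("u1", "i2", 25), ("u2", "i3", 31),
]
SLICE = 10  # seconds per time slice

graphs: dict[int, list[tuple[str, str]]] = defaultdict(list)
for user, item, ts in interactions:
    graphs[ts // SLICE].append((user, item))

for t in sorted(graphs):
    print(f"slice {t}: {graphs[t]}")
# slice 1: [('u1', 'i1'), ('u2', 'i1')]
# slice 2: [('u1', 'i2')]
# slice 3: [('u2', 'i3')]

# Auxiliary temporal task: given the graph at slice t, predict edges at t+1.
```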
- Layer-wise Analysis of a Self-supervised Speech Representation Model [26.727775920272205]
Self-supervised learning approaches have been successful for pre-training speech representation models.
However, little is known about the type or extent of information encoded in the pre-trained representations themselves.
arXiv Detail & Related papers (2021-07-10T02:13:25Z)
- Bootstrapping Relation Extractors using Syntactic Search by Examples [47.11932446745022]
We propose a process for bootstrapping training datasets which can be performed quickly by non-NLP-experts.
We take advantage of search engines over syntactic graphs, which expose a friendly by-example syntax.
We show that the resulting models are competitive with models trained on manually annotated data and on data obtained from distant supervision.
arXiv Detail & Related papers (2021-02-09T18:17:59Z)
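The by-example idea (one annotated sentence seeds a search for syntactically similar ones, which become silver training data) can be approximated with an ordinary dependency parser. A hedged sketch using spaCy; the subject-verb-object pattern with a fixed verb lemma is a simplification, not the paper's syntactic-graph query language.

```python
# Sketch of by-example syntactic search for bootstrapping relation data.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def match_like_example(sent: str, verb_lemma: str):
    """Return (subject, object) pairs whose verb matches the example's lemma."""
    doc = nlp(sent)
    pairs = []
    for tok in doc:
        if tok.pos_ == "VERB" and tok.lemma_ == verb_lemma:
            subj = [c for c in tok.children if c.dep_ == "nsubj"]
            obj = [c for c in tok.children if c.dep_ in ("dobj", "obj")]
            if subj and obj:
                pairs.append((subj[0].text, obj[0].text))
    return pairs

# Example sentence "Pfizer acquired Wyeth" -> search a corpus for 'acquire'.
corpus = ["Google acquired YouTube in 2006.", "The senate passed the bill."]
for sent in corpus:
    print(sent, "->", match_like_example(sent, "acquire"))
```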
- Human Trajectory Forecasting in Crowds: A Deep Learning Perspective [89.4600982169]
We present an in-depth analysis of existing deep learning-based methods for modelling social interactions.
We propose two knowledge-based data-driven methods to effectively capture these social interactions.
We develop TrajNet++, a large-scale interaction-centric benchmark, a significant yet missing component in the field of human trajectory forecasting.
arXiv Detail & Related papers (2020-07-07T17:19:56Z)
- Forecasting Sequential Data using Consistent Koopman Autoencoders [52.209416711500005]
A new class of physics-based methods related to Koopman theory has been introduced, offering an alternative for processing nonlinear dynamical systems.
We propose a novel Consistent Koopman Autoencoder model which, unlike the majority of existing work, leverages the forward and backward dynamics.
Key to our approach is a new analysis which explores the interplay between consistent dynamics and their associated Koopman operators.
arXiv Detail & Related papers (2020-03-04T18:24:30Z)
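The consistency idea, learning forward and backward linear operators whose composition stays near the identity, fits in a short PyTorch sketch; the dimensions, architecture, and simple penalty below are illustrative rather than the paper's exact formulation.

```python
# Sketch of a consistent Koopman autoencoder (illustrative, not the paper's code):
# encode states into a latent space where dynamics are linear, with forward and
# backward operators C and D penalized so that D @ C stays near the identity.
import torch
import torch.nn as nn

class ConsistentKoopmanAE(nn.Module):
    def __init__(self, state_dim=8, latent_dim=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(state_dim, 32), nn.Tanh(),
                                 nn.Linear(32, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 32), nn.Tanh(),
                                 nn.Linear(32, state_dim))
        self.C = nn.Linear(latent_dim, latent_dim, bias=False)  # forward operator
        self.D = nn.Linear(latent_dim, latent_dim, bias=False)  # backward operator

    def forward(self, x_t):
        z = self.enc(x_t)
        # Predict the next and the previous state from the same latent code.
        return self.dec(self.C(z)), self.dec(self.D(z))

model = ConsistentKoopmanAE()
x_prev, x_t, x_next = torch.randn(3, 16, 8).unbind(0)
pred_next, pred_prev = model(x_t)
consistency = ((model.D.weight @ model.C.weight
                - torch.eye(4)) ** 2).mean()  # D after C should approximate identity
loss = (nn.functional.mse_loss(pred_next, x_next)
        + nn.functional.mse_loss(pred_prev, x_prev)
        + 0.1 * consistency)
loss.backward()
```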