BloomNet: A Robust Transformer based model for Bloom's Learning Outcome Classification
- URL: http://arxiv.org/abs/2108.07249v1
- Date: Mon, 16 Aug 2021 17:31:44 GMT
- Title: BloomNet: A Robust Transformer based model for Bloom's Learning Outcome Classification
- Authors: Abdul Waheed, Muskan Goyal, Nimisha Mittal, Deepak Gupta, Ashish
Khanna, Moolchand Sharma
- Abstract summary: Bloom's taxonomy is a paradigm for categorizing educational learning objectives into three learning levels: cognitive, affective, and psychomotor.
We propose a transformer-based model named BloomNet that captures linguistic as well as semantic information to classify course learning outcomes (CLOs).
- Score: 2.8014992300800103
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bloom's taxonomy is a common paradigm for categorizing educational learning
objectives into three learning levels: cognitive, affective, and psychomotor.
For the optimization of educational programs, it is crucial to design course
learning outcomes (CLOs) according to the different cognitive levels of Bloom's
taxonomy. Usually, institution administrators manually complete the tedious
work of mapping CLOs and examination questions to Bloom's taxonomy levels. To
address this issue, we propose a transformer-based model named BloomNet that
captures linguistic as well as semantic information to classify course learning
outcomes (CLOs). We compare BloomNet against a diverse set of basic and strong
baselines and observe that it performs better than all of them. Further, we
test the generalization capability of BloomNet by evaluating it on
distributions it does not encounter during training and observe that it is less
susceptible to distribution shift than the other models considered. We support
our findings with extensive result analysis. In an ablation study, we observe
that explicitly encapsulating linguistic information along with semantic
information improves the model's IID (independent and identically distributed)
performance as well as its OOD (out-of-distribution) generalization capability.
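The abstract describes the approach only at a high level. The sketch below is a minimal illustration, not the authors' released code, of how a transformer encoder could be fine-tuned to classify a CLO into Bloom's cognitive levels while fusing the pooled semantic embedding with hand-crafted linguistic features. The encoder name, label ordering, and the number and choice of linguistic features are assumptions made for illustration.

```python
# Illustrative sketch (assumptions: BERT encoder, six cognitive levels,
# a fixed-size vector of hand-crafted linguistic features).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

class CLOClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-uncased", num_linguistic_feats=17):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        # Concatenate the pooled semantic representation with linguistic
        # features, then classify into the six Bloom levels.
        self.classifier = nn.Linear(hidden + num_linguistic_feats, len(BLOOM_LEVELS))

    def forward(self, input_ids, attention_mask, linguistic_feats):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]               # [CLS] token embedding
        fused = torch.cat([cls, linguistic_feats], dim=-1)
        return self.classifier(fused)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = CLOClassifier()
clo = "Students will be able to design a relational database schema."
enc = tokenizer(clo, return_tensors="pt", truncation=True)
feats = torch.zeros(1, 17)                              # placeholder linguistic features
logits = model(enc["input_ids"], enc["attention_mask"], feats)
print(BLOOM_LEVELS[logits.argmax(dim=-1).item()])
```

The concrete linguistic features (e.g., POS-tag counts), the fusion strategy, and the training setup used in BloomNet are not specified in the abstract; the placeholders above only show where such information would enter the model.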
Related papers
- Tree-based Models for Vertical Federated Learning: A Survey [71.7819045050963]
Tree-based models have achieved great success in a wide range of real-world applications due to their effectiveness, robustness, and interpretability.
We conduct a series of experiments to provide empirical observations on the differences and advances of different types of tree-based models.
arXiv Detail & Related papers (2025-04-03T05:16:09Z)
- SPaR: Self-Play with Tree-Search Refinement to Improve Instruction-Following in Large Language Models [88.29990536278167]
We introduce SPaR, a self-play framework integrating tree-search self-refinement to yield valid and comparable preference pairs.
Our experiments show that a LLaMA3-8B model, trained over three iterations guided by SPaR, surpasses GPT-4-Turbo on the IFEval benchmark without losing general capabilities.
arXiv Detail & Related papers (2024-12-16T09:47:43Z)
- Corpus Considerations for Annotator Modeling and Scaling [9.263562546969695]
We show that the commonly used user token model consistently outperforms more complex models.
Our findings shed light on the relationship between corpus statistics and annotator modeling performance.
arXiv Detail & Related papers (2024-04-02T22:27:24Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Globally Interpretable Graph Learning via Distribution Matching [12.885580925389352]
We aim to answer an important question that is not yet well studied: how to provide a global interpretation for the graph learning procedure?
We formulate this problem as globally interpretable graph learning, which aims to distill high-level and human-intelligible patterns that dominate the learning procedure.
We propose a novel model fidelity metric, tailored for evaluating the fidelity of the resulting model trained on interpretations.
arXiv Detail & Related papers (2023-06-18T00:50:36Z)
- Entailment as Robust Self-Learner [14.86757876218415]
We design a prompting strategy that formulates a number of different NLU tasks as contextual entailment.
We propose the Simple Pseudo-Label Editing (SimPLE) algorithm for better pseudo-labeling quality in self-training.
arXiv Detail & Related papers (2023-05-26T18:41:23Z)
- On the Compositional Generalization Gap of In-Context Learning [73.09193595292233]
We look at the gap between the in-distribution (ID) and out-of-distribution (OOD) performance of such models in semantic parsing tasks with in-context learning.
We evaluate four model families, OPT, BLOOM, CodeGen and Codex on three semantic parsing datasets.
arXiv Detail & Related papers (2022-11-15T19:56:37Z)
- Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z)
- MonaCoBERT: Monotonic attention based ConvBERT for Knowledge Tracing [3.187381965457262]
Knowledge tracing is a field of study that predicts the future performance of students based on prior performance datasets.
MonaCoBERT achieves the best performance on most benchmark datasets and has significant interpretability.
arXiv Detail & Related papers (2022-08-19T00:43:47Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- Improving Label Quality by Jointly Modeling Items and Annotators [68.8204255655161]
We propose a fully Bayesian framework for learning ground truth labels from noisy annotators.
Our framework ensures scalability by factoring a generative, Bayesian soft clustering model over label distributions into the classic Dawid and Skene joint annotator-data model.
arXiv Detail & Related papers (2021-06-20T02:15:20Z)
- Learning by Distillation: A Self-Supervised Learning Framework for Optical Flow Estimation [71.76008290101214]
DistillFlow is a knowledge distillation approach to learning optical flow.
It achieves state-of-the-art unsupervised learning performance on both KITTI and Sintel datasets.
Our models ranked 1st among all monocular methods on the KITTI 2015 benchmark, and outperform all published methods on the Sintel Final benchmark.
arXiv Detail & Related papers (2021-06-08T09:13:34Z)
- Adversarial Infidelity Learning for Model Interpretation [43.37354056251584]
We propose a Model-agnostic Effective Efficient Direct (MEED) IFS framework for model interpretation.
Our framework mitigates concerns about sanity, shortcuts, model identifiability, and information transmission.
Our AIL mechanism can help learn the desired conditional distribution between selected features and targets.
arXiv Detail & Related papers (2020-06-09T16:27:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.