Auxiliary Task Guided Interactive Attention Model for Question
Difficulty Prediction
- URL: http://arxiv.org/abs/2207.01494v1
- Date: Tue, 24 May 2022 19:55:30 GMT
- Title: Auxiliary Task Guided Interactive Attention Model for Question
Difficulty Prediction
- Authors: Venktesh V, Md. Shad Akhtar, Mukesh Mohania and Vikram Goyal
- Abstract summary: We propose a multi-task method with an interactive attention mechanism, Qdiff, for jointly predicting the Bloom's Taxonomy and difficulty levels of academic questions.
The proposed method learns representations that capture the relationship between Bloom's Taxonomy and difficulty labels.
- Score: 6.951136079043972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online learning platforms conduct exams to evaluate learners in a
monotonous way: although the questions in the database may be classified under
Bloom's Taxonomy into levels of complexity ranging from basic knowledge to
advanced evaluation, the questions posed to all learners are largely static.
To provide a personalized learning experience, it becomes important to ask
each learner new questions at different difficulty levels. In this paper, we
propose a multi-task method with an interactive attention mechanism, Qdiff,
for jointly predicting the Bloom's Taxonomy and difficulty levels of academic
questions. We model the interaction between the predicted Bloom's Taxonomy
representations and the input representations using an attention mechanism to
aid difficulty prediction. The proposed learning method helps learn
representations that capture the relationship between Bloom's Taxonomy and
difficulty labels. By leveraging the relationship between the related tasks,
the proposed multi-task method learns a good input representation and can be
applied in similar settings where the tasks are related. The results
demonstrate that the proposed method performs better than training on
difficulty prediction alone. However, Bloom's labels may not always be
available for some datasets. Hence, we soft-label another dataset with a model
fine-tuned to predict Bloom's labels, demonstrating the applicability of our
method to datasets with only difficulty labels.
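The abstract describes an interaction in which a representation derived from the predicted Bloom's Taxonomy level attends over the input representation before difficulty is predicted. The following is a minimal NumPy sketch of that kind of interaction only; all dimensions, the pooling choice, the label-embedding table, and the scaled dot-product attention form are illustrative assumptions, not the paper's actual Qdiff architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes (not taken from the paper).
seq_len, d_model = 12, 64
n_bloom, n_difficulty = 6, 3

# Stand-in for a shared encoder's token representations of one question.
H = rng.standard_normal((seq_len, d_model))

# Task 1: predict the Bloom's Taxonomy level from a pooled representation.
W_bloom = rng.standard_normal((d_model, n_bloom)) * 0.1
bloom_probs = softmax(H.mean(axis=0) @ W_bloom)          # (n_bloom,)

# Interactive attention (illustrative): mix learned label embeddings by the
# predicted Bloom probabilities, then let that vector attend over the token
# representations to reweight them for the difficulty task.
E_bloom = rng.standard_normal((n_bloom, d_model)) * 0.1  # label embeddings
bloom_repr = bloom_probs @ E_bloom                       # (d_model,)
attn = softmax(H @ bloom_repr / np.sqrt(d_model))        # (seq_len,)
context = attn @ H                                       # (d_model,)

# Task 2: predict difficulty from the attended representation.
W_diff = rng.standard_normal((d_model, n_difficulty)) * 0.1
diff_probs = softmax(context @ W_diff)                   # (n_difficulty,)

print(bloom_probs.shape, diff_probs.shape)
```

In a multi-task setup both heads would be trained jointly (e.g. a summed cross-entropy loss over the two tasks), so gradients from difficulty prediction also shape the shared encoder and the Bloom-conditioned attention.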
Related papers
- Preview-based Category Contrastive Learning for Knowledge Distillation [53.551002781828146]
We propose a novel preview-based category contrastive learning method for knowledge distillation (PCKD)
It first distills the structural knowledge of both instance-level feature correspondence and the relation between instance features and category centers.
It can explicitly optimize the category representation and explore the distinct correlation between representations of instances and categories.
arXiv Detail & Related papers (2024-10-18T03:31:00Z)
- Multi-Label Knowledge Distillation [86.03990467785312]
We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
arXiv Detail & Related papers (2023-08-12T03:19:08Z)
- Association Graph Learning for Multi-Task Classification with Category Shifts [68.58829338426712]
We focus on multi-task classification, where related classification tasks share the same label space and are learned simultaneously.
We learn an association graph to transfer knowledge among tasks for missing classes.
Our method consistently performs better than representative baselines.
arXiv Detail & Related papers (2022-10-10T12:37:41Z)
- Comparing Text Representations: A Theory-Driven Approach [2.893558866535708]
We adapt general tools from computational learning theory to fit the specific characteristics of text datasets.
We present a method to evaluate the compatibility between representations and tasks.
This method provides a calibrated, quantitative measure of the difficulty of a classification-based NLP task.
arXiv Detail & Related papers (2021-09-15T17:48:19Z)
- TagRec: Automated Tagging of Questions with Hierarchical Learning Taxonomy [0.0]
Online educational platforms organize academic questions based on a hierarchical learning taxonomy (subject-chapter-topic)
This paper formulates the problem as a similarity-based retrieval task where we optimize the semantic relatedness between the taxonomy and the questions.
We demonstrate that our method helps to handle the unseen labels and hence can be used for taxonomy tagging in the wild.
arXiv Detail & Related papers (2021-07-03T11:50:55Z)
- Learning with Instance Bundles for Reading Comprehension [61.823444215188296]
We introduce new supervision techniques that compare question-answer scores across multiple related instances.
Specifically, we normalize these scores across various neighborhoods of closely contrasting questions and/or answers.
We empirically demonstrate the effectiveness of training with instance bundles on two datasets.
arXiv Detail & Related papers (2021-04-18T06:17:54Z)
- Curriculum Learning: A Survey [65.31516318260759]
Curriculum learning strategies have been successfully employed in all areas of machine learning.
We manually construct a taxonomy of curriculum learning approaches, considering various classification criteria.
We build a hierarchical tree of curriculum learning methods using an agglomerative clustering algorithm.
arXiv Detail & Related papers (2021-01-25T20:08:32Z)
- A Survey on Deep Learning with Noisy Labels: How to train your model when you cannot trust on the annotations? [21.562089974755125]
Several approaches have been proposed to improve the training of deep learning models in the presence of noisy labels.
This paper presents a survey of the main techniques in the literature, in which we classify the algorithms into the following groups: robust losses, sample weighting, sample selection, meta-learning, and combined approaches.
arXiv Detail & Related papers (2020-12-05T15:45:20Z) - Efficient PAC Learning from the Crowd with Pairwise Comparison [7.594050968868919]
We study the problem of PAC learning threshold functions from the crowd, where the annotators can provide (noisy) labels or pairwise comparison tags.
We design a label-efficient algorithm that interleaves learning and annotation, which incurs only a constant overhead.
arXiv Detail & Related papers (2020-11-02T16:37:55Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.