Automating Transfer Credit Assessment in Student Mobility -- A Natural
Language Processing-based Approach
- URL: http://arxiv.org/abs/2104.01955v1
- Date: Mon, 5 Apr 2021 15:14:59 GMT
- Authors: Dhivya Chandrasekaran and Vijay Mago
- Abstract summary: This research article focuses on identifying a model that exploits the advancements in the field of Natural Language Processing (NLP) to effectively automate this process.
The proposed model uses a clustering-inspired methodology based on knowledge-based semantic similarity measures to assess the taxonomic similarity of learning outcomes (LOs).
The similarity between LOs is further aggregated to form course-to-course similarity.
- Score: 5.947076788303102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Student mobility or academic mobility involves students moving between
institutions during their post-secondary education, and one of the challenging
tasks in this process is to assess the transfer credits to be offered to the
incoming student. In general, this process involves domain experts comparing
the learning outcomes of the courses to decide whether to offer transfer
credits to the incoming students. This manual process is not only labor-intensive
but also influenced by undue bias and administrative complexity. The proposed
research article focuses on identifying a model that exploits the advancements
in the field of Natural Language Processing (NLP) to effectively automate this
process. Given the unique structure, domain specificity, and complexity of
learning outcomes (LOs), a need for designing a tailor-made model arises. The
proposed model uses a clustering-inspired methodology based on knowledge-based
semantic similarity measures to assess the taxonomic similarity of LOs and a
transformer-based semantic similarity model to assess the semantic similarity
of the LOs. The similarity between LOs is further aggregated to form
course-to-course similarity. Due to the lack of quality benchmark datasets, a new
benchmark dataset containing seven course-to-course similarity measures is
proposed. Recognizing the inherent need for flexibility in the
decision-making process, the aggregation part of the model offers tunable
parameters to accommodate different scenarios. While providing an efficient
model to assess the similarity between courses with existing resources, this
research work steers future research attempts to apply NLP in the field of
articulation in an ideal direction by highlighting the persisting research
gaps.
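The abstract describes a two-stage pipeline: blend a knowledge-based (taxonomic) score with a transformer-based (semantic) score for each LO pair, then aggregate LO-level scores into a course-to-course score under tunable parameters. The sketch below is a hypothetical illustration of that aggregation step, not the paper's actual formulation; the blending weight `alpha`, the best-match aggregator, and the function names are all illustrative assumptions.

```python
# Illustrative sketch (not the paper's exact method): collapse a matrix of
# LO-to-LO similarity scores into a single course-to-course score.

def lo_similarity(taxonomic: float, semantic: float, alpha: float = 0.5) -> float:
    """Blend a knowledge-based (taxonomic) score with a transformer-based
    (semantic) score. `alpha` is an assumed tunable weight in [0, 1]."""
    return alpha * taxonomic + (1 - alpha) * semantic

def course_similarity(lo_matrix: list[list[float]]) -> float:
    """For each LO of the incoming course (one row per LO), take its best
    match among the target course's LOs, then average the best matches."""
    best_matches = [max(row) for row in lo_matrix]
    return sum(best_matches) / len(best_matches)

# Example: an incoming course with 2 LOs compared against a course with 3 LOs.
matrix = [
    [lo_similarity(0.9, 0.8), lo_similarity(0.2, 0.3), lo_similarity(0.4, 0.5)],
    [lo_similarity(0.1, 0.2), lo_similarity(0.7, 0.9), lo_similarity(0.3, 0.3)],
]
print(round(course_similarity(matrix), 3))  # → 0.825
```

Swapping the best-match aggregator for a mean or a thresholded count is one way the "tunable parameters to accommodate different scenarios" mentioned in the abstract could be realized.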
Related papers
- Modeling Output-Level Task Relatedness in Multi-Task Learning with Feedback Mechanism [7.479892725446205]
Multi-task learning (MTL) is a paradigm that simultaneously learns multiple tasks by sharing information at different levels.
We introduce a posteriori information into the model, considering that different tasks may produce correlated outputs with mutual influences.
We achieve this by incorporating a feedback mechanism into MTL models, where the output of one task serves as a hidden feature for another task.
arXiv Detail & Related papers (2024-04-01T03:27:34Z)
- A Probabilistic Model behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Causal Coordinated Concurrent Reinforcement Learning [8.654978787096807]
We propose a novel algorithmic framework for data sharing and coordinated exploration for the purpose of learning more data-efficient and better performing policies under a concurrent reinforcement learning setting.
Our algorithm leverages a causal inference algorithm in the form of Additive Noise Model - Mixture Model (ANM-MM) in extracting model parameters governing individual differentials via independence enforcement.
We propose a new data sharing scheme based on a similarity measure of the extracted model parameters and demonstrate superior learning speeds on a set of autoregressive, pendulum and cart-pole swing-up tasks.
arXiv Detail & Related papers (2024-01-31T17:20:28Z)
- Differentiable Retrieval Augmentation via Generative Language Modeling for E-commerce Query Intent Classification [8.59563091603226]
We propose Differentiable Retrieval Augmentation via Generative lANguage modeling (Dragan) to address this problem by a novel differentiable reformulation.
We demonstrate the effectiveness of our proposed method on a challenging NLP task in e-commerce search, namely query intent classification.
arXiv Detail & Related papers (2023-08-18T05:05:35Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Weakly Supervised Semantic Segmentation via Alternative Self-Dual Teaching [82.71578668091914]
This paper establishes a compact learning framework that embeds the classification and mask-refinement components into a unified deep model.
We propose a novel alternative self-dual teaching (ASDT) mechanism to encourage high-quality knowledge interaction.
arXiv Detail & Related papers (2021-12-17T11:56:56Z)
- Transfer Learning Based Co-surrogate Assisted Evolutionary Bi-objective Optimization for Objectives with Non-uniform Evaluation Times [9.139734850798124]
Multiobjective evolutionary algorithms assume that each objective function can be evaluated within the same period of time.
A co-surrogate is adopted to model the functional relationship between the fast and slow objective functions.
A transferable instance selection method is introduced to acquire useful knowledge from the search process of the fast objective.
arXiv Detail & Related papers (2021-08-30T16:10:15Z)
- A Taxonomy of Similarity Metrics for Markov Decision Processes [62.997667081978825]
In recent years, transfer learning has succeeded in making Reinforcement Learning (RL) algorithms more efficient.
In this paper, we propose a categorization of these metrics and analyze the definitions of similarity proposed so far.
arXiv Detail & Related papers (2021-03-08T12:36:42Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.