Survey of Loss Augmented Knowledge Tracing
- URL: http://arxiv.org/abs/2504.15163v1
- Date: Mon, 21 Apr 2025 15:09:40 GMT
- Title: Survey of Loss Augmented Knowledge Tracing
- Authors: Altun Shukurlu
- Abstract summary: We provide a review of the deep learning-based knowledge tracing (DKT) algorithms trained using advanced loss functions. We discuss contrastive knowledge tracing algorithms, such as Bi-CLKT, CL4KT, SP-CLKT, CoSKT, and prediction-consistent DKT.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The training of artificial neural networks is heavily dependent on the careful selection of an appropriate loss function. While commonly used loss functions, such as cross-entropy and mean squared error (MSE), generally suffice for a broad range of tasks, challenges often emerge due to limitations in data quality or inefficiencies within the learning process. In such circumstances, the integration of supplementary terms into the loss function can serve to address these challenges, enhancing both model performance and robustness. Two prominent techniques, loss regularization and contrastive learning, have been identified as effective strategies for augmenting the capacity of loss functions in artificial neural networks. Knowledge tracing is a compelling area of research that leverages predictive artificial intelligence to facilitate the automation of personalized and efficient educational experiences for students. In this paper, we provide a comprehensive review of the deep learning-based knowledge tracing (DKT) algorithms trained using advanced loss functions and discuss their improvements over prior techniques. We discuss contrastive knowledge tracing algorithms, such as Bi-CLKT, CL4KT, SP-CLKT, CoSKT, and prediction-consistent DKT, providing performance benchmarks and insights into real-world deployment challenges. The survey concludes with future research directions, including hybrid loss strategies and context-aware modeling.
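As a concrete illustration of the loss augmentation the survey covers, the sketch below combines the standard binary cross-entropy objective of DKT with two supplementary terms: a waviness regularizer in the spirit of prediction-consistent DKT (penalizing abrupt changes in consecutive predictions) and an InfoNCE-style contrastive term of the kind used by methods such as CL4KT. This is a minimal PyTorch sketch under assumed tensor shapes, not any paper's reference implementation; the function name `augmented_kt_loss` and the weighting coefficients are illustrative.

```python
import torch
import torch.nn.functional as F

def augmented_kt_loss(pred, target, mask, z_i, z_j,
                      lambda_w1=0.003, lambda_w2=3.0, lambda_cl=0.1, tau=0.05):
    """Illustrative augmented KT loss: BCE + waviness regularization + contrastive term.

    pred:     (B, T) predicted probabilities of a correct response
    target:   (B, T) observed correctness labels in {0, 1}, as floats
    mask:     (B, T) 1.0 for valid interactions, 0.0 for padding
    z_i, z_j: (B, D) embeddings of two augmented views of each sequence
    """
    # Base objective: masked binary cross-entropy, as in standard DKT.
    bce = F.binary_cross_entropy(pred, target, reduction="none")
    bce = (bce * mask).sum() / mask.sum()

    # Waviness regularizers (in the spirit of prediction-consistent DKT):
    # penalize large jumps in predictions between consecutive time steps.
    diff = (pred[:, 1:] - pred[:, :-1]) * mask[:, 1:]
    w1 = diff.abs().sum() / mask[:, 1:].sum()   # L1 waviness
    w2 = diff.pow(2).sum() / mask[:, 1:].sum()  # L2 waviness

    # InfoNCE-style contrastive term: the two views of sequence i should be
    # more similar to each other than to views of other sequences in the batch.
    z_i, z_j = F.normalize(z_i, dim=-1), F.normalize(z_j, dim=-1)
    logits = (z_i @ z_j.t()) / tau                      # (B, B) similarities
    labels = torch.arange(z_i.size(0), device=z_i.device)
    contrastive = F.cross_entropy(logits, labels)

    return bce + lambda_w1 * w1 + lambda_w2 * w2 + lambda_cl * contrastive
```

In practice the relative weights are tuned per dataset, and the two contrastive views are typically produced by sequence augmentations such as question masking, interaction cropping, or response flipping.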
Related papers
- AdvKT: An Adversarial Multi-Step Training Framework for Knowledge Tracing [64.79967583649407]
Knowledge Tracing (KT) monitors students' knowledge states and simulates their responses to question sequences. Existing KT models typically follow a single-step training paradigm, which leads to significant error accumulation. We propose a novel Adversarial Multi-Step Training Framework for Knowledge Tracing (AdvKT), which focuses on the multi-step KT task.
arXiv Detail & Related papers (2025-04-07T03:31:57Z) - Loss Functions in Deep Learning: A Comprehensive Review [3.8001666556614446]
Loss functions are at the heart of deep learning, shaping how models learn and perform across diverse tasks. This paper presents a comprehensive review of loss functions, covering fundamental metrics like Mean Squared Error and Cross-Entropy to advanced functions such as Adversarial and Diffusion losses.
arXiv Detail & Related papers (2025-04-05T18:07:20Z) - Learning for Cross-Layer Resource Allocation in MEC-Aided Cell-Free Networks [71.30914500714262]
Cross-layer resource allocation over mobile edge computing (MEC)-aided cell-free networks can fully exploit transmission and computing resources to improve the data rate. Joint subcarrier allocation and beamforming optimization are investigated for the MEC-aided cell-free network from a deep learning perspective.
arXiv Detail & Related papers (2024-12-21T10:18:55Z) - Hyperspectral Image Analysis in Single-Modal and Multimodal setting using Deep Learning Techniques [1.2328446298523066]
Hyperspectral imaging enables precise land-use and land-cover classification due to its exceptional spectral resolution.
However, the challenges of high dimensionality and limited spatial resolution hinder its effectiveness.
This study addresses these challenges by employing deep learning techniques to efficiently process, extract features, and classify data in an integrated manner.
arXiv Detail & Related papers (2024-03-03T15:47:43Z) - Fast and Efficient Local Search for Genetic Programming Based Loss Function Learning [12.581217671500887]
We propose a new meta-learning framework for task and model-agnostic loss function learning via a hybrid search approach.
Results show that the learned loss functions bring improved convergence, sample efficiency, and inference performance on tabular, computer vision, and natural language processing problems.
arXiv Detail & Related papers (2024-03-01T02:20:04Z) - RoBoSS: A Robust, Bounded, Sparse, and Smooth Loss Function for Supervised Learning [0.0]
We propose a novel robust, bounded, sparse, and smooth (RoBoSS) loss function for supervised learning.
We introduce a new robust algorithm named $\mathcal{L}_{rbss}$-SVM that generalizes well to unseen data.
We evaluate the proposed $\mathcal{L}_{rbss}$-SVM on 88 real-world UCI and KEEL datasets from diverse domains.
arXiv Detail & Related papers (2023-09-05T13:59:50Z) - Effect of Choosing Loss Function when Using T-batching for Representation Learning on Dynamic Networks [0.0]
T-batching is a valuable technique for training dynamic network models.
We have identified a limitation in the training loss function used with t-batching.
We propose two alternative loss functions that overcome these issues, resulting in enhanced training performance.
arXiv Detail & Related papers (2023-08-13T23:34:36Z) - Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
arXiv Detail & Related papers (2023-07-28T14:51:09Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z) - Learning Adaptive Loss for Robust Learning with Noisy Labels [59.06189240645958]
Robust loss functions are an important strategy for handling learning with noisy labels.
We propose a meta-learning method capable of robust hyperparameter tuning.
Four kinds of SOTA robust loss functions are integrated into the method, and experiments substantiate its general availability and effectiveness; a concrete robust loss of this kind is sketched after this list.
arXiv Detail & Related papers (2020-02-16T00:53:37Z)
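To make the robust-loss idea in the last entry concrete, the sketch below implements the generalized cross-entropy (GCE) loss of Zhang & Sabuncu (2018), which interpolates between standard cross-entropy (q near 0) and MAE (q = 1) through a single hyperparameter q, exactly the kind of knob a meta-learner can tune. This is an illustrative PyTorch sketch, not the adaptive method of the paper above; `generalized_cross_entropy` is a hypothetical helper name.

```python
import torch

def generalized_cross_entropy(logits, target, q=0.7):
    """Generalized cross-entropy (GCE): L_q(p_y) = (1 - p_y^q) / q.

    logits: (B, C) unnormalized class scores
    target: (B,) integer class labels
    q:      robustness knob; q -> 0 recovers cross-entropy, q = 1 gives MAE
    """
    probs = torch.softmax(logits, dim=-1)
    p_y = probs.gather(1, target.unsqueeze(1)).squeeze(1)  # prob. of true class
    return ((1.0 - p_y.pow(q)) / q).mean()
```

Larger q trades convergence speed for robustness to label noise, which is why treating q as a meta-tuned hyperparameter is attractive.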