Optimizing Student Ability Assessment: A Hierarchy Constraint-Aware Cognitive Diagnosis Framework for Educational Contexts
- URL: http://arxiv.org/abs/2412.04488v1
- Date: Thu, 21 Nov 2024 11:37:36 GMT
- Title: Optimizing Student Ability Assessment: A Hierarchy Constraint-Aware Cognitive Diagnosis Framework for Educational Contexts
- Authors: Xinjie Sun, Qi Liu, Kai Zhang, Shuanghong Shen, Fei Wang, Yan Zhuang, Zheng Zhang, Weiyin Gong, Shijin Wang, Lina Yang, Xingying Huo
- Abstract summary: We propose the Hierarchy Constraint-Aware Cognitive Diagnosis Framework (HCD)
HCD aims to more accurately represent student ability performance within real educational contexts.
It integrates hierarchy constraint perception features with existing models, improving the representation of both individual and group characteristics.
- Score: 26.287186220848763
- Abstract: Cognitive diagnosis (CD) aims to reveal students' proficiency in specific knowledge concepts. With the increasing adoption of intelligent education applications, accurately assessing students' knowledge mastery has become an urgent challenge. Although existing cognitive diagnosis frameworks improve diagnostic accuracy by analyzing students' explicit response records, they focus primarily on individual knowledge states and fail to adequately reflect students' relative ability within hierarchies. To address this, we propose the Hierarchy Constraint-Aware Cognitive Diagnosis Framework (HCD), designed to more accurately represent student ability in real educational contexts. Specifically, the framework introduces a hierarchy mapping layer to identify students' levels. It then employs a hierarchy convolution-enhanced attention layer for in-depth analysis of knowledge-concept performance among students at the same level, uncovering nuanced differences. A hierarchy inter-sampling attention layer captures performance differences across hierarchies, offering a comprehensive understanding of the relationships among students' knowledge states. Finally, through personalized diagnostic enhancement, the framework integrates hierarchy constraint perception features with existing models, improving the representation of both individual and group characteristics and enabling precise inference of students' knowledge states. Our study shows that this framework not only constrains changes in students' knowledge states so that they align with real educational settings, but also supports the scientific rigor and fairness of educational assessments, thereby advancing the field of cognitive diagnosis.
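The abstract describes a layered architecture (hierarchy mapping, within-level convolution-enhanced attention, cross-level inter-sampling attention, and personalized diagnostic enhancement) but gives no implementation details. The sketch below is only an illustrative PyTorch rendering of that pipeline under stated assumptions: the module names, embedding dimensions, level prototypes, and the simple interaction-based predictor are placeholders for illustration, not the authors' code.

```python
# Minimal sketch of the HCD pipeline as described in the abstract (not the
# authors' implementation). Dimensions, layer choices, and the level-prototype
# mechanism are assumptions made for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchyConstraintAwareCD(nn.Module):
    def __init__(self, n_students, n_concepts, n_levels=4, dim=64, n_heads=4):
        super().__init__()
        self.student_emb = nn.Embedding(n_students, dim)   # latent student state
        self.concept_emb = nn.Embedding(n_concepts, dim)   # knowledge concepts
        # Hierarchy mapping layer: soft assignment of each student to a level.
        self.level_mapper = nn.Linear(dim, n_levels)
        # Within-level analysis ("hierarchy convolution-enhanced attention"):
        # a 1-D convolution over the stacked features followed by self-attention.
        self.conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.intra_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Cross-level comparison ("hierarchy inter-sampling attention").
        self.inter_attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        # Personalized diagnostic enhancement: fuse hierarchy features with the
        # base student state before a simple interaction-based predictor.
        self.fuse = nn.Linear(2 * dim, dim)
        self.predict = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, student_ids, concept_ids, level_prototypes):
        s = self.student_emb(student_ids)                  # (B, dim)
        c = self.concept_emb(concept_ids)                  # (B, dim)
        level_probs = F.softmax(self.level_mapper(s), -1)  # (B, n_levels)

        # Within-level: contextualize the student state against peers at the
        # same level (approximated here by the level prototype vectors).
        ctx = level_probs @ level_prototypes               # (B, dim)
        seq = torch.stack([s, ctx], dim=1)                 # (B, 2, dim)
        seq = self.conv(seq.transpose(1, 2)).transpose(1, 2)
        intra, _ = self.intra_attn(seq, seq, seq)

        # Cross-level: attend from the student state to all level prototypes.
        protos = level_prototypes.unsqueeze(0).expand(s.size(0), -1, -1)
        inter, _ = self.inter_attn(s.unsqueeze(1), protos, protos)

        h = self.fuse(torch.cat([intra.mean(1), inter.squeeze(1)], dim=-1))
        # Interaction with the concept embedding yields a mastery/correctness score.
        return self.predict(h * c).squeeze(-1)             # P(correct)


# Toy usage: 100 students, 20 concepts, 4 hierarchy levels.
model = HierarchyConstraintAwareCD(n_students=100, n_concepts=20)
prototypes = torch.randn(4, 64)  # learnable or pooled per level in practice
p = model(torch.tensor([0, 1]), torch.tensor([3, 7]), prototypes)
print(p.shape)  # torch.Size([2])
```

In the paper the hierarchy-aware features are fused with an existing cognitive diagnosis model; here a single sigmoid predictor stands in for that base model, and the level prototypes are passed in as a fixed tensor purely for brevity.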
Related papers
- Concept-Aware Latent and Explicit Knowledge Integration for Enhanced Cognitive Diagnosis [13.781755345440914]
We propose a Concept-aware Latent and Explicit Knowledge Integration model for cognitive diagnosis (CLEKI-CD)
Specifically, a multidimensional vector is constructed according to the students' mastery and exercise difficulty for each knowledge concept from multiple perspectives.
We employ a combined cognitive diagnosis layer to integrate both latent and explicit knowledge, further enhancing cognitive diagnosis performance.
arXiv Detail & Related papers (2025-02-04T08:37:16Z)
- Enhancing Cognitive Diagnosis by Modeling Learner Cognitive Structure State [3.0103051895985256]
Theoretically, an individual's cognitive state is essentially equivalent to their cognitive structure state.
A learner's cognitive structure is essential for promoting meaningful learning and shaping academic performance.
arXiv Detail & Related papers (2024-12-27T17:41:39Z)
- Inductive Cognitive Diagnosis for Fast Student Learning in Web-Based Online Intelligent Education Systems [9.907875032214996]
This paper proposes an inductive cognitive diagnosis model (ICDM) for fast inference of new students' mastery levels in WOIESs.
To obtain this representation, ICDM consists of a construction-aggregation-generation-transformation process.
Experiments across real-world datasets show that, compared with existing cognitive diagnosis methods, which are typically transductive, ICDM is much faster.
arXiv Detail & Related papers (2024-04-17T11:55:43Z)
- ReliCD: A Reliable Cognitive Diagnosis Framework with Confidence Awareness [26.60714613122676]
Existing approaches often suffer from the issue of overconfidence in predicting students' mastery levels.
We propose a novel Reliable Cognitive Diagnosis (ReliCD) framework, which can quantify the confidence of the diagnosis feedback.
arXiv Detail & Related papers (2023-12-29T07:30:58Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- Learning Knowledge Representation with Meta Knowledge Distillation for Single Image Super-Resolution [82.89021683451432]
We propose a model-agnostic meta knowledge distillation method under the teacher-student architecture for the single image super-resolution task.
Experiments conducted on various single image super-resolution datasets demonstrate that our proposed method outperforms existing knowledge-representation-related distillation methods.
arXiv Detail & Related papers (2022-07-18T02:41:04Z)
- UIILD: A Unified Interpretable Intelligent Learning Diagnosis Framework for Intelligent Tutoring Systems [8.354034992258482]
The proposed unified interpretable intelligent learning diagnosis (UIILD) framework benefits from the powerful representation learning ability of deep learning and the interpretability of psychometrics.
Within the proposed framework, this paper presents a two-channel learning diagnosis mechanism LDM-ID as well as a three-channel learning diagnosis mechanism LDM-HMI.
arXiv Detail & Related papers (2022-07-07T07:04:22Z)
- Generalized Knowledge Distillation via Relationship Matching [53.69235109551099]
Knowledge of a well-trained deep neural network (a.k.a. the "teacher") is valuable for learning similar tasks.
Knowledge distillation extracts knowledge from the teacher and integrates it with the target model.
Instead of requiring the teacher to work on the same task as the student, we borrow the knowledge from a teacher trained on a general label space.
arXiv Detail & Related papers (2022-05-04T06:49:47Z)
- Spectrum-Guided Adversarial Disparity Learning [52.293230153385124]
We propose a novel end-to-end knowledge directed adversarial learning framework.
It models the class-conditioned intra-class disparity using two competitive encoding distributions and learns purified latent codes by denoising the learned disparity.
Experiments on four HAR benchmark datasets demonstrate the robustness and generalization of our proposed methods against a set of state-of-the-art methods.
arXiv Detail & Related papers (2020-07-14T05:46:27Z)
- Dual Policy Distillation [58.43610940026261]
Policy distillation, which transfers a teacher policy to a student policy, has achieved great success in challenging tasks of deep reinforcement learning.
In this work, we introduce dual policy distillation (DPD), a student-student framework in which two learners operate on the same environment and explore different perspectives of it.
The key challenge in developing this dual learning framework is to identify the beneficial knowledge from the peer learner for contemporary learning-based reinforcement learning algorithms.
arXiv Detail & Related papers (2020-06-07T06:49:47Z)
- Hierarchical Reinforcement Learning for Automatic Disease Diagnosis [52.111516253474285]
We propose to integrate a two-level hierarchical policy structure into the dialogue system for policy learning.
The proposed policy structure can handle diagnosis problems involving a large number of diseases and symptoms.
arXiv Detail & Related papers (2020-04-29T15:02:41Z)