Language Representation Favored Zero-Shot Cross-Domain Cognitive Diagnosis
- URL: http://arxiv.org/abs/2501.13943v1
- Date: Sat, 18 Jan 2025 03:35:44 GMT
- Title: Language Representation Favored Zero-Shot Cross-Domain Cognitive Diagnosis
- Authors: Shuo Liu, Zihan Zhou, Yuanhao Liu, Jing Zhang, Hong Qian,
- Abstract summary: This paper proposes the language representation favored zero-shot cross-domain cognitive diagnosis (LRCD).
LRCD first analyzes the behavior patterns of students, exercises and concepts in different domains, and then describes the profiles of students, exercises and concepts using textual descriptions.
To address the discrepancy between the language space and the cognitive diagnosis space, we propose language-cognitive mappers in LRCD to learn the mapping from the former to the latter.
- Score: 15.006031265076006
- Abstract: Cognitive diagnosis aims to infer students' mastery levels based on their historical response logs. However, existing cognitive diagnosis models (CDMs), which rely on ID embeddings, often have to train specific models on specific domains. This limitation may hinder their directly practical application in various target domains, such as different subjects (e.g., Math, English and Physics) or different education platforms (e.g., ASSISTments, Junyi Academy and Khan Academy). To address this issue, this paper proposes the language representation favored zero-shot cross-domain cognitive diagnosis (LRCD). Specifically, LRCD first analyzes the behavior patterns of students, exercises and concepts in different domains, and then describes the profiles of students, exercises and concepts using textual descriptions. Via recent advanced text-embedding modules, these profiles can be transformed to vectors in the unified language space. Moreover, to address the discrepancy between the language space and the cognitive diagnosis space, we propose language-cognitive mappers in LRCD to learn the mapping from the former to the latter. Then, these profiles can be easily and efficiently integrated and trained with existing CDMs. Extensive experiments show that training LRCD on real-world datasets can achieve commendable zero-shot performance across different target domains, and in some cases, it can even achieve competitive performance with some classic CDMs trained on the full response data on target domains. Notably, we surprisingly find that LRCD can also provide interesting insights into the differences between various subjects (such as humanities and sciences) and sources (such as primary and secondary education).
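The pipeline described in the abstract (textual profiles encoded into a unified language space, mapped by language-cognitive mappers into the cognitive diagnosis space, then consumed by an existing CDM) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding dimensions, the choice of a single linear layer as the mapper, and the IRT-style dot-product head are all assumptions for demonstration. Real profile embeddings would come from an off-the-shelf text-embedding model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions for illustration: profiles live in an 8-d "language
# space" and the downstream CDM expects 4-d "cognitive space" vectors.
LANG_DIM, COG_DIM = 8, 4

# Stand-ins for text embeddings of a student profile and an exercise
# profile (in LRCD these would come from a text-embedding module applied
# to the textual descriptions of behavior patterns).
student_profile_emb = rng.normal(size=LANG_DIM)
exercise_profile_emb = rng.normal(size=LANG_DIM)

def make_mapper(in_dim, out_dim, rng):
    """A language-cognitive mapper. A single linear layer is an
    assumption here; the summary does not fix the architecture."""
    W = rng.normal(scale=in_dim ** -0.5, size=(in_dim, out_dim))
    b = np.zeros(out_dim)
    return lambda x: x @ W + b

student_mapper = make_mapper(LANG_DIM, COG_DIM, rng)
exercise_mapper = make_mapper(LANG_DIM, COG_DIM, rng)

# Map from the language space into the cognitive diagnosis space, then
# feed the results to a classic CDM head -- here a simple IRT-style
# interaction: sigmoid of the mastery/difficulty dot product.
theta = student_mapper(student_profile_emb)    # student mastery vector
beta = exercise_mapper(exercise_profile_emb)   # exercise feature vector

p_correct = 1.0 / (1.0 + np.exp(-(theta @ beta)))
print(f"predicted probability of a correct response: {p_correct:.3f}")
```

Because the profiles are plain text, a student or exercise from an unseen target domain can be embedded and mapped the same way, which is what enables the zero-shot transfer the abstract describes; only the mappers (and optionally the CDM head) need to be trained on source-domain response logs.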
Related papers
- Knowledge is Power: Harnessing Large Language Models for Enhanced Cognitive Diagnosis [12.936153018855649]
Cognitive Diagnosis Models (CDMs) are designed to assess students' cognitive states by analyzing their performance across a series of exercises.
Existing CDMs often struggle with diagnosing infrequent students and exercises due to a lack of rich prior knowledge.
With the advancement in large language models (LLMs), their integration into cognitive diagnosis presents a promising opportunity.
arXiv Detail & Related papers (2025-02-08T13:02:45Z)
- A Dual-Fusion Cognitive Diagnosis Framework for Open Student Learning Environments [10.066184572184627]
This paper proposes a dual-fusion cognitive diagnosis framework (DFCD) to address the challenge of aligning two different modalities.
Experiments show that DFCD achieves superior performance by integrating different modalities and strong adaptability in open student learning environments.
arXiv Detail & Related papers (2024-10-19T10:12:02Z)
- Unified Language-driven Zero-shot Domain Adaptation [55.64088594551629]
Unified Language-driven Zero-shot Domain Adaptation (ULDA) is a novel task setting.
It enables a single model to adapt to diverse target domains without explicit domain-ID knowledge.
arXiv Detail & Related papers (2024-04-10T16:44:11Z)
- Language Guided Domain Generalized Medical Image Segmentation [68.93124785575739]
Single source domain generalization holds promise for more reliable and consistent image segmentation across real-world clinical settings.
We propose an approach that explicitly leverages textual information by incorporating a contrastive learning mechanism guided by the text encoder features.
Our approach achieves favorable performance against existing methods in literature.
arXiv Detail & Related papers (2024-04-01T17:48:15Z)
- Zero-1-to-3: Domain-level Zero-shot Cognitive Diagnosis via One Batch of Early-bird Students towards Three Diagnostic Objectives [16.964558645359862]
This paper focuses on domain-level zero-shot cognitive diagnosis (DZCD)
Recent cross-domain diagnostic models have been demonstrated to be a promising strategy for DZCD.
We propose Zero-1-to-3, a domain-level zero-shot cognitive diagnosis framework via one batch of early-bird students.
arXiv Detail & Related papers (2023-12-20T21:20:23Z)
- Adapt in Contexts: Retrieval-Augmented Domain Adaptation via In-Context Learning [48.22913073217633]
Large language models (LLMs) have showcased their capability with few-shot inference known as in-context learning.
In this paper, we study the UDA problem under an in-context learning setting to adapt language models from the source domain to the target domain without any target labels.
We devise different prompting and training strategies, accounting for different LM architectures to learn the target distribution via language modeling.
arXiv Detail & Related papers (2023-11-20T06:06:20Z)
- CDFSL-V: Cross-Domain Few-Shot Learning for Videos [58.37446811360741]
Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples.
Existing methods in video action recognition rely on large labeled datasets from the same domain.
We propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning.
arXiv Detail & Related papers (2023-09-07T19:44:27Z)
- Cross-domain Imitation from Observations [50.669343548588294]
Imitation learning seeks to circumvent the difficulty in designing proper reward functions for training agents by utilizing expert behavior.
In this paper, we study the problem of how to imitate tasks when there exist discrepancies between the expert and agent MDP.
We present a novel framework to learn correspondences across such domains.
arXiv Detail & Related papers (2021-05-20T21:08:25Z)
- Cross-domain Face Presentation Attack Detection via Multi-domain Disentangled Representation Learning [109.42987031347582]
Face presentation attack detection (PAD) has been an urgent problem to be solved in the face recognition systems.
We propose an efficient disentangled representation learning for cross-domain face PAD.
Our approach consists of disentangled representation learning (DR-Net) and multi-domain learning (MD-Net).
arXiv Detail & Related papers (2020-04-04T15:45:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.