Self-supervised Graph Learning for Long-tailed Cognitive Diagnosis
- URL: http://arxiv.org/abs/2210.08169v1
- Date: Sat, 15 Oct 2022 02:57:09 GMT
- Title: Self-supervised Graph Learning for Long-tailed Cognitive Diagnosis
- Authors: Shanshan Wang, Zhen Zeng, Xun Yang, Xingyi Zhang
- Abstract summary: We propose a Self-supervised Cognitive Diagnosis (SCD) framework to assist graph-based cognitive diagnosis.
Specifically, we propose a graph confusion method that drops edges under special rules to generate different sparse views of the graph.
- Score: 25.78814557029563
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cognitive diagnosis is a fundamental yet critical research task in the field
of intelligent education, which aims to discover the proficiency level of
different students on specific knowledge concepts. Despite the effectiveness of
existing efforts, previous methods always estimated mastery levels over the whole
student population, so they still suffer from the long-tail effect: the many
students with sparse interaction data are modeled poorly. To relieve
this situation, we propose a Self-supervised Cognitive Diagnosis (SCD)
framework that leverages self-supervised learning to assist graph-based
cognitive diagnosis and thereby improve performance for students with sparse
data. Specifically, we propose a graph confusion method that
drops edges under some special rules to generate different sparse views of the
graph. By maximizing the consistency of the representation on the same node
under different views, the model could be more focused on long-tailed students.
Additionally, we propose an importance-based view-generation rule to increase
the influence of long-tailed students. Extensive experiments on real-world
datasets show the effectiveness of our approach, especially on the students
with sparse data.
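The edge-dropping and consistency objective described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names, the degree-based keep probability (standing in for the paper's "special rules" and importance-based view generation), and the cosine consistency loss are all assumptions.

```python
import numpy as np


def importance_drop_edges(edges, num_students, keep_base=0.5, rng=None):
    """Generate one sparse view by dropping student-exercise edges.

    Hypothetical importance rule: the keep probability rises as a
    student's degree falls, so long-tailed students (few interactions)
    retain more of their edges and their views stay informative.
    """
    rng = rng or np.random.default_rng(0)
    degrees = np.bincount([s for s, _ in edges], minlength=num_students)
    kept = []
    for s, e in edges:
        # Degree-1 students get keep_prob == 1.0: their only edge survives.
        keep_prob = keep_base + (1.0 - keep_base) / degrees[s]
        if rng.random() < keep_prob:
            kept.append((s, e))
    return kept


def consistency_loss(z1, z2):
    """Negative mean cosine similarity between per-node embeddings of
    the same nodes under two different views (lower is more consistent)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return -float(np.mean(np.sum(z1 * z2, axis=1)))
```

In a full pipeline, one would call `importance_drop_edges` twice with different random seeds, encode each sparse view with the graph model, and minimize `consistency_loss` between the two sets of node embeddings, which pushes the encoder toward stable representations for the long-tailed students whose edges are preferentially retained.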
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z) - Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z) - Imbalanced Graph-Level Anomaly Detection via Counterfactual Augmentation and Feature Learning [1.3756846638796]
We propose an imbalanced GLAD method via counterfactual augmentation and feature learning.
We apply the model to brain disease datasets, which demonstrates the capability of our work.
arXiv Detail & Related papers (2024-07-13T13:40:06Z) - Improving Cognitive Diagnosis Models with Adaptive Relational Graph Neural Networks [33.76551090755183]
Cognitive Diagnosis (CD) algorithms assist students by inferring their abilities on various knowledge concepts.
Recently, researchers have found that building and incorporating a student-exercise bipartite graph is beneficial for enhancing diagnostic performance.
We propose Adaptive Semantic-aware Graph-based Cognitive Diagnosis model (ASG-CD), which introduces a novel and effective way to leverage bipartite graph information in CD.
arXiv Detail & Related papers (2024-02-15T14:12:38Z) - Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting [64.80538055623842]
Sociodemographic prompting is a technique that steers the output of prompt-based models toward answers that humans with specific sociodemographic profiles would give.
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
arXiv Detail & Related papers (2023-09-13T15:42:06Z) - GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z) - Self-supervised Representation Learning on Electronic Health Records with Graph Kernel Infomax [4.133378723518227]
We propose Graph Kernel Infomax, a self-supervised graph kernel learning approach on the graphical representation of EHR.
Unlike the state-of-the-art, we do not change the graph structure to construct augmented views.
Our approach yields performance on clinical downstream tasks that exceeds the state-of-the-art.
arXiv Detail & Related papers (2022-09-01T16:15:08Z) - A Survey on Long-Tailed Visual Recognition [13.138929184395423]
We focus on the problems caused by long-tailed data distribution, sort out the representative long-tailed visual recognition datasets and summarize some mainstream long-tailed studies.
Based on the Gini coefficient, we quantitatively study 20 widely-used and large-scale visual datasets proposed in the last decade.
arXiv Detail & Related papers (2022-05-27T06:22:55Z) - Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z) - Iterative Graph Self-Distillation [161.04351580382078]
We propose a novel unsupervised graph learning paradigm called Iterative Graph Self-Distillation (IGSD).
IGSD iteratively performs the teacher-student distillation with graph augmentations.
We show that we achieve significant and consistent performance gain on various graph datasets in both unsupervised and semi-supervised settings.
arXiv Detail & Related papers (2020-10-23T18:37:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.