Unified Uncertainty Estimation for Cognitive Diagnosis Models
- URL: http://arxiv.org/abs/2403.14676v1
- Date: Sat, 9 Mar 2024 13:48:20 GMT
- Title: Unified Uncertainty Estimation for Cognitive Diagnosis Models
- Authors: Fei Wang, Qi Liu, Enhong Chen, Chuanren Liu, Zhenya Huang, Jinze Wu, Shijin Wang
- Abstract summary: We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of diagnostic parameters into data aspect and model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
- Score: 70.46998436898205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cognitive diagnosis models have been widely used in different areas, especially intelligent education, to measure users' proficiency levels on knowledge concepts, based on which users can receive personalized instruction. Because the measurement is not always reliable due to weaknesses in the models and data, the uncertainty of the measurement also offers important information for decision making. However, research on uncertainty estimation lags behind research on advanced model structures for cognitive diagnosis. Existing approaches have limited efficiency and leave a gap for sophisticated models that have interaction-function parameters (e.g., deep learning-based models). To address these problems, we propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models. Specifically, based on the idea of estimating the posterior distributions of cognitive diagnosis model parameters, we first provide a unified objective function for mini-batch-based optimization that can be more efficiently applied to a wide range of models and large datasets. Then, we modify the reparameterization approach to adapt it to parameters defined on different domains. Furthermore, we decompose the uncertainty of diagnostic parameters into a data aspect and a model aspect, which better explains the source of uncertainty. Extensive experiments demonstrate that our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
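The abstract names two technical ingredients without detail: a reparameterized variational posterior and an adaptation of the reparameterization trick to parameters defined on different domains. Below is a minimal PyTorch sketch of that second idea, assuming a Gaussian posterior in an unconstrained space pushed through link functions; the function names, link choices, and shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# A minimal sketch (not the paper's code) of reparameterization adapted
# to parameters living on different domains.

def sample_real(mu, rho):
    """theta ~ N(mu, sigma^2) via theta = mu + sigma * eps, sigma = softplus(rho) > 0."""
    sigma = F.softplus(rho)
    eps = torch.randn_like(mu)
    return mu + sigma * eps, sigma

def sample_unit_interval(mu, rho):
    """Parameters constrained to (0, 1), e.g., proficiency levels:
    reparameterize in R, then map through a sigmoid link."""
    z, sigma = sample_real(mu, rho)
    return torch.sigmoid(z), sigma

def sample_positive(mu, rho):
    """Strictly positive parameters, e.g., item discriminations:
    an exponential link, i.e., a log-normal posterior."""
    z, sigma = sample_real(mu, rho)
    return torch.exp(z), sigma

# Illustrative: variational parameters for one learner's proficiency on 4 concepts.
mu = torch.zeros(4, requires_grad=True)
rho = torch.full((4,), -3.0, requires_grad=True)
theta, sigma = sample_unit_interval(mu, rho)
print(theta)  # stochastic proficiency sample in (0, 1); gradients flow back to mu, rho
print(sigma)  # posterior scale: larger values indicate higher uncertainty
```

Because the draw stays differentiable in mu and rho, the variational parameters can be trained with ordinary mini-batch gradient descent, which is consistent with the abstract's unified mini-batch objective; the learned sigma is one natural readout of parameter uncertainty.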
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z) - A Survey of Models for Cognitive Diagnosis: New Developments and Future Directions [66.40362209055023]
This paper aims to provide a survey of current models for cognitive diagnosis, with more attention on new developments using machine learning-based methods.
By comparing the model structures, parameter estimation algorithms, model evaluation methods and applications, we provide a relatively comprehensive review of the recent trends in cognitive diagnosis models.
arXiv Detail & Related papers (2024-07-07T18:02:00Z) - Decoding Decision Reasoning: A Counterfactual-Powered Model for Knowledge Discovery [6.1521675665532545]
In medical imaging, discerning the rationale behind an AI model's predictions is crucial for evaluating its reliability.
We propose an explainable model that is equipped with both decision reasoning and feature identification capabilities.
By implementing our method, we can efficiently identify and visualise class-specific features leveraged by the data-driven model.
arXiv Detail & Related papers (2024-05-23T19:00:38Z) - Benchmarking Scalable Epistemic Uncertainty Quantification in Organ Segmentation [7.313010190714819]
Quantifying the uncertainty associated with model predictions is crucial in critical clinical applications.
Deep learning based methods for automatic organ segmentation have shown promise in aiding diagnosis and treatment planning.
It is unclear which uncertainty quantification method is preferred in the medical image analysis setting.
arXiv Detail & Related papers (2023-08-15T00:09:33Z) - Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation [50.920911532133154]
The intrinsic ill-posedness and ordinal-sensitive nature of monocular depth estimation (MDE) models pose major challenges to the estimation of uncertainty degree.
We propose to model the uncertainty of MDE models from the perspective of the inherent probability distributions.
By simply introducing additional training regularization terms, our model, with a surprisingly simple formulation and without requiring extra modules or multiple inferences, can provide uncertainty estimates with state-of-the-art reliability.
arXiv Detail & Related papers (2023-07-19T12:11:15Z) - Topological Interpretability for Deep-Learning [0.30806551485143496]
Deep learning (DL) models typically cannot quantify the certainty of their predictions.
This work presents a method to infer prominent features in two DL classification models trained on clinical and non-clinical text.
arXiv Detail & Related papers (2023-05-15T13:38:13Z) - A Meta-heuristic Approach to Estimate and Explain Classifier Uncertainty [0.4264192013842096]
This work proposes a set of class-independent meta-heuristics that can characterize the complexity of an instance in terms of factors that are mutually relevant to both human and machine learning decision-making.
The proposed measures and framework hold promise for improving model development for more complex instances, as well as providing a new means of model abstention and explanation.
arXiv Detail & Related papers (2023-04-20T13:09:28Z) - Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which uses an uncertainty-calibrated error metric to filter reliable data (the subjective-logic uncertainty it builds on is sketched after this list).
arXiv Detail & Related papers (2023-01-01T05:02:46Z) - NeuralSympCheck: A Symptom Checking and Disease Diagnostic Neural Model with Logic Regularization [59.15047491202254]
Symptom checking systems ask users about their symptoms and perform a rapid and affordable medical assessment of their condition.
We propose a new approach based on the supervised learning of neural models with logic regularization.
Our experiments show that the proposed approach outperforms the best existing methods in the accuracy of diagnosis when the number of diagnoses and symptoms is large.
arXiv Detail & Related papers (2022-06-02T07:57:17Z) - Uncertainty aware and explainable diagnosis of retinal disease [0.0]
We perform uncertainty analysis of a deep learning model for diagnosis of four retinal diseases.
Explainability shows the features that a system used to make a prediction, while uncertainty awareness is the ability of a system to highlight when it is not sure about its decision.
arXiv Detail & Related papers (2021-01-26T23:37:30Z)
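For the DEviS entry above: subjective logic converts per-class evidence into belief masses plus an explicit vacuity (uncertainty) term. The sketch below uses the standard evidential deep learning formulation (Dirichlet strength S, belief b_k = e_k / S, vacuity u = K / S); it is an assumption that DEviS follows this exact form, and all names here are illustrative.

```python
import torch

def subjective_logic_opinion(evidence):
    """evidence: tensor of shape (..., K) with non-negative per-class evidence."""
    K = evidence.shape[-1]
    alpha = evidence + 1.0                # Dirichlet parameters alpha_k = e_k + 1
    S = alpha.sum(dim=-1, keepdim=True)   # Dirichlet strength
    belief = evidence / S                 # per-class belief mass b_k = e_k / S
    uncertainty = K / S                   # vacuity u = K / S: high when evidence is scarce
    prob = alpha / S                      # expected class probabilities
    return belief, uncertainty.squeeze(-1), prob

# Example: two pixels, 3 classes; the first has little evidence (high uncertainty).
evidence = torch.tensor([[0.1, 0.2, 0.1],
                         [9.0, 0.5, 0.5]])
b, u, p = subjective_logic_opinion(evidence)
print(u)  # ~[0.882, 0.231]
```

Predictions with high vacuity (like the first row) are the kind an uncertainty-aware filter would hold back as unreliable.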
This list is automatically generated from the titles and abstracts of the papers in this site.