cRedAnno+: Annotation Exploitation in Self-Explanatory Lung Nodule Diagnosis
- URL: http://arxiv.org/abs/2210.16097v1
- Date: Fri, 28 Oct 2022 12:44:31 GMT
- Title: cRedAnno+: Annotation Exploitation in Self-Explanatory Lung Nodule Diagnosis
- Authors: Jiahao Lu, Chong Yin, Kenny Erleben, Michael Bachmann Nielsen, Sune Darkner
- Abstract summary: cRedAnno achieves competitive performance with considerably reduced annotation needs.
We propose an annotation exploitation mechanism by conducting semi-supervised active learning.
The proposed approach achieves comparable or even higher malignancy prediction accuracy with 10x fewer annotations.
- Score: 8.582182186207671
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, attempts have been made to reduce annotation requirements in
feature-based self-explanatory models for lung nodule diagnosis. As a
representative, cRedAnno achieves competitive performance with considerably
reduced annotation needs by introducing self-supervised contrastive learning to
do unsupervised feature extraction. However, it exhibits unstable performance
under scarce annotation conditions. To improve the accuracy and robustness of
cRedAnno, we propose an annotation exploitation mechanism by conducting
semi-supervised active learning in the learned semantically meaningful space to
jointly utilise the extracted features, annotations, and unlabelled data. The
proposed approach achieves comparable or even higher malignancy prediction
accuracy with 10x fewer annotations, meanwhile showing better robustness and
nodule attribute prediction accuracy. Our complete code is openly available at
https://github.com/diku-dk/credanno.
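The annotation-exploitation idea summarised above — working in a learned, semantically meaningful feature space, querying the most ambiguous unlabelled samples for annotation, and pseudo-labelling the confident remainder — can be sketched in plain NumPy. This is an illustrative margin-based sketch only, not the authors' implementation; the function and parameter names (`active_learning_round`, `conf_margin`) are hypothetical, and cRedAnno+'s actual mechanism differs in detail.

```python
import numpy as np

def active_learning_round(feats, labels, labeled_idx, query_size=2, conf_margin=0.2):
    """One round of margin-based semi-supervised active learning.

    feats:       (N, D) array of self-supervised features.
    labels:      (N,) int array; only entries at labeled_idx are trusted.
    labeled_idx: indices of the annotated samples.
    Returns (query, confident, pseudo): indices to annotate next,
    confidently pseudo-labelled indices, and their pseudo-labels.
    """
    classes = np.unique(labels[labeled_idx])
    # Nearest-centroid classifier fitted on the labelled subset only.
    centroids = np.stack([
        feats[labeled_idx][labels[labeled_idx] == c].mean(axis=0) for c in classes
    ])
    dists = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    rows = np.arange(len(feats))
    # Margin = distance gap between the two nearest centroids; small = ambiguous.
    margin = dists[rows, order[:, 1]] - dists[rows, order[:, 0]]
    unlabeled = np.setdiff1d(rows, labeled_idx)
    # Query the most ambiguous unlabelled samples for human annotation.
    query = unlabeled[np.argsort(margin[unlabeled])[:query_size]]
    # Pseudo-label the confidently classified remainder.
    confident = unlabeled[margin[unlabeled] > conf_margin]
    pseudo = classes[order[confident, 0]]
    return query, confident, pseudo
```

Each round spends the annotation budget where the learned space is least certain, while the confident pseudo-labels let the unlabelled data contribute to training — the same trade-off that underlies the 10x annotation reduction reported above.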
Related papers
- Predicting generalization performance with correctness discriminators [64.00420578048855]
We present a novel model that establishes upper and lower bounds on the accuracy, without requiring gold labels for the unseen data.
We show across a variety of tagging, parsing, and semantic parsing tasks that the gold accuracy is reliably between the predicted upper and lower bounds.
arXiv Detail & Related papers (2023-11-15T22:43:42Z) - XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z) - Don't Miss Out on Novelty: Importance of Novel Features for Deep Anomaly Detection [64.21963650519312]
Anomaly Detection (AD) is a critical task that involves identifying observations that do not conform to a learned model of normality.
We propose a novel approach to AD using explainability to capture such novel features as unexplained observations in the input space.
Our approach establishes a new state-of-the-art across multiple benchmarks, handling diverse anomaly types.
arXiv Detail & Related papers (2023-10-01T21:24:05Z) - Feature Separation and Recalibration for Adversarial Robustness [18.975320671203132]
We propose a novel, easy-to-verify approach named Feature Separation and Recalibration.
It recalibrates the malicious, non-robust activations for more robust feature maps through Separation and Recalibration.
It improves the robustness of existing adversarial training methods by up to 8.57% with small computational overhead.
arXiv Detail & Related papers (2023-03-24T07:43:57Z) - Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z) - Reducing Annotation Need in Self-Explanatory Models for Lung Nodule Diagnosis [10.413504599164106]
We propose cRedAnno, a data-/annotation-efficient self-explanatory approach for lung nodule diagnosis.
cRedAnno considerably reduces the annotation need by introducing self-supervised contrastive learning.
Visualisation of the learned space indicates that the correlation between the clustering of malignancy and nodule attributes coincides with clinical knowledge.
arXiv Detail & Related papers (2022-06-27T20:01:41Z) - Improving the Adversarial Robustness of NLP Models by Information Bottleneck [112.44039792098579]
Non-robust features can be easily manipulated by adversaries to fool NLP models.
In this study, we explore the feasibility of capturing task-specific robust features, while eliminating the non-robust ones by using the information bottleneck theory.
We show that the models trained with our information bottleneck-based method are able to achieve a significant improvement in robust accuracy.
arXiv Detail & Related papers (2022-06-11T12:12:20Z) - Spread Spurious Attribute: Improving Worst-group Accuracy with Spurious Attribute Estimation [72.92329724600631]
We propose a pseudo-attribute-based algorithm, coined Spread Spurious Attribute, for improving the worst-group accuracy.
Our experiments on various benchmark datasets show that our algorithm consistently outperforms the baseline methods.
We also demonstrate that the proposed SSA can achieve comparable performances to methods using full (100%) spurious attribute supervision.
arXiv Detail & Related papers (2022-04-05T09:08:30Z) - Towards Explainable End-to-End Prostate Cancer Relapse Prediction from H&E Images Combining Self-Attention Multiple Instance Learning with a Recurrent Neural Network [0.0]
We propose an explainable cancer relapse prediction network (eCaReNet) and show that end-to-end learning without strong annotations offers state-of-the-art performance.
Our model is well-calibrated and outputs survival curves as well as a risk score and group per patient.
arXiv Detail & Related papers (2021-11-26T11:45:08Z) - Scribble-Supervised Semantic Segmentation by Uncertainty Reduction on Neural Representation and Self-Supervision on Neural Eigenspace [21.321005898976253]
Scribble-supervised semantic segmentation has gained much attention recently for its promising performance without high-quality annotations.
This work aims to achieve semantic segmentation by scribble annotations directly without extra information and other limitations.
We propose holistic operations, including minimizing entropy and a network embedded random walk on neural representation to reduce uncertainty.
arXiv Detail & Related papers (2021-02-19T12:33:57Z) - Weakly Supervised Vessel Segmentation in X-ray Angiograms by Self-Paced Learning from Noisy Labels with Suggestive Annotation [12.772031281511023]
We propose a weakly supervised training framework that learns from noisy pseudo labels generated from automatic vessel enhancement.
A typical self-paced learning scheme is used to make the training process robust against label noise.
We show that our proposed framework achieves comparable accuracy to fully supervised learning.
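The self-paced scheme summarised above — letting the model learn from easy, likely-clean labels first and admitting harder (possibly noisy) ones only as training matures — reduces in its simplest form to a growing loss threshold. The sketch below is a minimal, hypothetical parameterisation (`lam0`, `growth` are illustrative names), not the paper's exact schedule.

```python
import numpy as np

def self_paced_weights(losses, epoch, lam0=0.5, growth=0.2):
    """Binary self-paced sample weights.

    losses: (N,) per-sample training losses from the current model.
    epoch:  current epoch; the admission threshold loosens over time.
    Returns (N,) weights in {0, 1}: 1 keeps a sample in the loss,
    0 excludes it as too hard (likely label noise) for now.
    """
    lam = lam0 + growth * epoch  # "age" parameter grows as training matures
    return (losses < lam).astype(float)
```

Early on, only low-loss (easy) samples contribute, which keeps gradients from noisy pseudo-labels out of the update; as the threshold grows, progressively harder samples are admitted once the model is stable enough to absorb them.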
arXiv Detail & Related papers (2020-05-27T13:55:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.