Self-supervised driven consistency training for annotation efficient
histopathology image analysis
- URL: http://arxiv.org/abs/2102.03897v2
- Date: Tue, 9 Feb 2021 23:26:44 GMT
- Title: Self-supervised driven consistency training for annotation efficient
histopathology image analysis
- Authors: Chetan L. Srinidhi, Seung Wook Kim, Fu-Der Chen, Anne L. Martel
- Abstract summary: Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
- We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data.
- Score: 13.005873872821066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training a neural network with a large labeled dataset is still a dominant
paradigm in computational histopathology. However, obtaining such exhaustive
manual annotations is often expensive, laborious, and prone to inter- and
intra-observer variability. While recent self-supervised and semi-supervised
methods can alleviate this need by learning unsupervised feature
representations, they still struggle to generalize well to downstream tasks
when the number of labeled instances is small. In this work, we overcome this
challenge by leveraging both task-agnostic and task-specific unlabeled data
based on two novel strategies: i) a self-supervised pretext task that harnesses
the underlying multi-resolution contextual cues in histology whole-slide images
to learn a powerful supervisory signal for unsupervised representation
learning; ii) a new teacher-student semi-supervised consistency paradigm that
learns to effectively transfer the pretrained representations to downstream
tasks based on prediction consistency with the task-specific unlabeled data.
We carry out extensive validation experiments on three histopathology benchmark
datasets across two classification tasks and one regression task, i.e., tumor
metastasis detection, tissue type classification, and tumor cellularity
quantification. Under limited-label data, the proposed method yields tangible
improvements that are close to, or even exceed, other state-of-the-art
self-supervised and supervised baselines. Furthermore, we empirically show that
the idea of bootstrapping the self-supervised pretrained features is an
effective way to improve the task-specific semi-supervised learning on standard
benchmarks. Code and pretrained models will be made available at:
https://github.com/srinidhiPY/SSL_CR_Histo
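
The two strategies in the abstract can be summarized in a short sketch. The following is a minimal PyTorch-style illustration, not the authors' released code: the resolution-ordering head, the magnification pairing, and the MSE consistency loss with an EMA teacher are assumptions chosen to mirror the description above; the exact formulation in the paper and in SSL_CR_Histo may differ.

    # Minimal sketch (assumed PyTorch-style pseudocode, not the released code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ResolutionOrderPretext(nn.Module):
        # Self-supervised pretext task (assumption): given two patches of the
        # same WSI region at different magnifications, predict which resolution
        # ordering they were presented in.
        def __init__(self, encoder, feat_dim, num_orderings):
            super().__init__()
            self.encoder = encoder                      # shared CNN backbone
            self.order_head = nn.Linear(2 * feat_dim, num_orderings)

        def forward(self, patch_lo, patch_hi):
            feats = torch.cat([self.encoder(patch_lo), self.encoder(patch_hi)], dim=1)
            return self.order_head(feats)               # logits over orderings

    def consistency_step(student, teacher, x_labeled, y_labeled, x_unlabeled,
                         optimizer, cons_weight=1.0, ema_decay=0.99):
        # One teacher-student update: cross-entropy on the few labeled patches
        # plus a prediction-consistency term on task-specific unlabeled patches.
        optimizer.zero_grad()
        sup_loss = F.cross_entropy(student(x_labeled), y_labeled)

        with torch.no_grad():                           # teacher provides targets only
            teacher_prob = F.softmax(teacher(x_unlabeled), dim=1)
        student_prob = F.softmax(student(x_unlabeled), dim=1)
        cons_loss = F.mse_loss(student_prob, teacher_prob)

        loss = sup_loss + cons_weight * cons_loss
        loss.backward()
        optimizer.step()

        # Teacher tracks the student with an exponential moving average
        # (assumed Mean-Teacher-style update; the paper's schedule may differ).
        with torch.no_grad():
            for t_p, s_p in zip(teacher.parameters(), student.parameters()):
                t_p.mul_(ema_decay).add_(s_p, alpha=1.0 - ema_decay)
        return loss.item()

In this sketch, pretext training with ResolutionOrderPretext would initialize the encoder, whose weights then seed both the student and the teacher before the semi-supervised consistency fine-tuning stage.
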
Related papers
- Enhancing Hyperspectral Image Prediction with Contrastive Learning in Low-Label Regime [0.810304644344495]
Self-supervised contrastive learning is an effective approach for addressing the challenge of limited labelled data.
We evaluate the method's performance for both the single-label and multi-label classification tasks.
arXiv Detail & Related papers (2024-10-10T10:20:16Z) - Consistency-Based Semi-supervised Evidential Active Learning for
Diagnostic Radiograph Classification [2.3545156585418328]
We introduce a novel Consistency-based Semi-supervised Evidential Active Learning framework (CSEAL)
We leverage predictive uncertainty based on theories of evidence and subjective logic to develop an end-to-end integrated approach.
Our approach can substantially improve accuracy on rarer abnormalities with fewer labelled samples.
arXiv Detail & Related papers (2022-09-05T09:28:31Z) - Leveraging Ensembles and Self-Supervised Learning for Fully-Unsupervised
Person Re-Identification and Text Authorship Attribution [77.85461690214551]
Learning from fully-unlabeled data is challenging in Multimedia Forensics problems, such as Person Re-Identification and Text Authorship Attribution.
Recent self-supervised learning methods have proven effective when dealing with fully-unlabeled data in cases where the underlying classes have significant semantic differences.
We propose a strategy to tackle Person Re-Identification and Text Authorship Attribution by enabling learning from unlabeled data even when samples from different classes are not prominently diverse.
arXiv Detail & Related papers (2022-02-07T13:08:11Z) - Boosting Supervised Learning Performance with Co-training [15.986635379046602]
We propose a new lightweight self-supervised learning framework that can boost supervised learning performance with minimal additional cost.
Our results show that both self-supervised tasks can improve the accuracy of the supervised task and, at the same time, demonstrate strong domain adaptation capability.
arXiv Detail & Related papers (2021-11-18T17:01:17Z) - A Histopathology Study Comparing Contrastive Semi-Supervised and Fully
Supervised Learning [0.0]
We explore self-supervised learning to reduce labeling burdens in computational pathology.
We find that ImageNet pre-trained networks largely outperform the self-supervised representations obtained using Barlow Twins.
arXiv Detail & Related papers (2021-11-10T19:04:08Z) - WSSOD: A New Pipeline for Weakly- and Semi-Supervised Object Detection [75.80075054706079]
We propose a weakly- and semi-supervised object detection framework (WSSOD)
An agent detector is first trained on a joint dataset and then used to predict pseudo bounding boxes on weakly-annotated images.
The proposed framework demonstrates remarkable performance on the PASCAL-VOC and MSCOCO benchmarks, achieving performance comparable to that obtained in fully-supervised settings.
arXiv Detail & Related papers (2021-05-21T11:58:50Z) - Meta-learning One-class Classifiers with Eigenvalue Solvers for
Supervised Anomaly Detection [55.888835686183995]
We propose a neural network-based meta-learning method for supervised anomaly detection.
We experimentally demonstrate that the proposed method achieves better performance than existing anomaly detection and few-shot learning methods.
arXiv Detail & Related papers (2021-03-01T01:43:04Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z) - A Trainable Optimal Transport Embedding for Feature Aggregation and its
Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z) - Self-Supervised Prototypical Transfer Learning for Few-Shot
Classification [11.96734018295146]
Self-supervised transfer learning approach ProtoTransfer outperforms state-of-the-art unsupervised meta-learning methods on few-shot tasks.
In few-shot experiments with domain shift, our approach even has comparable performance to supervised methods, but requires orders of magnitude fewer labels.
arXiv Detail & Related papers (2020-06-19T19:00:11Z) - Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)