On Smart Gaze based Annotation of Histopathology Images for Training of
Deep Convolutional Neural Networks
- URL: http://arxiv.org/abs/2202.02764v1
- Date: Sun, 6 Feb 2022 12:07:12 GMT
- Title: On Smart Gaze based Annotation of Histopathology Images for Training of
Deep Convolutional Neural Networks
- Authors: Komal Mariam, Osama Mohammed Afzal, Wajahat Hussain, Muhammad Umar
Javed, Amber Kiyani, Nasir Rajpoot, Syed Ali Khurram and Hassan Aqeel Khan
- Abstract summary: Eye gaze annotations have the potential to speed up the slide labeling process.
We compare the performance gap between deep object detectors trained using hand-labelled and gaze-labelled data.
- Score: 1.9642257301321773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unavailability of large training datasets is a bottleneck that needs to be
overcome to realize the true potential of deep learning in histopathology
applications. Although slide digitization via whole slide imaging scanners has
increased the speed of data acquisition, labeling of virtual slides requires a
substantial time investment from pathologists. Eye gaze annotations have the
potential to speed up the slide labeling process. This work explores the
viability and timing comparisons of eye gaze labeling compared to conventional
manual labeling for training object detectors. Challenges associated with gaze
based labeling and methods to refine the coarse data annotations for subsequent
object detection are also discussed. Results demonstrate that gaze tracking
based labeling can save valuable pathologist time and delivers good performance
when employed for training a deep object detector. Using the task of
localization of Keratin Pearls in cases of oral squamous cell carcinoma as a
test case, we compare the performance gap between deep object detectors trained
using hand-labelled and gaze-labelled data. On average, gaze-labeling required
$57.6\%$ less time per label than `Bounding-box' based hand-labeling, and
$85\%$ less time per label than `Freehand' labeling.
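The abstract notes that gaze-based annotations are coarse and must be refined before they can train an object detector. As a minimal illustrative sketch (not the paper's actual pipeline), one plausible refinement step is to cluster nearby gaze fixation points and convert each cluster into a padded bounding box; all function names, thresholds, and coordinates below are hypothetical choices for illustration:

```python
# Hypothetical sketch: refining raw gaze fixations into coarse bounding-box
# labels. The paper's actual refinement method may differ substantially.

def cluster_fixations(fixations, radius=50.0):
    """Greedily group (x, y) fixation points that lie within `radius`
    pixels of an existing cluster's running centroid."""
    clusters = []  # each cluster is a list of (x, y) points
    for x, y in fixations:
        placed = False
        for c in clusters:
            cx = sum(p[0] for p in c) / len(c)
            cy = sum(p[1] for p in c) / len(c)
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                c.append((x, y))
                placed = True
                break
        if not placed:
            clusters.append([(x, y)])
    return clusters

def clusters_to_boxes(clusters, pad=20, min_points=2):
    """Turn each sufficiently large cluster into a padded box
    (x_min, y_min, x_max, y_max); tiny clusters are discarded as noise."""
    boxes = []
    for c in clusters:
        if len(c) < min_points:
            continue
        xs = [p[0] for p in c]
        ys = [p[1] for p in c]
        boxes.append((min(xs) - pad, min(ys) - pad,
                      max(xs) + pad, max(ys) + pad))
    return boxes

# Example: two tight groups of fixations plus one stray point.
fix = [(100, 100), (110, 105), (95, 98), (400, 300), (410, 310), (700, 50)]
boxes = clusters_to_boxes(cluster_fixations(fix))
print(boxes)  # two boxes; the isolated fixation is filtered out
```

In practice a density-based method such as DBSCAN would likely replace the greedy grouping, but the principle is the same: aggregate noisy gaze points into region proposals that a detector can consume as labels.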
Related papers
- A Review of Pseudo-Labeling for Computer Vision [2.79239659248295]
Deep neural networks often require large datasets of labeled samples to generalize effectively.
An important area of active research is semi-supervised learning, which instead attempts to utilize large quantities of (easily acquired) unlabeled samples.
In this work we explore a broader interpretation of pseudo-labels within both self-supervised and unsupervised methods.
arXiv Detail & Related papers (2024-08-13T22:17:48Z) - Context Matters: Leveraging Spatiotemporal Metadata for Semi-Supervised Learning on Remote Sensing Images [2.518656729567209]
Current approaches generate pseudo-labels from model predictions for unlabeled samples.
We propose exploiting spatiotemporal meta-information in SSL to improve the quality of pseudo-labels.
We show that adding the available metadata to the predictor's input at test time degrades prediction quality for metadata outside the spatiotemporal distribution of the training set.
arXiv Detail & Related papers (2024-04-29T10:47:37Z) - Weakly Semi-supervised Tool Detection in Minimally Invasive Surgery
Videos [11.61305113932032]
Surgical tool detection is essential for analyzing and evaluating minimally invasive surgery videos.
Large image datasets with instance-level labels are often limited because of the burden of annotation.
In this work, we propose to strike a balance between the extremely costly annotation burden and detection performance.
arXiv Detail & Related papers (2024-01-05T13:05:02Z) - Virtual Category Learning: A Semi-Supervised Learning Method for Dense
Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our findings highlight the utility of VC learning for dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z) - Human-machine Interactive Tissue Prototype Learning for Label-efficient
Histopathology Image Segmentation [18.755759024796216]
Deep neural networks have greatly advanced histopathology image segmentation but usually require abundant data.
We present a label-efficient tissue prototype dictionary building pipeline and propose to use the obtained prototypes to guide histopathology image segmentation.
We show that our human-machine interactive tissue prototype learning method can achieve comparable segmentation performance as the fully-supervised baselines.
arXiv Detail & Related papers (2022-11-26T06:17:21Z) - Semi-supervised Object Detection via Virtual Category Learning [68.26956850996976]
This paper proposes to use confusing samples proactively without label correction.
Specifically, a virtual category (VC) is assigned to each confusing sample.
This is achieved by specifying the embedding distance between the training sample and the virtual category.
arXiv Detail & Related papers (2022-07-07T16:59:53Z) - Self-Supervised Learning as a Means To Reduce the Need for Labeled Data
in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z) - Debiased Pseudo Labeling in Self-Training [77.83549261035277]
Deep neural networks achieve remarkable performances on a wide range of tasks with the aid of large-scale labeled datasets.
To mitigate the requirement for labeled data, self-training is widely used in both academia and industry by pseudo labeling on readily-available unlabeled data.
We propose Debiased, in which the generation and utilization of pseudo labels are decoupled by two independent heads.
arXiv Detail & Related papers (2022-02-15T02:14:33Z) - A Histopathology Study Comparing Contrastive Semi-Supervised and Fully
Supervised Learning [0.0]
We explore self-supervised learning to reduce labeling burdens in computational pathology.
We find that ImageNet pre-trained networks largely outperform the self-supervised representations obtained using Barlow Twins.
arXiv Detail & Related papers (2021-11-10T19:04:08Z) - Learning to Aggregate and Refine Noisy Labels for Visual Sentiment
Analysis [69.48582264712854]
We propose a robust learning method to perform robust visual sentiment analysis.
Our method relies on an external memory to aggregate and filter noisy labels during training.
We establish a benchmark for visual sentiment analysis with label noise using publicly available datasets.
arXiv Detail & Related papers (2021-09-15T18:18:28Z) - Weakly-Supervised Salient Object Detection via Scribble Annotations [54.40518383782725]
We propose a weakly-supervised salient object detection model to learn saliency from scribble labels.
We present a new metric, termed saliency structure measure, to measure the structure alignment of the predicted saliency maps.
Our method not only outperforms existing weakly-supervised/unsupervised methods, but also is on par with several fully-supervised state-of-the-art models.
arXiv Detail & Related papers (2020-03-17T12:59:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.