Robust Interactive Semantic Segmentation of Pathology Images with
Minimal User Input
- URL: http://arxiv.org/abs/2108.13368v1
- Date: Mon, 30 Aug 2021 16:43:03 GMT
- Title: Robust Interactive Semantic Segmentation of Pathology Images with
Minimal User Input
- Authors: Mostafa Jahanifar, Neda Zamani Tajeddin, Navid Alemi Koohbanani and
Nasir Rajpoot
- Abstract summary: We propose an efficient interactive segmentation network that requires minimum input from the user to accurately annotate different tissue types in the histology image.
We show that our proposed method not only speeds up the interactive annotation process but also outperforms the existing automatic and interactive region segmentation models.
- Score: 1.5328185694137677
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: From the simple measurement of tissue attributes in pathology workflow to
designing an explainable diagnostic/prognostic AI tool, access to accurate
semantic segmentation of tissue regions in histology images is a prerequisite.
However, delineating different tissue regions manually is a laborious,
time-consuming and costly task that requires expert knowledge. On the other
hand, the state-of-the-art automatic deep learning models for semantic
segmentation require lots of annotated training data and there are only a
limited number of tissue region annotated images publicly available. To obviate
this issue in computational pathology projects and collect large-scale region
annotations efficiently, we propose an efficient interactive segmentation
network that requires minimum input from the user to accurately annotate
different tissue types in the histology image. The user is only required to
draw a simple squiggle inside each region of interest, which is then used as
the guiding signal for the model. To deal with the complex appearance and
amorphous geometry of different tissue regions, we introduce several automatic and
minimalistic guiding signal generation techniques that help the model to become
robust against the variation in the user input. By experimenting on a dataset
of breast cancer images, we show that our proposed method not only speeds up
the interactive annotation process but also outperforms the existing automatic
and interactive region segmentation models.
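The core idea of squiggle-based interaction is to convert the user's freehand stroke into a dense guidance map that is fed to the network alongside the image. The paper's exact signal-generation techniques are not specified here, so the following is only a minimal sketch of one common variant: rasterizing the squiggle points into a soft Gaussian map and concatenating it as an extra input channel. The function names and the `sigma` parameter are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def squiggle_to_guidance(points, shape, sigma=5.0):
    """Rasterize user squiggle points into a soft guidance map.

    points: iterable of (row, col) coordinates along the drawn squiggle.
    shape:  (H, W) of the image.
    Returns an (H, W) float32 map in [0, 1], peaking on the squiggle.
    """
    h, w = shape
    rr, cc = np.mgrid[0:h, 0:w]
    guidance = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        # Gaussian bump centred on each squiggle point; max-combine so
        # overlapping bumps do not exceed 1.0
        bump = np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * sigma ** 2))
        guidance = np.maximum(guidance, bump.astype(np.float32))
    return guidance

def stack_input(image, guidance):
    # Concatenate guidance as an extra channel: (H, W, 3) -> (H, W, 4),
    # the stacked tensor is what a guidance-aware network would consume
    return np.concatenate([image, guidance[..., None]], axis=-1)
```

A soft map like this, rather than a hard binary mask, is one way a model can be made tolerant to variation in where exactly the user draws the stroke.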
Related papers
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - Unsupervised Segmentation of Fetal Brain MRI using Deep Learning
Cascaded Registration [2.494736313545503]
Traditional deep learning-based automatic segmentation requires extensive training data with ground-truth labels.
We propose a novel method based on multi-atlas segmentation, that accurately segments multiple tissues without relying on labeled data for training.
Our method employs a cascaded deep learning network for 3D image registration, which computes small, incremental deformations to the moving image to align it precisely with the fixed image.
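Cascaded registration of this kind accumulates several small deformations into one overall warp. The abstract does not give the composition rule, so the snippet below is only a hedged 2D illustration of the standard displacement-field composition, total(x) = d1(x) + d2(x + d1(x)), with nearest-neighbour sampling; the function name and field layout are assumptions for the sketch.

```python
import numpy as np

def compose_fields(d1, d2):
    """Compose two dense 2D displacement fields of shape (H, W, 2).

    The composed field maps x -> x + d1(x) + d2(x + d1(x)), so several
    small, incremental deformations accumulate into a single warp.
    Nearest-neighbour sampling keeps the sketch dependency-free.
    """
    h, w, _ = d1.shape
    grid = np.stack(np.mgrid[0:h, 0:w], axis=-1).astype(np.float32)
    mid = grid + d1                        # positions after the first step
    ri = np.clip(np.rint(mid[..., 0]).astype(int), 0, h - 1)
    ci = np.clip(np.rint(mid[..., 1]).astype(int), 0, w - 1)
    return d1 + d2[ri, ci]                 # sample d2 at the moved positions
```

In a cascade, this composition is applied repeatedly, each stage's network predicting only a small residual deformation on top of the accumulated warp.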
arXiv Detail & Related papers (2023-07-07T13:17:12Z) - CGAM: Click-Guided Attention Module for Interactive Pathology Image
Segmentation via Backpropagating Refinement [8.590026259176806]
Tumor region segmentation is an essential task for the quantitative analysis of digital pathology.
Recent deep neural networks have shown state-of-the-art performance in various image-segmentation tasks.
We propose an interactive segmentation method that allows users to refine the output of deep neural networks through click-type user interactions.
arXiv Detail & Related papers (2023-07-03T13:45:24Z) - Weakly supervised segmentation with point annotations for histopathology
images via contrast-based variational model [7.021021047695508]
We propose a contrast-based variational model to generate segmentation results for histopathology images.
The proposed method considers the common characteristics of target regions in histopathology images and can be trained in an end-to-end manner.
It can generate more regionally consistent and smoother boundary segmentation, and is more robust to unlabeled 'novel' regions.
arXiv Detail & Related papers (2023-04-07T10:12:21Z) - Self-Supervised Correction Learning for Semi-Supervised Biomedical Image
Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z) - Valuing Vicinity: Memory attention framework for context-based semantic
segmentation in histopathology [0.8866112270350612]
The identification of detailed types of tissue is crucial for providing personalized cancer therapies.
We propose a patch neighbour attention mechanism to query the neighbouring tissue context from a patch embedding memory bank.
Our memory attention framework (MAF) mimics a pathologist's annotation procedure -- zooming out and considering surrounding tissue context.
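Querying neighbouring tissue context from a memory bank is, at its core, an attention operation over stored patch embeddings. The MAF internals are not detailed in this summary, so the following is only a minimal sketch of scaled dot-product attention from one centre-patch embedding over its neighbours' embeddings; the function name and shapes are illustrative assumptions.

```python
import numpy as np

def neighbour_attention(query, memory):
    """Attend from one patch embedding over neighbouring patch embeddings.

    query:  (d,) embedding of the centre patch.
    memory: (n, d) embeddings of surrounding patches from a memory bank.
    Returns a (d,) context vector: softmax(memory @ query / sqrt(d)) @ memory.
    """
    d = query.shape[0]
    scores = memory @ query / np.sqrt(d)     # similarity to each neighbour
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ memory                  # weighted mix of neighbour embeddings
```

The resulting context vector can then be fused with the centre patch's own embedding, mimicking how a pathologist zooms out to consult surrounding tissue before labelling a region.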
arXiv Detail & Related papers (2022-10-21T08:49:30Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z) - Graph Neural Networks for Unsupervised Domain Adaptation of
Histopathological Image Analytics [22.04114134677181]
We present a novel method for the unsupervised domain adaptation for histological image analysis.
It is based on a backbone for embedding images into a feature space, and a graph neural layer for propagating the supervision signals of images with labels.
In experiments, our method achieves state-of-the-art performance on four public datasets.
arXiv Detail & Related papers (2020-08-21T04:53:44Z) - Towards Unsupervised Learning for Instrument Segmentation in Robotic
Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without the need to acquire expensive annotations.
We test our proposed method on Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)