Learning Nuclei Representations with Masked Image Modelling
- URL: http://arxiv.org/abs/2306.17116v1
- Date: Thu, 29 Jun 2023 17:20:05 GMT
- Title: Learning Nuclei Representations with Masked Image Modelling
- Authors: Piotr Wójcik, Hussein Naji, Adrian Simon, Reinhard Büttner, Katarzyna Bożek
- Abstract summary: Masked image modelling (MIM) is a powerful self-supervised representation learning paradigm.
We show the capacity of MIM to capture rich semantic representations of Haematoxylin & Eosin (H&E)-stained images at the nuclear level.
- Score: 0.41998444721319206
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Masked image modelling (MIM) is a powerful self-supervised representation
learning paradigm, whose potential has not been widely demonstrated in medical
image analysis. In this work, we show the capacity of MIM to capture rich
semantic representations of Haematoxylin & Eosin (H&E)-stained images at the
nuclear level. Inspired by Bidirectional Encoder representation from Image
Transformers (BEiT), we split the images into smaller patches and generate
corresponding discrete visual tokens. In addition to the regular grid-based
patches, typically used in visual Transformers, we introduce patches of
individual cell nuclei. We propose positional encoding of the irregular
distribution of these structures within an image. We pre-train the model in a
self-supervised manner on H&E-stained whole-slide images of diffuse large
B-cell lymphoma, where cell nuclei have been segmented. The pre-training
objective is to recover the original discrete visual tokens of the masked image
on the one hand, and to reconstruct the visual tokens of the masked object
instances on the other. Coupling these two pre-training tasks allows us to
build powerful, context-aware representations of nuclei. Our model generalizes
well and can be fine-tuned on downstream classification tasks, improving cell
classification accuracy on the PanNuke dataset by more than 5% compared to
current instance segmentation methods.
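Putting the pieces of the abstract together, the sketch below shows one way the dual objective could be wired up: grid patches and nucleus crops are embedded jointly, nucleus centroids receive a continuous positional encoding to handle their irregular layout, and a shared head predicts the frozen tokenizer's discrete visual tokens at every masked position. All module names and sizes here are hypothetical stand-ins, not the authors' released code.

```python
# Minimal sketch of the dual masked-token objective, assuming a pre-trained,
# frozen BEiT-style tokenizer that maps each patch to a discrete visual-token
# id. Names (NucleusMIM, mim_loss) and dimensions are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NucleusMIM(nn.Module):
    def __init__(self, dim=256, vocab=8192, n_heads=8, n_layers=4):
        super().__init__()
        self.grid_embed = nn.Linear(3 * 16 * 16, dim)   # flattened 16x16 RGB grid patch
        self.nuc_embed = nn.Linear(3 * 16 * 16, dim)    # nucleus crop resized to 16x16
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        # continuous positional encoding for irregular nucleus centroids (x, y in [0, 1])
        self.pos_mlp = nn.Sequential(nn.Linear(2, dim), nn.GELU(), nn.Linear(dim, dim))
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(dim, vocab)               # predicts discrete visual-token ids

    def forward(self, grid_patches, grid_pos, nuc_patches, nuc_centroids, mask):
        # grid_patches: (B, Ng, 768), nuc_patches: (B, Nn, 768)
        # grid_pos: (B, Ng, 2), nuc_centroids: (B, Nn, 2)
        # mask: (B, Ng + Nn) boolean, True where the patch is masked out
        x = torch.cat([self.grid_embed(grid_patches), self.nuc_embed(nuc_patches)], dim=1)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        x = x + self.pos_mlp(torch.cat([grid_pos, nuc_centroids], dim=1))
        return self.head(self.encoder(x))               # (B, Ng + Nn, vocab)

def mim_loss(logits, token_ids, mask, n_grid):
    # token_ids: discrete ids from the frozen tokenizer, (B, Ng + Nn)
    ce = F.cross_entropy(logits.transpose(1, 2), token_ids, reduction="none")
    grid_loss = (ce[:, :n_grid] * mask[:, :n_grid]).sum() / mask[:, :n_grid].sum()
    nuc_loss = (ce[:, n_grid:] * mask[:, n_grid:]).sum() / mask[:, n_grid:].sum()
    return grid_loss + nuc_loss          # couple the two pre-training tasks
```

Summing the two cross-entropy terms mirrors the coupling described in the abstract: the same encoder learns grid-level context and nucleus-instance structure at once.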
Related papers
- Masked Image Modeling Boosting Semi-Supervised Semantic Segmentation [38.55611683982936]
We introduce a novel class-wise masked image modeling that independently reconstructs different image regions according to their respective classes.
We develop a feature aggregation strategy that minimizes the distances between features corresponding to the masked and visible parts within the same class.
In semantic space, we explore the application of masked image modeling to enhance regularization.
arXiv Detail & Related papers (2024-11-13T16:42:07Z)
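A rough reading of the class-wise idea above: reconstruct masked pixels separately per class, and pull the masked-region feature mean of each class toward its visible-region mean. The sketch below is a hypothetical, simplified rendering of that recipe, not the paper's implementation.

```python
# Hypothetical class-wise masked reconstruction losses; all details illustrative.
import torch
import torch.nn.functional as F

def classwise_mim_losses(pred, target, labels, mask, feats, num_classes):
    # pred, target: (B, C, H, W); labels: (B, H, W) per-pixel class ids
    # mask: (B, H, W) True where masked; feats: (B, D, H, W) encoder features
    recon, consist = 0.0, 0.0
    for c in range(num_classes):
        region = labels == c
        if not region.any():
            continue
        m = (region & mask).unsqueeze(1)          # masked pixels of class c
        recon = recon + (F.mse_loss(pred, target, reduction="none") * m).sum() / m.sum().clamp(min=1)
        vis = (region & ~mask).unsqueeze(1)       # visible pixels of class c
        if vis.any() and m.any():
            f = feats.flatten(2)                  # (B, D, H*W)
            mean_vis = (f * vis.flatten(2)).sum(-1) / vis.flatten(2).sum(-1).clamp(min=1)
            mean_msk = (f * m.flatten(2)).sum(-1) / m.flatten(2).sum(-1).clamp(min=1)
            consist = consist + F.mse_loss(mean_msk, mean_vis)
    return recon, consist
```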
- Pre-training with Random Orthogonal Projection Image Modeling [32.667183132025094]
Masked Image Modeling (MIM) is a powerful self-supervised strategy for visual pre-training without the use of labels.
We propose an image modeling framework based on Random Orthogonal Projection Image Modeling (ROPIM).
ROPIM reduces spatial token information under a guaranteed bound on the noise variance, and can be seen as masking the entire spatial image area with locally varying masking degrees.
arXiv Detail & Related papers (2023-10-28T15:42:07Z)
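One way to read "masking by random orthogonal projection" is to project patch vectors onto a random low-rank orthonormal basis instead of zeroing them, so information is removed gradually rather than all-or-nothing. The sketch below is that interpretation, not the paper's exact construction.

```python
# Hypothetical masking-by-projection: keep_dim controls the "masking degree".
import torch

def random_orthogonal_projection(patches, keep_dim):
    # patches: (B, N, D) flattened patch vectors; keep_dim < D
    B, N, D = patches.shape
    q, _ = torch.linalg.qr(torch.randn(D, D))   # random orthogonal basis
    basis = q[:, :keep_dim]                     # (D, keep_dim) orthonormal columns
    proj = basis @ basis.T                      # rank-keep_dim projector, P = P^2 = P^T
    return patches @ proj                       # project every patch onto the subspace

x = torch.randn(2, 196, 768)
x_masked = random_orthogonal_projection(x, keep_dim=384)
print(x_masked.shape)  # torch.Size([2, 196, 768])
```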
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
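A module that "learns key spatial transformations to warp original images" can be sketched as a small network predicting a per-image affine matrix, applied with a differentiable warp. This is a generic spatial-transformer-style stand-in, not the AAT design from the AC-Former paper.

```python
# Hypothetical adaptive affine module; initialized to the identity transform.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveAffine(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 6),
        )
        # start from the identity so training begins with no warp
        self.net[-1].weight.data.zero_()
        self.net[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.net(x).view(-1, 2, 3)                 # per-image affine matrix
        grid = F.affine_grid(theta, x.shape, align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

imgs = torch.randn(4, 3, 256, 256)
print(AdaptiveAffine()(imgs).shape)  # torch.Size([4, 3, 256, 256])
```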
- Not All Image Regions Matter: Masked Vector Quantization for Autoregressive Image Generation [78.13793505707952]
Existing autoregressive models follow the two-stage generation paradigm that first learns a codebook in the latent space for image reconstruction and then completes the image generation autoregressively based on the learned codebook.
We propose a novel two-stage framework consisting of a Masked Quantization VAE (MQ-VAE) and a Stackformer for modeling the redundancy.
arXiv Detail & Related papers (2023-05-23T02:15:53Z)
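The gist of masked quantization, as summarized above, is that not all regions deserve a codebook entry: score positions for importance and vector-quantize only the informative ones. The sketch below is one loose reading of that idea, with a toy importance score, not the MQ-VAE implementation.

```python
# Hypothetical masked quantization: keep top-k positions, quantize just those.
import torch
import torch.nn as nn

def masked_quantize(feats, codebook, keep):
    # feats: (B, N, D) encoder features per position; codebook: (K, D)
    scores = feats.norm(dim=-1)                 # toy importance score
    idx = scores.topk(keep, dim=1).indices      # (B, keep) positions kept
    kept = torch.gather(feats, 1, idx.unsqueeze(-1).expand(-1, -1, feats.size(-1)))
    # nearest-neighbour codebook assignment for the kept positions only
    d = torch.cdist(kept, codebook.unsqueeze(0).expand(kept.size(0), -1, -1))
    return idx, d.argmin(-1)                    # positions and discrete codes

feats = torch.randn(2, 256, 64)
codebook = nn.Embedding(1024, 64).weight.detach()
idx, codes = masked_quantize(feats, codebook, keep=64)
print(idx.shape, codes.shape)  # torch.Size([2, 64]) torch.Size([2, 64])
```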
- Cross-modal tumor segmentation using generative blending augmentation and self training [1.6440045168835438]
We propose a cross-modal segmentation method based on conventional image synthesis boosted by a new data augmentation technique.
Generative Blending Augmentation (GBA) learns representative generative features from a single training image to realistically diversify tumor appearances.
The proposed solution ranked first for vestibular schwannoma (VS) segmentation during the validation and test phases of the MICCAI CrossMoDA 2022 challenge.
arXiv Detail & Related papers (2023-04-04T11:01:46Z)
- Improving Masked Autoencoders by Learning Where to Mask [65.89510231743692]
Masked image modeling is a promising self-supervised learning method for visual data.
We present AutoMAE, a framework that uses Gumbel-Softmax to interlink an adversarially-trained mask generator and a mask-guided image modeling process.
In our experiments, AutoMAE is shown to provide effective pretraining models on standard self-supervised benchmarks and downstream tasks.
arXiv Detail & Related papers (2023-03-12T05:28:55Z)
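The key trick in a learned mask generator is making the hard keep/mask decision differentiable, which Gumbel-Softmax with straight-through gradients provides. The sketch below shows that mechanism in isolation; the module name and sizes are stand-ins, not the AutoMAE architecture.

```python
# Hypothetical differentiable mask generator using Gumbel-Softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskGenerator(nn.Module):
    def __init__(self, dim=768):
        super().__init__()
        self.score = nn.Linear(dim, 2)   # logits for (visible, masked) per patch

    def forward(self, tokens, tau=1.0):
        # tokens: (B, N, dim); returns a hard 0/1 mask with straight-through grads
        logits = self.score(tokens)
        onehot = F.gumbel_softmax(logits, tau=tau, hard=True)  # (B, N, 2)
        return onehot[..., 1]            # 1.0 where the patch is masked

tokens = torch.randn(2, 196, 768)
mask = MaskGenerator()(tokens)
print(mask.shape)  # torch.Size([2, 196]), values in {0., 1.}
```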
- The Devil is in the Frequency: Geminated Gestalt Autoencoder for Self-Supervised Visual Pre-Training [13.087987450384036]
We present a new Masked Image Modeling (MIM) method, termed Geminated Gestalt Autoencoder (Ge²-AE), for visual pre-training.
Specifically, we equip our model with geminated decoders in charge of reconstructing image contents from both pixel and frequency space.
arXiv Detail & Related papers (2022-04-18T09:22:55Z)
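Reconstructing "from both pixel and frequency space" amounts to adding a second loss term on the image's 2D Fourier spectrum. The sketch below shows that twin-target idea in its simplest form, comparing magnitude spectra; it is an illustrative reading, not the Ge²-AE decoders.

```python
# Hypothetical pixel-plus-frequency reconstruction loss.
import torch
import torch.nn.functional as F

def pixel_and_frequency_loss(pred, target):
    # pred, target: (B, C, H, W)
    pix = F.l1_loss(pred, target)
    # compare magnitude spectra so the loss stays real-valued
    freq = F.l1_loss(torch.fft.fft2(pred).abs(), torch.fft.fft2(target).abs())
    return pix + freq

pred, target = torch.randn(2, 3, 32, 32), torch.randn(2, 3, 32, 32)
print(pixel_and_frequency_loss(pred, target))
```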
- Corrupted Image Modeling for Self-Supervised Visual Pre-Training [103.99311611776697]
We introduce Corrupted Image Modeling (CIM) for self-supervised visual pre-training.
CIM uses an auxiliary generator with a small trainable BEiT to corrupt the input image instead of using artificial mask tokens.
After pre-training, the enhancer can be used as a high-capacity visual encoder for downstream tasks.
arXiv Detail & Related papers (2022-02-07T17:59:04Z)
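The corruption step replaces randomly chosen patches with a generator's plausible proposals instead of artificial mask tokens, so the downstream model always sees realistic inputs. The sketch below illustrates that substitution with a toy stand-in generator; it is not the CIM/BEiT components.

```python
# Hypothetical corruption-by-generator step; names are stand-ins.
import torch

def corrupt_with_generator(patches, generator, ratio=0.4):
    # patches: (B, N, D) flattened image patches
    B, N, D = patches.shape
    n_corrupt = int(N * ratio)
    idx = torch.rand(B, N).argsort(dim=1)[:, :n_corrupt]   # random positions
    fake = generator(patches)                              # plausible proposals
    out = patches.clone()
    gather_idx = idx.unsqueeze(-1).expand(-1, -1, D)
    out.scatter_(1, gather_idx, torch.gather(fake, 1, gather_idx))
    return out, idx

gen = torch.nn.Sequential(torch.nn.Linear(768, 768))       # toy stand-in generator
x = torch.randn(2, 196, 768)
x_corrupt, idx = corrupt_with_generator(x, gen)
print(x_corrupt.shape, idx.shape)  # torch.Size([2, 196, 768]) torch.Size([2, 78])
```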
- Less is More: Pay Less Attention in Vision Transformers [61.05787583247392]
The Less attention vIsion Transformer (LIT) builds upon the fact that convolutions, fully-connected layers, and self-attention have almost equivalent mathematical expressions for processing image patch sequences.
The proposed LIT achieves promising performance on image recognition tasks, including image classification, object detection and instance segmentation.
arXiv Detail & Related papers (2021-05-29T05:26:07Z)
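Part of the LIT premise can be checked concretely: a fully-connected layer applied token-wise is exactly a 1x1 convolution over the token grid, so the two are interchangeable when processing patch sequences. A small numerical check:

```python
# Verify that a token-wise Linear layer equals a 1x1 Conv2d with shared weights.
import torch
import torch.nn as nn

fc = nn.Linear(64, 64)
conv = nn.Conv2d(64, 64, kernel_size=1)
conv.weight.data = fc.weight.data.view(64, 64, 1, 1).clone()
conv.bias.data = fc.bias.data.clone()

x = torch.randn(2, 14 * 14, 64)                    # (B, N, D) patch tokens
y_fc = fc(x)
y_conv = conv(x.transpose(1, 2).view(2, 64, 14, 14)).flatten(2).transpose(1, 2)
print(torch.allclose(y_fc, y_conv, atol=1e-5))     # True
```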
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
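The ensembling setup reduces to running the classifier on the real image plus several synthetic variants and averaging the predictions. The sketch below uses a toy noise "view" as a stand-in for a StyleGAN2-based augmentation pipeline; all names are hypothetical.

```python
# Hypothetical ensembling over generated views: average logits across variants.
import torch

def ensemble_predict(classifier, image, make_view, n_views=4):
    # image: (B, C, H, W); make_view(image) returns one synthetic variant
    views = [image] + [make_view(image) for _ in range(n_views)]
    logits = torch.stack([classifier(v) for v in views])   # (n_views+1, B, K)
    return logits.mean(dim=0)                              # averaged prediction

classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
noise_view = lambda img: img + 0.05 * torch.randn_like(img)  # toy stand-in "view"
preds = ensemble_predict(classifier, torch.randn(2, 3, 32, 32), noise_view)
print(preds.shape)  # torch.Size([2, 10])
```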
This list is automatically generated from the titles and abstracts of the papers on this site.