Improving Stain Invariance of CNNs for Segmentation by Fusing Channel
Attention and Domain-Adversarial Training
- URL: http://arxiv.org/abs/2304.11445v1
- Date: Sat, 22 Apr 2023 16:54:37 GMT
- Title: Improving Stain Invariance of CNNs for Segmentation by Fusing Channel
Attention and Domain-Adversarial Training
- Authors: Kudaibergen Abutalip, Numan Saeed, Mustaqeem Khan, Abdulmotaleb El
Saddik
- Abstract summary: Variability in staining protocols, such as different slide preparation techniques, chemicals, and scanner configurations, can result in a diverse set of whole slide images (WSIs).
This distribution shift can negatively impact the performance of deep learning models on unseen samples.
We propose a method for improving the generalizability of convolutional neural networks (CNNs) to stain changes in a single-source setting for semantic segmentation.
- Score: 5.501810688265425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Variability in staining protocols, such as different slide preparation
techniques, chemicals, and scanner configurations, can result in a diverse set
of whole slide images (WSIs). This distribution shift can negatively impact the
performance of deep learning models on unseen samples, presenting a significant
challenge for developing new computational pathology applications. In this
study, we propose a method for improving the generalizability of convolutional
neural networks (CNNs) to stain changes in a single-source setting for semantic
segmentation. Recent studies indicate that style features mainly exist as
covariances in earlier network layers. We design a channel attention mechanism
based on these findings that detects stain-specific features, and we modify the
previously proposed stain-invariant training scheme. We reweigh the outputs of
earlier layers and pass them to the stain-adversarial training branch. We
evaluate our method on multi-center, multi-stain datasets and demonstrate its
effectiveness through interpretability analysis. Our approach achieves
substantial improvements over baselines and competitive performance compared to
other methods, as measured by various evaluation metrics. We also show that
combining our method with stain augmentation leads to mutually beneficial
results and outperforms other techniques. Overall, our study makes significant
contributions to the field of computational pathology.
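The reweighting step described in the abstract can be pictured as a squeeze-and-excitation style channel gate: pool each channel to a scalar, pass the pooled vector through a small bottleneck, and scale each channel by the resulting sigmoid weight. The sketch below is a minimal numpy illustration under assumed shapes, not the authors' implementation; all weights, names, and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feats, w1, w2):
    """Squeeze-and-excitation style reweighting of a (C, H, W) feature map.

    Squeeze: global average pool per channel -> (C,)
    Excite:  a two-layer bottleneck with a sigmoid gate -> per-channel weights
    Scale:   multiply each channel by its gate weight.
    """
    squeezed = feats.mean(axis=(1, 2))        # (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)   # ReLU bottleneck, (C // r,)
    gates = sigmoid(w2 @ hidden)              # (C,), each in (0, 1)
    return feats * gates[:, None, None], gates

C, H, W, r = 8, 4, 4, 2                       # hypothetical sizes, r = reduction ratio
feats = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1

out, gates = channel_attention(feats, w1, w2)
```

In the paper's scheme such gated early-layer outputs would feed the stain-adversarial branch; here the gate is shown in isolation.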
Related papers
- Contrastive-Adversarial and Diffusion: Exploring pre-training and fine-tuning strategies for sulcal identification [3.0398616939692777]
Techniques like adversarial learning, contrastive learning, diffusion denoising learning, and ordinary reconstruction learning have become standard.
The study aims to elucidate the advantages of pre-training techniques and fine-tuning strategies to enhance the learning process of neural networks.
arXiv Detail & Related papers (2024-05-29T15:44:51Z)
- On the Trade-off of Intra-/Inter-class Diversity for Supervised Pre-training [72.8087629914444]
We study the impact of the trade-off between the intra-class diversity (the number of samples per class) and the inter-class diversity (the number of classes) of a supervised pre-training dataset.
With the size of the pre-training dataset fixed, the best downstream performance comes with a balance on the intra-/inter-class diversity.
arXiv Detail & Related papers (2023-05-20T16:23:50Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Demystify Transformers & Convolutions in Modern Image Deep Networks [82.32018252867277]
This paper aims to identify the real gains of popular convolution and attention operators through a detailed study.
We find that the key difference among these feature transformation modules, such as attention or convolution, lies in their spatial feature aggregation approach.
Our experiments on various tasks and an analysis of inductive bias show a significant performance boost due to advanced network-level and block-level designs.
arXiv Detail & Related papers (2022-11-10T18:59:43Z)
- Stain-Adaptive Self-Supervised Learning for Histopathology Image Analysis [3.8073142980733]
We propose a novel Stain-Adaptive Self-Supervised Learning(SASSL) method for histopathology image analysis.
Our SASSL integrates a domain-adversarial training module into the SSL framework to learn distinctive features that are robust to both various transformations and stain variations.
Experimental results demonstrate that the proposed method can robustly improve the feature extraction ability of the model.
arXiv Detail & Related papers (2022-08-08T09:54:46Z)
- Unsupervised Domain Adaptation Using Feature Disentanglement And GCNs For Medical Image Classification [5.6512908295414]
We propose an unsupervised domain adaptation approach that uses graph neural networks and disentangled semantic and domain-invariant structural features.
We test the proposed method for classification on two challenging medical image datasets with distribution shifts.
Experiments show our method achieves state-of-the-art results compared to other domain adaptation methods.
arXiv Detail & Related papers (2022-06-27T09:02:16Z)
- Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework built on a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z)
- Analyzing Overfitting under Class Imbalance in Neural Networks for Image Segmentation [19.259574003403998]
In image segmentation, neural networks may overfit to the foreground samples from small structures.
In this study, we provide new insights on the problem of overfitting under class imbalance by inspecting the network behavior.
arXiv Detail & Related papers (2021-02-20T14:57:58Z)
- On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set and model sizes significantly improves robustness to distributional shift.
arXiv Detail & Related papers (2020-07-16T18:39:04Z)
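Several entries above, including the main paper's stain-adversarial branch and SASSL's domain-adversarial module, rely on the gradient reversal trick common to domain-adversarial training: the layer is the identity on the forward pass, but flips the gradient's sign on the backward pass, pushing the feature extractor to confuse the domain (stain) classifier. A minimal numpy sketch of that trick follows; the function names and the λ value are hypothetical, and a real implementation would hook into an autograd framework.

```python
import numpy as np

LAMBDA = 1.0  # reversal strength; real schemes often anneal this during training

def grad_reverse_forward(x):
    # Forward pass is the identity: the domain classifier sees features unchanged.
    return x

def grad_reverse_backward(grad_from_domain_head, lam=LAMBDA):
    # Backward pass flips the sign (scaled by lam), so the feature extractor is
    # updated to *increase* the domain classifier's loss, encouraging features
    # that carry no stain/domain information.
    return -lam * grad_from_domain_head

x = np.array([1.0, -2.0, 3.0])          # toy features
g = np.array([0.5, 0.5, -1.0])          # toy gradient from the domain head
fwd = grad_reverse_forward(x)
bwd = grad_reverse_backward(g)
```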
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.