MorphGen: Morphology-Guided Representation Learning for Robust Single-Domain Generalization in Histopathological Cancer Classification
- URL: http://arxiv.org/abs/2509.00311v1
- Date: Sat, 30 Aug 2025 01:59:19 GMT
- Title: MorphGen: Morphology-Guided Representation Learning for Robust Single-Domain Generalization in Histopathological Cancer Classification
- Authors: Hikmat Khan, Syed Farhan Alam Zaidi, Pir Masoom Shah, Kiruthika Balakrishnan, Rabia Khan, Muhammad Waqas, Jia Wu
- Abstract summary: Domain generalization in computational histopathology is hindered by heterogeneity in whole slide images. We propose MorphGen, a method that integrates histopathology images, augmentations, and nuclear segmentation masks. We demonstrate resilience of the learned representations to image corruptions (such as staining artifacts) and adversarial attacks.
- Score: 7.220226391639059
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain generalization in computational histopathology is hindered by heterogeneity in whole slide images (WSIs), caused by variations in tissue preparation, staining, and imaging conditions across institutions. Unlike machine learning systems, pathologists rely on domain-invariant morphological cues such as nuclear atypia (enlargement, irregular contours, hyperchromasia, chromatin texture, spatial disorganization), structural atypia (abnormal architecture and gland formation), and overall morphological atypia that remain diagnostic across diverse settings. Motivated by this, we hypothesize that explicitly modeling biologically robust nuclear morphology and spatial organization will enable the learning of cancer representations that are resilient to domain shifts. We propose MorphGen (Morphology-Guided Generalization), a method that integrates histopathology images, augmentations, and nuclear segmentation masks within a supervised contrastive learning framework. By aligning latent representations of images and nuclear masks, MorphGen prioritizes diagnostic features such as nuclear and morphological atypia and spatial organization over staining artifacts and domain-specific features. To further enhance out-of-distribution robustness, we incorporate stochastic weight averaging (SWA), steering optimization toward flatter minima. Attention map analyses revealed that MorphGen primarily relies on nuclear morphology, cellular composition, and spatial cell organization within tumors or normal regions for final classification. Finally, we demonstrate the resilience of the learned representations to image corruptions (such as staining artifacts) and adversarial attacks, showcasing not only OOD generalization but also robustness to critical vulnerabilities in current deep learning systems for digital pathology. Code, datasets, and trained models are available at: https://github.com/hikmatkhan/MorphGen
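The abstract's two main training ingredients — a supervised contrastive loss that pulls an image's embedding toward its nuclear-mask embedding (both treated as positives for the same sample), and SWA's uniform averaging of parameter snapshots — can be sketched as follows. This is a minimal NumPy illustration under assumed interfaces, not the authors' implementation; the function names and toy data are illustrative only.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over L2-normalized embeddings z (n, d).
    Rows sharing a label (e.g. an image and its nuclear mask, or two
    augmented views of the same patch) are treated as positives."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau                          # cosine similarity / temperature
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    logits = np.where(self_mask, -np.inf, sim)   # exclude self-pairs
    m = logits.max(axis=1, keepdims=True)        # numerically stable log-softmax
    log_prob = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # negative mean log-likelihood of each anchor's positives
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()

def swa_average(checkpoints):
    """Stochastic weight averaging: a uniform mean of parameter snapshots
    collected along the training trajectory (flatter-minima heuristic)."""
    return [np.mean(params, axis=0) for params in zip(*checkpoints)]

# Toy check: embeddings clustered by class yield a lower loss than mismatched labels.
z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
aligned = supcon_loss(z, np.array([0, 0, 1, 1]))
shuffled = supcon_loss(z, np.array([0, 1, 0, 1]))
```

In this reading, aligning image and mask embeddings under one label is what biases the representation toward nuclear morphology rather than stain statistics, since the mask carries no staining information at all.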
Related papers
- Fusing Pixels and Genes: Spatially-Aware Learning in Computational Pathology [46.83014413674925]
STAMP is a spatial transcriptomics-augmented multimodal pathology representation learning framework. Our study shows that self-supervised, gene-guided training provides a robust and task-agnostic signal for learning pathology image representations. We validate STAMP across six datasets and four downstream tasks, where it consistently achieves strong performance.
arXiv Detail & Related papers (2026-02-15T00:59:13Z) - A Semantically Enhanced Generative Foundation Model Improves Pathological Image Synthesis [82.01597026329158]
We introduce a Correlation-Regulated Alignment Framework for Tissue Synthesis (CRAFTS) for pathology-specific text-to-image synthesis. CRAFTS incorporates a novel alignment mechanism that suppresses semantic drift to ensure biological accuracy. This model generates diverse pathological images spanning 30 cancer types, with quality rigorously validated by objective metrics and pathologist evaluations.
arXiv Detail & Related papers (2025-12-15T10:22:43Z) - Integrating Multi-scale and Multi-filtration Topological Features for Medical Image Classification [20.820287362872975]
Deep neural networks have shown remarkable performance in medical image classification. We propose a new topology-guided classification framework that extracts multi-scale and multi-filtration persistent topological features. Our approach enhances the model's capacity to recognize complex anatomical structures.
arXiv Detail & Related papers (2025-12-08T06:02:02Z) - AdvDINO: Domain-Adversarial Self-Supervised Representation Learning for Spatial Proteomics [0.42855555838080833]
Self-supervised learning (SSL) has emerged as a powerful approach for learning visual representations without manual annotations. We present AdvDINO, a domain-adversarial self-supervised learning framework that integrates a gradient reversal layer into the DINOv2 architecture to promote domain-invariant feature learning.
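The gradient reversal trick used by domain-adversarial methods like the one above is simple to state: the layer is the identity on the forward pass and multiplies the incoming gradient by a negative factor on the backward pass, so the feature extractor learns to confuse a domain classifier. A minimal NumPy sketch (the class name and interface are illustrative, not AdvDINO's code):

```python
import numpy as np

class GradientReversal:
    """Identity on the forward pass; backward flips the gradient scaled
    by lam, pushing features toward domain-invariance against a domain head."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        # The domain classifier's gradient is reversed (and scaled)
        # before it reaches the feature extractor.
        return -self.lam * grad_output

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)
g = grl.backward(np.array([0.2, 0.2, 0.2]))
```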
arXiv Detail & Related papers (2025-08-07T00:51:54Z) - Multi-Scale Representation of Follicular Lymphoma Pathology Images in a Single Hyperbolic Space [16.779755785147195]
We propose a method for representing malignant lymphoma pathology images using self-supervised learning. To capture morphological changes that occur across scales during disease progression, our approach embeds tissue and corresponding nucleus images close to each other.
arXiv Detail & Related papers (2025-06-23T11:25:55Z) - MIRROR: Multi-Modal Pathological Self-Supervised Representation Learning via Modality Alignment and Retention [52.106879463828044]
Histopathology and transcriptomics are fundamental modalities in oncology, encapsulating the morphological and molecular aspects of the disease. We present MIRROR, a novel multi-modal representation learning method designed to foster both modality alignment and retention. Extensive evaluations on TCGA cohorts for cancer subtyping and survival analysis highlight MIRROR's superior performance.
arXiv Detail & Related papers (2025-03-01T07:02:30Z) - Progressive Retinal Image Registration via Global and Local Deformable Transformations [49.032894312826244]
We propose a hybrid registration framework called HybridRetina.
We use a keypoint detector and a deformation network called GAMorph to estimate the global and local deformable transformations.
Experiments on two widely-used datasets, FIRE and FLoRI21, show that HybridRetina significantly outperforms several state-of-the-art methods.
arXiv Detail & Related papers (2024-09-02T08:43:50Z) - FMDNN: A Fuzzy-guided Multi-granular Deep Neural Network for Histopathological Image Classification [40.94024666952439]
We propose the Fuzzy-guided Multi-granularity Deep Neural Network (FMDNN)
Inspired by the multi-granular diagnostic approach of pathologists, we perform feature extraction on cell structures at coarse, medium, and fine granularity.
A fuzzy-guided cross-attention module guides universal fuzzy features toward multi-granular features.
arXiv Detail & Related papers (2024-07-22T00:46:15Z) - Revisiting Adaptive Cellular Recognition Under Domain Shifts: A Contextual Correspondence View [49.03501451546763]
We identify the importance of implicit correspondences across biological contexts for exploiting domain-invariant pathological composition.
We propose self-adaptive dynamic distillation to secure instance-aware trade-offs across different model constituents.
arXiv Detail & Related papers (2024-07-14T04:41:16Z) - NASDM: Nuclei-Aware Semantic Histopathology Image Generation Using Diffusion Models [3.2996723916635267]
NASDM is a first-of-its-kind nuclei-aware semantic tissue generation framework.
NASDM can synthesize realistic tissue samples given a semantic instance mask of up to six different nuclei types.
These synthetic images are useful for pathology applications, model validation, and supplementing existing nuclei segmentation datasets.
arXiv Detail & Related papers (2023-03-20T22:16:03Z) - Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology [5.164102666113966]
We conduct a search for good representations in pathology by training a variety of self-supervised models and validating them on weakly-supervised and patch-level tasks.
Our key finding is that Vision Transformers trained with DINO-based knowledge distillation learn data-efficient and interpretable features in histology images.
arXiv Detail & Related papers (2022-03-01T16:14:41Z) - Learning domain-agnostic visual representation for computational pathology using medically-irrelevant style transfer augmentation [4.538771844947821]
STRAP (Style TRansfer Augmentation for histoPathology) is a form of data augmentation based on random style transfer from artistic paintings.
Style transfer replaces the low-level texture content of images with the uninformative style of randomly selected artistic paintings.
We demonstrate that STRAP leads to state-of-the-art performance, particularly in the presence of domain shifts.
arXiv Detail & Related papers (2021-02-02T18:50:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.