PRINTER: Deformation-Aware Adversarial Learning for Virtual IHC Staining with In Situ Fidelity
- URL: http://arxiv.org/abs/2509.01214v1
- Date: Mon, 01 Sep 2025 07:53:05 GMT
- Title: PRINTER: Deformation-Aware Adversarial Learning for Virtual IHC Staining with In Situ Fidelity
- Authors: Yizhe Yuan, Bingsen Xue, Bangzheng Pu, Chengxiang Wang, Cheng Jin
- Abstract summary: PRINTER is a weakly-supervised framework that integrates PRototype-drIven content and staiNing patTERn decoupling and deformation-aware adversarial learning strategies. Our work provides a robust and scalable solution for virtual staining, advancing the field of computational pathology.
- Score: 18.922782766983378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tumor spatial heterogeneity analysis requires precise correlation between Hematoxylin and Eosin (H&E) morphology and immunohistochemical (IHC) biomarker expression, yet current methods suffer from spatial misalignment in consecutive sections, severely compromising in situ pathological interpretation. To obtain a more accurate virtual staining pattern, we propose PRINTER, a weakly-supervised framework that integrates PRototype-drIven content and staiNing patTERn decoupling with deformation-aware adversarial learning strategies, designed to accurately learn IHC staining patterns while preserving H&E staining details. Our approach introduces three key innovations: (1) a prototype-driven staining pattern transfer with explicit content-style decoupling; (2) GapBridge, a cyclic registration-synthesis framework that bridges the H&E and IHC domains through deformable structural alignment, where registered features guide cross-modal style transfer while synthesized outputs iteratively refine the registration; and (3) deformation-aware adversarial learning, a training framework in which the generator and the deformation-aware registration network are jointly optimized adversarially against a style-focused discriminator. Extensive experiments demonstrate that PRINTER achieves superior performance in preserving H&E staining details and virtual staining fidelity, outperforming state-of-the-art methods. Our work provides a robust and scalable solution for virtual staining, advancing the field of computational pathology.
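To make the content-style decoupling idea concrete, the following is a minimal, hypothetical sketch of a prototype-driven style bank: per-patch style vectors (e.g. mean stain color statistics) are clustered into prototypes, and a patch's style is replaced by its nearest prototype while the content representation is kept separate. The function names (`fit_prototypes`, `transfer_style`) and the toy k-means are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fit_prototypes(style_feats, k, iters=20, seed=0):
    """Toy k-means over per-patch style features, standing in for a
    learned bank of staining-pattern prototypes."""
    rng = np.random.default_rng(seed)
    protos = style_feats[rng.choice(len(style_feats), k, replace=False)]
    for _ in range(iters):
        # Assign each style vector to its nearest prototype.
        dists = np.linalg.norm(style_feats[:, None] - protos[None], axis=-1)
        assign = dists.argmin(axis=1)
        # Move each prototype to the mean of its assigned members.
        for j in range(k):
            members = style_feats[assign == j]
            if len(members):
                protos[j] = members.mean(axis=0)
    return protos

def transfer_style(style_feat, protos):
    """Swap a patch's style vector for its nearest prototype; the
    (separately held) content representation is left untouched."""
    j = np.linalg.norm(protos - style_feat, axis=-1).argmin()
    return protos[j]
```

In the paper's framework this decoupling is learned end-to-end; the sketch only shows why a prototype bank constrains the transferred staining pattern to styles actually observed in the IHC domain.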
Related papers
- PAINT: Pathology-Aware Integrated Next-Scale Transformation for Virtual Immunohistochemistry [17.230315436967356]
Virtual immunohistochemistry aims to computationally synthesize molecular staining patterns from routine Hematoxylin and Eosin (H&E) images. We propose Pathology-Aware Integrated Next-Scale Transformation (PAINT), a visual autoregressive framework that reformulates the synthesis process as a structure-first conditional generation task.
arXiv Detail & Related papers (2026-01-22T14:49:30Z) - Topology-aware Pathological Consistency Matching for Weakly-Paired IHC Virtual Staining [37.3879490506952]
We propose a novel topology-aware framework for H&E-to-IHC virtual staining. Specifically, we introduce a Topology-aware Consistency Matching mechanism that employs graph contrastive learning and topological perturbations. Our method outperforms state-of-the-art approaches, achieving superior generation quality with higher clinical relevance.
arXiv Detail & Related papers (2026-01-06T08:28:38Z) - InpaintHuman: Reconstructing Occluded Humans with Multi-Scale UV Mapping and Identity-Preserving Diffusion Inpainting [64.42884719282323]
InpaintHuman is a novel method for generating high-fidelity, complete, and animatable avatars from occluded monocular videos. Our approach employs direct pixel-level supervision to ensure identity fidelity.
arXiv Detail & Related papers (2026-01-05T13:26:02Z) - A Semantically Enhanced Generative Foundation Model Improves Pathological Image Synthesis [82.01597026329158]
We introduce a Correlation-Regulated Alignment Framework for Tissue Synthesis (CRAFTS) for pathology-specific text-to-image synthesis. CRAFTS incorporates a novel alignment mechanism that suppresses semantic drift to ensure biological accuracy. This model generates diverse pathological images spanning 30 cancer types, with quality rigorously validated by objective metrics and pathologist evaluations.
arXiv Detail & Related papers (2025-12-15T10:22:43Z) - Progressive Translation of H&E to IHC with Enhanced Structural Fidelity [8.881744407746845]
Compared to hematoxylin-eosin (H&E) staining, immunohistochemistry (IHC) provides high-resolution protein localization. Despite its diagnostic value, IHC remains a costly and labor-intensive technique. We propose a novel network architecture that follows a progressive structure, incorporating color and cell border generation logic.
arXiv Detail & Related papers (2025-11-03T16:06:46Z) - From Pixels to Pathology: Restoration Diffusion for Diagnostic-Consistent Virtual IHC [37.284994932355865]
We introduce Star-Diff, a structure-aware staining restoration diffusion model that reformulates virtual staining as an image restoration task. By combining residual and noise-based generation pathways, Star-Diff maintains tissue structure while modeling realistic biomarker variability. Experiments on the BCI dataset demonstrate that Star-Diff achieves state-of-the-art (SOTA) performance in both visual fidelity and diagnostic relevance.
arXiv Detail & Related papers (2025-08-04T15:36:58Z) - AURORA: Augmented Understanding via Structured Reasoning and Reinforcement Learning for Reference Audio-Visual Segmentation [113.75682363364004]
AURORA is a framework designed to enhance genuine reasoning and language comprehension in reference audio-visual segmentation. AURORA achieves state-of-the-art performance on Ref-AVS benchmarks and generalizes effectively to unreferenced segmentation.
arXiv Detail & Related papers (2025-08-04T07:47:38Z) - Score-based Diffusion Model for Unpaired Virtual Histology Staining [7.648204151998162]
Hematoxylin and eosin (H&E) staining visualizes histology but lacks specificity for diagnostic markers. Immunohistochemical (IHC) staining provides protein-targeted staining but is restricted by tissue availability and antibody specificity. Virtual staining, i.e., translating the H&E image to its IHC counterpart while preserving tissue structure, is promising for efficient IHC generation. This study proposes a mutual-information (MI)-guided score-based diffusion model for unpaired virtual staining.
arXiv Detail & Related papers (2025-06-29T11:02:45Z) - CRIA: A Cross-View Interaction and Instance-Adapted Pre-training Framework for Generalizable EEG Representations [52.251569042852815]
CRIA is an adaptive framework that utilizes variable-length and variable-channel coding to achieve a unified representation of EEG data across different datasets. The model employs a cross-attention mechanism to fuse temporal, spectral, and spatial features effectively. Experimental results on the Temple University EEG corpus and the CHB-MIT dataset show that CRIA outperforms existing methods with the same pre-training conditions.
arXiv Detail & Related papers (2025-06-19T06:31:08Z) - SCFANet: Style Distribution Constraint Feature Alignment Network For Pathological Staining Translation [0.11999555634662631]
We propose the Style Distribution Constraint Feature Alignment Network (SCFANet). SCFANet incorporates two innovative modules: the Style Distribution Constrainer (SDC) and Feature Alignment Learning (FAL). Our SCFANet model outperforms existing methods, achieving precise transformation of H&E-stained images into their IHC-stained counterparts.
arXiv Detail & Related papers (2025-04-01T07:29:53Z) - Learning to Align and Refine: A Foundation-to-Diffusion Framework for Occlusion-Robust Two-Hand Reconstruction [50.952228546326516]
Two-hand reconstruction from monocular images faces persistent challenges due to complex and dynamic hand postures. Existing approaches struggle with such alignment issues, often resulting in misalignment and penetration artifacts. We propose a dual-stage Foundation-to-Diffusion framework that precisely aligns 2D prior guidance from vision foundation models.
arXiv Detail & Related papers (2025-03-22T14:42:27Z) - Cross-Modal Consistency Learning for Sign Language Recognition [92.44927164283641]
Existing pre-training methods focus solely on compact pose data. We propose a Cross-modal Consistency Learning framework (CCL-SLR). CCL-SLR learns from both RGB and pose modalities based on self-supervised pre-training.
arXiv Detail & Related papers (2025-03-16T12:34:07Z) - MIRROR: Multi-Modal Pathological Self-Supervised Representation Learning via Modality Alignment and Retention [57.044719143401664]
Histopathology and transcriptomics are fundamental modalities in oncology, encapsulating the morphological and molecular aspects of the disease. We present MIRROR, a novel multi-modal representation learning method designed to foster both modality alignment and retention. Extensive evaluations on TCGA cohorts for cancer subtyping and survival analysis highlight MIRROR's superior performance.
arXiv Detail & Related papers (2025-03-01T07:02:30Z) - Pathological Semantics-Preserving Learning for H&E-to-IHC Virtual Staining [4.42401958204836]
We propose a Pathological Semantics-Preserving Learning method for Virtual Staining.
PSPStain incorporates molecular-level semantic information and enhances semantic interaction.
PSPStain outperforms current state-of-the-art H&E-to-IHC virtual staining methods.
arXiv Detail & Related papers (2024-07-04T05:54:00Z) - Learning Multiscale Consistency for Self-supervised Electron Microscopy Instance Segmentation [48.267001230607306]
We propose a pretraining framework that enhances multiscale consistency in EM volumes.
Our approach leverages a Siamese network architecture, integrating strong and weak data augmentations.
It effectively captures voxel and feature consistency, showing promise for learning transferable representations for EM analysis.
arXiv Detail & Related papers (2023-08-19T05:49:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.