Adaptive H&E-IHC information fusion staining framework based on feature extractors
- URL: http://arxiv.org/abs/2502.20156v1
- Date: Thu, 27 Feb 2025 14:55:34 GMT
- Title: Adaptive H&E-IHC information fusion staining framework based on feature extractors
- Authors: Yifan Jia, Xingda Yu, Zhengyang Ji, Songning Lai, Yutao Yue,
- Abstract summary: Immunohistochemistry (IHC) staining plays a significant role in the evaluation of diseases such as breast cancer. H&E-to-IHC transformation based on generative models provides a simple and cost-effective method for obtaining IHC images. The lack of pixel-perfect H&E-IHC groundtruth pairs poses a challenge to the classical L1 loss. We propose an adaptive information-enhanced coloring framework based on feature extractors.
- Score: 0.5242869847419834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Immunohistochemistry (IHC) staining plays a significant role in the evaluation of diseases such as breast cancer. The H&E-to-IHC transformation based on generative models provides a simple and cost-effective method for obtaining IHC images. Although previous models perform digital coloring well, they still suffer from (i) relying only on pixel features that are not prominent in H&E, which easily causes information loss during coloring; and (ii) the lack of pixel-perfect H&E-IHC groundtruth pairs, which poses a challenge to the classical L1 loss. To address these challenges, we propose an adaptive information-enhanced coloring framework based on feature extractors. We first propose the VMFE module, which effectively extracts color information features using multi-scale feature extraction and wavelet transform convolution, combined with a shared decoder for feature fusion. A high-performance dual feature extractor for H&E-IHC is trained by contrastive learning, which can effectively align H&E and IHC features in a high-dimensional latent space. At the same time, the trained feature encoder is used to enhance features and adaptively adjust the loss during H&E slide staining, addressing unclear and asymmetric information. We have tested on different datasets and achieved excellent performance. Our code is available at https://github.com/babyinsunshine/CEFF
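The contrastive training of the dual H&E-IHC feature extractor described in the abstract can be illustrated with a symmetric InfoNCE objective over paired patch embeddings. This is a minimal numpy sketch, not the authors' implementation; the function name, feature shapes, and temperature are assumptions:

```python
import numpy as np

def info_nce_loss(he_feats, ihc_feats, temperature=0.07):
    """Symmetric InfoNCE loss aligning paired H&E / IHC patch embeddings.

    he_feats, ihc_feats: (N, D) arrays; row i of each comes from the same
    tissue patch. Matched pairs are pulled together in the shared embedding
    space while every other row in the batch acts as a negative.
    """
    # L2-normalise so dot products are cosine similarities.
    he = he_feats / np.linalg.norm(he_feats, axis=1, keepdims=True)
    ihc = ihc_feats / np.linalg.norm(ihc_feats, axis=1, keepdims=True)
    logits = he @ ihc.T / temperature  # (N, N) similarity matrix
    # Cross-entropy with the diagonal as the positive class, both directions.
    log_prob_he = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_prob_ihc = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    n = len(he)
    return -(np.trace(log_prob_he) + np.trace(log_prob_ihc)) / (2 * n)
```

With identical paired embeddings the loss sits near its minimum; shuffling the pairing raises it, which is exactly the alignment signal contrastive training exploits.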
Related papers
- SCFANet: Style Distribution Constraint Feature Alignment Network For Pathological Staining Translation [0.11999555634662631]
Style Distribution Constraint Feature Alignment Network (SCFANet)
SCFANet incorporates two innovative modules: the Style Distribution Constrainer (SDC) and Feature Alignment Learning (FAL)
Our SCFANet model outperforms existing methods, achieving precise transformation of H&E-stained images into their IHC-stained counterparts.
arXiv Detail & Related papers (2025-04-01T07:29:53Z)
- DCEvo: Discriminative Cross-Dimensional Evolutionary Learning for Infrared and Visible Image Fusion [58.36400052566673]
Infrared and visible image fusion integrates information from distinct spectral bands to enhance image quality.
Existing approaches treat image fusion and subsequent high-level tasks as separate processes.
We propose a Discriminative Cross-Dimensional Evolutionary Learning Framework, termed DCEvo, which simultaneously enhances visual quality and perception accuracy.
arXiv Detail & Related papers (2025-03-22T07:01:58Z)
- Decouple to Reconstruct: High Quality UHD Restoration via Active Feature Disentanglement and Reversible Fusion [77.08942160610478]
Ultra-high-definition (UHD) image restoration often faces computational bottlenecks and information loss due to its extremely high resolution.
We propose a Controlled Differential Disentangled VAE that discards easily recoverable background information while encoding more difficult-to-recover degraded information into latent space.
Our method effectively alleviates the information loss problem in VAE models while ensuring computational efficiency, significantly improving the quality of UHD image restoration, and achieves state-of-the-art results in six UHD restoration tasks with only 1M parameters.
arXiv Detail & Related papers (2025-03-17T02:55:18Z)
- HVI: A New Color Space for Low-light Image Enhancement [58.8280819306909]
We propose a new color space for Low-Light Image Enhancement (LLIE) based on Horizontal/Vertical-Intensity (HVI)
HVI is defined by polarized HS maps and a learnable intensity; the latter compresses low-light regions to remove black artifacts.
To fully leverage the chromatic and intensity information, a novel Color and Intensity Decoupling Network (CIDNet) is introduced.
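A toy, non-learnable sketch of the HVI idea may help: intensity taken as the max RGB channel, the hue angle unrolled onto a cos/sin plane, and the chroma plane shrunk in dark pixels. The exact HVI map uses learnable parameters, so everything below (including the `k` exponent) is a hypothetical simplification:

```python
import colorsys
import math

def rgb_to_hvi(r, g, b, k=1.0):
    """Map one RGB pixel (components in [0, 1]) to a simplified HVI triple.

    Intensity I is the max channel (the V of HSV); the hue angle is unrolled
    onto a (H, V) plane scaled by saturation and a compression factor I**k,
    which collapses the chroma plane toward the origin in low-light pixels,
    mimicking how HVI suppresses black artifacts in dark regions.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)  # v == max(r, g, b)
    scale = s * (v ** k)                    # compress chroma in dark regions
    return (scale * math.cos(2 * math.pi * h),
            scale * math.sin(2 * math.pi * h),
            v)
```

Note how a pure black pixel maps to the origin of the chroma plane regardless of its (undefined) hue, which is the instability the decoupling is meant to remove.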
arXiv Detail & Related papers (2025-02-27T16:59:51Z)
- UR2P-Dehaze: Learning a Simple Image Dehaze Enhancer via Unpaired Rich Physical Prior [8.713784455593778]
We propose an unpaired image dehazing network, called the Simple Image Dehaze Enhancer via Unpaired Rich Physical Prior (UR2P-Dehaze). First, to accurately estimate the illumination, reflectance, and color information of the hazy image, we design a shared prior estimator (SPE) that is iteratively trained to ensure the consistency of illumination and reflectance. Next, we propose Dynamic Wavelet Separable Convolution (DWSC), which effectively integrates key features across both low and high frequencies.
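The wavelet side of DWSC can be illustrated by a one-level 2D Haar decomposition; a wavelet-separable layer would then filter each frequency band independently (depthwise) before fusing them back together. This is an illustrative sketch, not the paper's DWSC:

```python
import numpy as np

def haar_bands(x):
    """One-level 2D Haar decomposition of a (H, W) array with even sides.

    Returns the low-frequency approximation (ll) plus the three
    high-frequency detail bands (lh, hl, hh). A wavelet-separable
    convolution would apply its own small filter to each band separately.
    """
    a = (x[0::2] + x[1::2]) / 2.0  # row averages
    d = (x[0::2] - x[1::2]) / 2.0  # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh
```

A constant image lands entirely in the `ll` band, while edges and texture show up only in the three detail bands, which is what lets low- and high-frequency features be processed on separate paths.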
arXiv Detail & Related papers (2025-01-12T14:21:05Z)
- Scalable, Trustworthy Generative Model for Virtual Multi-Staining from H&E Whole Slide Images [0.0]
Chemical staining methods are dependable but require extensive time, expensive chemicals, and raise environmental concerns.
Generative AI technologies are pivotal in addressing these issues.
Our work introduces the use of generative AI for virtual staining, aiming to enhance performance, trustworthiness, scalability, and adaptability in computational pathology.
arXiv Detail & Related papers (2024-06-26T21:52:05Z)
- Hyperspectral Reconstruction of Skin Through Fusion of Scattering Transform Features [2.180368095276185]
The ICASSP 2024 'Hyper-Skin' Challenge task is to extract skin hyperspectral images (HSI) from matching RGB images and an infrared band.
Our model matches and inverts those features, rather than the pixel values, reducing the complexity of matching.
arXiv Detail & Related papers (2024-04-15T13:34:27Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI)
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Color Equivariant Convolutional Networks [50.655443383582124]
CNNs struggle if there is data imbalance between color variations introduced by accidental recording conditions.
We propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum.
We demonstrate the benefits of CEConvs in terms of downstream performance on various tasks and improved robustness to color changes, including train-test distribution shifts.
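A toy version of the equivariance property: approximating a 120-degree hue rotation by a cyclic permutation of the RGB channels, one filter applied to each permuted copy produces a stack of response maps that merely permute when the input's hue rotates. This sketch is an assumption-laden illustration, not the CEConvs implementation:

```python
import numpy as np

def color_equivariant_response(img, filt):
    """Toy color-equivariant layer over 120-degree hue rotations.

    img: (3, H, W); filt: (3, kH, kW). Cyclically rolling the RGB channels
    stands in for a hue rotation, so applying the same filter to each rolled
    copy yields one response map per group element: shape features are
    shared, and a hue shift of the input only permutes the output maps.
    """
    out = []
    for shift in range(3):
        rolled = np.roll(img, shift, axis=0)  # hue-rotated copy
        c, kh, kw = filt.shape
        H, W = img.shape[1] - kh + 1, img.shape[2] - kw + 1
        resp = np.zeros((H, W))
        for i in range(H):          # valid cross-correlation, summed over
            for j in range(W):      # all three channels
                resp[i, j] = np.sum(rolled[:, i:i + kh, j:j + kw] * filt)
        out.append(resp)
    return np.stack(out)
```

The check worth making: hue-rotating the input and then convolving gives the same stack as convolving first and cyclically shifting the stack, which is the definition of equivariance.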
arXiv Detail & Related papers (2023-10-30T09:18:49Z)
- ESSAformer: Efficient Transformer for Hyperspectral Image Super-resolution [76.7408734079706]
Single hyperspectral image super-resolution (single-HSI-SR) aims to restore a high-resolution hyperspectral image from a low-resolution observation.
We propose ESSAformer, an ESSA attention-embedded Transformer network for single-HSI-SR with an iterative refining structure.
arXiv Detail & Related papers (2023-07-26T07:45:14Z)
- Adaptive Supervised PatchNCE Loss for Learning H&E-to-IHC Stain Translation with Inconsistent Groundtruth Image Pairs [5.841841666625825]
We present a new loss function, Adaptive Supervised PatchNCE (ASP), to deal with input-to-target inconsistencies in a proposed H&E-to-IHC image-to-image translation framework.
In our experiment, we demonstrate that our proposed method outperforms existing image-to-image translation methods for stain translation to multiple IHC stains.
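A simplified sketch of the idea behind ASP (not the paper's actual weighting schedule): a supervised PatchNCE term per patch location, re-weighted so that patches whose prediction and ground truth disagree strongly, and are therefore likely inconsistent, contribute less. Function name and weighting rule are assumptions:

```python
import numpy as np

def adaptive_patch_nce(pred_feats, target_feats, temperature=0.07):
    """Simplified ASP-style loss over matched patch embeddings.

    pred_feats, target_feats: (N, D) embeddings at matching locations.
    Each location contributes a supervised PatchNCE term (its target patch
    is the positive, all other targets are negatives), down-weighted when
    the pair already disagrees, softening inconsistent ground truth.
    """
    p = pred_feats / np.linalg.norm(pred_feats, axis=1, keepdims=True)
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    logits = p @ t.T / temperature
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    per_patch = -np.diag(log_prob)       # supervised PatchNCE per location
    sims = np.sum(p * t, axis=1)         # cosine agreement in [-1, 1]
    w = np.exp(sims) / np.exp(sims).sum()  # softmax weights: trust agreement
    return float(np.sum(w * per_patch))
```

As with plain PatchNCE, a correctly paired batch scores lower than a mispaired one; the weights only change how much each location is trusted.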
arXiv Detail & Related papers (2023-03-10T19:56:34Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method has surpassed the SOTA by 0.95dB in PSNR on LOL1000 dataset and 3.18% in mAP on ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Understanding Brain Dynamics for Color Perception using Wearable EEG headband [0.46335240643629344]
We have designed a multiclass classification model to detect the primary colors from the features of raw EEG signals.
Our method employs spectral power features, statistical features as well as correlation features from the signal band power obtained from continuous Morlet wavelet transform.
Our proposed methodology gave the best overall accuracy of 80.6% for intra-subject classification.
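The Morlet-based band-power feature mentioned above can be sketched with plain numpy; the wavelet length, normalization, and width parameter `w` are assumptions, not the paper's exact settings:

```python
import numpy as np

def morlet_band_power(signal, fs, freq, w=6.0):
    """Band power of one EEG channel at a target frequency via a complex
    Morlet wavelet: convolve and average the squared magnitude.

    signal: 1-D samples; fs: sampling rate in Hz; freq: frequency of
    interest in Hz; w: number of cycles in the Gaussian envelope.
    """
    dur = w / freq                              # wavelet support in seconds
    t = np.arange(-dur, dur, 1.0 / fs)
    sigma = w / (2 * np.pi * freq)              # Gaussian envelope width
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
    coef = np.convolve(signal, wavelet, mode="same")
    return float(np.mean(np.abs(coef) ** 2))
```

Evaluating this at the standard EEG band centers (delta through gamma) would give one spectral-power feature per band per channel, the kind of feature vector the classifier above consumes.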
arXiv Detail & Related papers (2020-08-17T05:25:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.