Breast Cancer Immunohistochemical Image Generation: a Benchmark Dataset
and Challenge Review
- URL: http://arxiv.org/abs/2305.03546v2
- Date: Fri, 22 Sep 2023 08:14:42 GMT
- Authors: Chuang Zhu, Shengjie Liu, Zekuan Yu, Feng Xu, Arpit Aggarwal, Germán
Corredor, Anant Madabhushi, Qixun Qu, Hongwei Fan, Fangda Li, Yueheng Li,
Xianchao Guan, Yongbing Zhang, Vivek Kumar Singh, Farhan Akram, Md. Mostafa
Kamal Sarker, Zhongyue Shi, Mulan Jin
- Abstract summary: We held the breast cancer immunohistochemical image generation challenge, aiming to explore novel ideas of deep learning technology in pathological image generation.
The challenge provided registered H&E and IHC-stained image pairs, and participants were required to use these images to train a model that can directly generate IHC-stained images from corresponding H&E-stained images.
We selected and reviewed the five highest-ranking methods based on their PSNR and SSIM metrics, while also providing overviews of the corresponding pipelines and implementations.
- Score: 17.649693088941508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For invasive breast cancer, immunohistochemical (IHC) techniques are often
used to detect the expression level of human epidermal growth factor receptor-2
(HER2) in breast tissue to formulate a precise treatment plan. From the
perspective of saving manpower, material and time costs, directly generating
IHC-stained images from Hematoxylin and Eosin (H&E) stained images is a
valuable research direction. Therefore, we held the breast cancer
immunohistochemical image generation challenge, aiming to explore novel ideas
of deep learning technology in pathological image generation and promote
research in this field. The challenge provided registered H&E and IHC-stained
image pairs, and participants were required to use these images to train a
model that can directly generate IHC-stained images from corresponding
H&E-stained images. We selected and reviewed the five highest-ranking methods
based on their PSNR and SSIM metrics, while also providing overviews of the
corresponding pipelines and implementations. In this paper, we further analyze
the current limitations in the field of breast cancer immunohistochemical image
generation and forecast the future development of this field. We hope that the
released dataset and the challenge will inspire more scholars to jointly study
higher-quality IHC-stained image generation.
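The challenge ranks submissions by PSNR and SSIM between generated and ground-truth IHC images. As an illustration of what those metrics measure, here is a minimal sketch in NumPy: PSNR as defined from mean squared error, and a single-window SSIM that shows the structure of the formula (production evaluation would normally use a sliding-window implementation such as skimage.metrics.structural_similarity; the function names here are illustrative, not the challenge's actual scoring code).

```python
import numpy as np

def psnr(ref, gen, max_val=255.0):
    """Peak signal-to-noise ratio between a reference IHC image and a
    generated one (higher is better; identical images give infinity)."""
    mse = np.mean((ref.astype(np.float64) - gen.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(ref, gen, max_val=255.0):
    """SSIM computed over the whole image as a single window -- a
    simplification of the usual sliding-window SSIM, kept here only to
    show the formula's luminance/contrast/structure terms."""
    x = ref.astype(np.float64)
    y = gen.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # standard SSIM stabilizing constants
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Both metrics compare a generated image pixel-wise against its registered ground truth, which is why the challenge's registered H&E/IHC pairs are essential: without registration, even a perfect stain translation would score poorly.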
Related papers
- Advancing H&E-to-IHC Stain Translation in Breast Cancer: A Multi-Magnification and Attention-Based Approach [13.88935300094334]
We propose a novel model integrating attention mechanisms and multi-magnification information processing.
Our model employs a multi-magnification processing strategy to extract and utilize information from various magnifications within pathology images.
Rigorous testing on a publicly available breast cancer dataset demonstrates superior performance compared to existing methods.
arXiv Detail & Related papers (2024-08-04T04:55:10Z) - Deep learning-based instance segmentation for the precise automated
quantification of digital breast cancer immunohistochemistry images [1.8434042562191815]
We demonstrate the feasibility of using a deep learning-based instance segmentation architecture for the automatic quantification of both nuclear and membrane biomarkers applied to IHC-stained slides.
We have collected annotations over samples of HE, ER and Ki-67 (nuclear biomarkers) and HER2 (membrane biomarker) IHC-stained images.
We have trained two models, so-called nuclei- and membrane-aware segmentation models, which, once successfully validated, have proven to be a promising method for segmenting nuclei instances in IHC-stained images.
arXiv Detail & Related papers (2023-11-22T22:23:47Z) - BiomedJourney: Counterfactual Biomedical Image Generation by
Instruction-Learning from Multimodal Patient Journeys [99.7082441544384]
We present BiomedJourney, a novel method for counterfactual biomedical image generation by instruction-learning.
We use GPT-4 to process the corresponding imaging reports and generate a natural language description of disease progression.
The resulting triples are then used to train a latent diffusion model for counterfactual biomedical image generation.
arXiv Detail & Related papers (2023-10-16T18:59:31Z) - A Generative Approach for Image Registration of Visible-Thermal (VT)
Cancer Faces [77.77475333490744]
We modernize the classic computer vision task of image registration by applying and modifying a generative alignment algorithm.
We demonstrate that the quality of thermal images produced in the generative AI downstream task of Visible-to-Thermal (V2T) image translation improves significantly, by up to 52.5%.
arXiv Detail & Related papers (2023-08-23T17:39:58Z) - Label- and slide-free tissue histology using 3D epi-mode quantitative
phase imaging and virtual H&E staining [1.3141683929245986]
Histological staining of tissue biopsies serves as benchmark for disease diagnosis and comprehensive clinical assessment of tissue.
We combine emerging 3D quantitative phase imaging technology, termed quantitative oblique back illumination microscopy (qOBM), with an unsupervised generative adversarial network pipeline.
We demonstrate that the approach achieves high-fidelity conversions to H&E with subcellular detail using fresh tissue specimens from mouse liver, rat gliosarcoma, and human gliomas.
arXiv Detail & Related papers (2023-06-01T11:09:31Z) - Cross-modulated Few-shot Image Generation for Colorectal Tissue
Classification [58.147396879490124]
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
arXiv Detail & Related papers (2023-04-04T17:50:30Z) - BCI: Breast Cancer Immunohistochemical Image Generation through Pyramid
Pix2pix [8.82904507522587]
The evaluation of human epidermal growth factor receptor 2 (HER2) expression is essential to formulate a precise treatment for breast cancer.
For the first time, we propose a breast cancer immunohistochemical (BCI) benchmark attempting to synthesize IHC data directly from paired hematoxylin and eosin stained images.
The dataset contains 4870 registered image pairs, covering a variety of HER2 expression levels.
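Training on such a benchmark requires enumerating the registered H&E/IHC pairs. A minimal sketch of that step, assuming a hypothetical directory layout with HE/ and IHC/ subfolders whose files share names (the actual BCI release layout may differ):

```python
from pathlib import Path

def list_registered_pairs(root):
    """Pair H&E and IHC images by filename.

    Assumes the (hypothetical) layout root/HE/<name>.png and
    root/IHC/<name>.png for each registered pair; images missing
    from either stain are skipped.
    """
    he_dir = Path(root) / "HE"
    ihc_dir = Path(root) / "IHC"
    pairs = []
    for he_path in sorted(he_dir.glob("*.png")):
        ihc_path = ihc_dir / he_path.name
        if ihc_path.exists():  # keep only images present in both stains
            pairs.append((he_path, ihc_path))
    return pairs
```

Each (H&E, IHC) path pair can then feed a paired image-to-image translation model, with the IHC image serving as the pixel-aligned target.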
arXiv Detail & Related papers (2022-04-25T04:00:47Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - Texture Characterization of Histopathologic Images Using Ecological
Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two HI datasets.
arXiv Detail & Related papers (2022-02-27T02:19:09Z) - MammoGANesis: Controlled Generation of High-Resolution Mammograms for
Radiology Education [0.0]
We train a generative adversarial network (GAN) to synthesize 512 x 512 high-resolution mammograms.
The resulting model leads to the unsupervised separation of high-level features.
We demonstrate the model's ability to generate medically relevant mammograms by achieving an average AUC of 0.54 in a double-blind study.
arXiv Detail & Related papers (2020-10-11T06:47:56Z) - COVIDNet-CT: A Tailored Deep Convolutional Neural Network Design for
Detection of COVID-19 Cases from Chest CT Images [75.74756992992147]
We introduce COVIDNet-CT, a deep convolutional neural network architecture that is tailored for detection of COVID-19 cases from chest CT images.
We also introduce COVIDx-CT, a benchmark CT image dataset derived from CT imaging data collected by the China National Center for Bioinformation.
arXiv Detail & Related papers (2020-09-08T15:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.