FFusionCGAN: An end-to-end fusion method for few-focus images using
conditional GAN in cytopathological digital slides
- URL: http://arxiv.org/abs/2001.00692v1
- Date: Fri, 3 Jan 2020 02:13:47 GMT
- Title: FFusionCGAN: An end-to-end fusion method for few-focus images using
conditional GAN in cytopathological digital slides
- Authors: Xiebo Geng (1 and 4), Sibo Liu (1 and 4), Wei Han (1), Xu Li (1),
Jiabo Ma (1), Jingya Yu (1), Xiuli Liu (1), Shaoqun Zeng (1), Li Chen (2 and
3), Shenghua Cheng (1 and 3) ((1) Britton Chance Center for Biomedical
Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University
of Science and Technology, China, (2) Department of Clinical Laboratory,
Tongji Hospital, Huazhong University of Science and Technology, China, (3)
Corresponding author, (4) Equal contribution to this work)
- Abstract summary: Multi-focus image fusion technologies compress different focus depth images into an image in which most objects are in focus.
This paper proposes a novel method for generating fused images from single-focus or few-focus images based on a conditional generative adversarial network (GAN).
By integrating a semantic segmentation network that identifies blurred regions into the generative model, the quality of the generated fused images is effectively improved.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-focus image fusion technologies compress images of different
focus depths into a single image in which most objects are in focus. However,
although existing image fusion techniques, both traditional algorithms and deep
learning-based algorithms, can generate high-quality fused images, they require
multiple images of different focus depths in the same field of view. This
requirement may not be met when time efficiency is critical or the hardware is
insufficient, a problem that is especially prominent in large-size whole slide
images. This paper focuses on multi-focus image fusion for cytopathological
digital slide images and proposes a novel method for generating fused images
from single-focus or few-focus images based on a conditional generative
adversarial network (GAN). Through adversarial learning between the generator
and the discriminator, the method generates fused images with clear textures
and a large depth of field. Tailored to the characteristics of cytopathological
images, this paper designs a new generator architecture combining U-Net and
DenseBlock, which effectively enlarges the network's receptive field and
comprehensively encodes image features. Meanwhile, this paper develops a
semantic segmentation network that identifies blurred regions in
cytopathological images. Integrating this network into the generative model
effectively improves the quality of the generated fused images. Our method can
generate fused images from only single-focus or few-focus images, thereby
avoiding the time and hardware costs of collecting multiple images at different
focus depths. Furthermore, our model learns a direct mapping from input source
images to fused images, without the manually designed activity-level
measurements and fusion rules of traditional methods.
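
To make the generator described in the abstract concrete, below is a minimal PyTorch sketch of a U-Net-style encoder/decoder whose stages are DenseBlocks, in the spirit of the "U-Net combined with DenseBlock" design. All class names, channel widths, and depths here (FusionGenerator, growth_rate, base_ch, two scales rather than the paper's actual depth) are illustrative assumptions, not the authors' published code.

```python
# Hypothetical sketch: dense blocks nested inside a U-Net encoder/decoder.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """A few conv layers whose inputs/outputs are concatenated (DenseNet style)."""

    def __init__(self, in_ch: int, growth_rate: int = 16, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth_rate, 3, padding=1),
                nn.BatchNorm2d(growth_rate),
                nn.ReLU(inplace=True),
            ))
            ch += growth_rate
        self.out_ch = ch  # input channels plus all growth-rate outputs

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)


class FusionGenerator(nn.Module):
    """U-Net with DenseBlocks: downsample, bottleneck, upsample with a skip."""

    def __init__(self, in_ch: int = 3, base_ch: int = 32):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, base_ch, 3, padding=1)
        self.enc1 = DenseBlock(base_ch)
        self.down = nn.MaxPool2d(2)
        self.enc2 = DenseBlock(self.enc1.out_ch)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        # Skip connection: the decoder sees enc2 features plus enc1 features.
        self.dec = DenseBlock(self.enc2.out_ch + self.enc1.out_ch)
        self.head = nn.Conv2d(self.dec.out_ch, in_ch, 1)

    def forward(self, x):
        e1 = self.enc1(self.stem(x))
        e2 = self.enc2(self.down(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return torch.tanh(self.head(d))  # fused image in [-1, 1]
```

The dense concatenations give each stage access to all earlier features at that scale, while the U-Net skip connection and pooling enlarge the effective receptive field, matching the motivation stated in the abstract.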
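The adversarial half of the method can be sketched with a conditional discriminator that scores (source, candidate) pairs, so the generator is pushed to produce fused images consistent with its input. The PatchGAN-style layout below is a common conditional-GAN choice (pix2pix style) and is an assumption; the paper's exact discriminator may differ.

```python
# Hypothetical conditional discriminator over concatenated image pairs.
import torch
import torch.nn as nn


class PairDiscriminator(nn.Module):
    """PatchGAN-style critic: per-patch real/fake logits for (source, fused)."""

    def __init__(self, in_ch: int = 6, base_ch: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base_ch, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_ch, base_ch * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base_ch * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_ch * 2, 1, 4, padding=1),  # patch logits
        )

    def forward(self, source, candidate):
        # Conditioning: judge the candidate jointly with the source image.
        return self.net(torch.cat([source, candidate], dim=1))
```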
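Finally, one plausible reading of "integrating the [segmentation] network into the generative model" is to let a pretrained blur-segmentation network up-weight the reconstruction loss on regions it marks as blurred, so the generator is penalized most where sharpening is needed. The training step below sketches that idea; `blur_seg_net`, the L1 reconstruction term, and the 10.0 weight are all assumptions for illustration.

```python
# Hypothetical single training step with a segmentation-weighted loss.
import torch
import torch.nn.functional as F

bce = torch.nn.BCEWithLogitsLoss()


def train_step(gen, disc, blur_seg_net, opt_g, opt_d, source, target):
    fused = gen(source)

    # --- discriminator: real (source, target) pairs vs. generated pairs ---
    opt_d.zero_grad()
    real_logits = disc(source, target)
    fake_logits = disc(source, fused.detach())
    d_loss = (bce(real_logits, torch.ones_like(real_logits)) +
              bce(fake_logits, torch.zeros_like(fake_logits)))
    d_loss.backward()
    opt_d.step()

    # --- generator: fool the critic and fix the blurred regions ---
    opt_g.zero_grad()
    fake_logits = disc(source, fused)
    adv = bce(fake_logits, torch.ones_like(fake_logits))
    with torch.no_grad():
        # Assumed: a frozen 1-channel segmentation net, ~1 where blurred.
        blur_map = torch.sigmoid(blur_seg_net(source))
    recon = (F.l1_loss(fused, target, reduction="none") *
             (1.0 + 10.0 * blur_map)).mean()
    g_loss = adv + recon
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

In practice, `gen` and `disc` would be models like the FusionGenerator and PairDiscriminator sketched above, with `target` an all-in-focus reference image for the same field of view.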
Related papers
- Fusion from Decomposition: A Self-Supervised Approach for Image Fusion and Beyond [74.96466744512992]
  The essence of image fusion is to integrate complementary information from source images.
  DeFusion++ produces versatile fused representations that can enhance the quality of image fusion and the effectiveness of downstream high-level vision tasks.
  arXiv Detail & Related papers (2024-10-16T06:28:49Z)
- FreeCompose: Generic Zero-Shot Image Composition with Diffusion Prior [50.0535198082903]
  We offer a novel approach to image composition, which integrates multiple input images into a single, coherent image.
  We showcase the potential of utilizing the powerful generative prior inherent in large-scale pre-trained diffusion models to accomplish generic image composition.
  arXiv Detail & Related papers (2024-07-06T03:35:43Z)
- Bridging the Gap between Multi-focus and Multi-modal: A Focused Integration Framework for Multi-modal Image Fusion [5.417493475406649]
  Multi-modal image fusion (MMIF) integrates valuable information from different modality images into a fused one.
  This paper proposes an MMIF framework for joint focused integration and modality information extraction.
  The proposed algorithm surpasses state-of-the-art methods in visual perception and quantitative evaluation.
  arXiv Detail & Related papers (2023-11-03T12:58:39Z)
- Generation and Recombination for Multifocus Image Fusion with Free Number of Inputs [17.32596568119519]
  Multifocus image fusion is an effective way to overcome the limitation of optical lenses.
  Previous methods assume that the focused areas of the two source images are complementary, making it impossible to fuse multiple images simultaneously.
  In GRFusion, focus property detection of each source image can be implemented independently, enabling simultaneous fusion of multiple source images.
  arXiv Detail & Related papers (2023-09-09T01:47:56Z)
- A Task-guided, Implicitly-searched and Meta-initialized Deep Model for Image Fusion [69.10255211811007]
  We present a Task-guided, Implicitly-searched and Meta-initialized (TIM) deep model to address the image fusion problem in a challenging real-world scenario.
  Specifically, we propose a constrained strategy to incorporate information from downstream tasks to guide the unsupervised learning process of image fusion.
  Within this framework, we then design an implicit search scheme to automatically discover compact architectures for our fusion model with high efficiency.
  arXiv Detail & Related papers (2023-05-25T08:54:08Z)
- CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
  We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
  Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
  arXiv Detail & Related papers (2022-11-20T12:02:07Z)
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
  We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
  A semantic segmentation map is generated to guide the affine transformation operation.
  The combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms.
  arXiv Detail & Related papers (2022-09-07T17:30:40Z)
- Deep Image Compositing [93.75358242750752]
  We propose a new method which can automatically generate high-quality image composites without any user input.
  Inspired by Laplacian pyramid blending, a dense-connected multi-stream fusion network is proposed to effectively fuse the information from the foreground and background images.
  Experiments show that the proposed method can automatically generate high-quality composites and outperforms existing methods both qualitatively and quantitatively.
  arXiv Detail & Related papers (2020-11-04T06:12:24Z)
- End-to-End Learning for Simultaneously Generating Decision Map and Multi-Focus Image Fusion Result [7.564462759345851]
  The aim of multi-focus image fusion is to gather the focused regions of different images into a single all-in-focus fused image.
  Most existing deep learning structures fail to balance fusion quality and end-to-end implementation convenience.
  We propose a cascade network to simultaneously generate the decision map and the fused result with an end-to-end training procedure.
  arXiv Detail & Related papers (2020-10-17T09:09:51Z)
- MFIF-GAN: A New Generative Adversarial Network for Multi-Focus Image Fusion [29.405149234582623]
  Multi-Focus Image Fusion (MFIF) is a promising technique to obtain all-in-focus images.
  One research trend in MFIF is to avoid the defocus spread effect (DSE) around the focus/defocus boundary (FDB).
  We propose a network termed MFIF-GAN to generate focus maps in which the foreground regions are correctly larger than the corresponding objects.
  arXiv Detail & Related papers (2020-09-21T09:36:34Z)
- Real-MFF: A Large Realistic Multi-focus Image Dataset with Ground Truth [58.226535803985804]
  We introduce a large and realistic multi-focus dataset called Real-MFF.
  The dataset contains 710 pairs of source images with corresponding ground truth images.
  We evaluate 10 typical multi-focus algorithms on this dataset for the purpose of illustration.
  arXiv Detail & Related papers (2020-03-28T12:33:46Z)