HQG-Net: Unpaired Medical Image Enhancement with High-Quality Guidance
- URL: http://arxiv.org/abs/2307.07829v1
- Date: Sat, 15 Jul 2023 15:26:25 GMT
- Title: HQG-Net: Unpaired Medical Image Enhancement with High-Quality Guidance
- Authors: Chunming He, Kai Li, Guoxia Xu, Jiangpeng Yan, Longxiang Tang, Yulun
Zhang, Xiu Li and Yaowei Wang
- Abstract summary: Unpaired Medical Image Enhancement (UMIE) aims to transform a low-quality (LQ) medical image into a high-quality (HQ) one without relying on paired images for training.
We propose a novel UMIE approach that avoids the above limitation of existing methods by directly encoding HQ cues into the LQ enhancement process.
We train the enhancement network adversarially with a discriminator to ensure the generated HQ image falls into the HQ domain.
- Score: 45.84780456554191
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unpaired Medical Image Enhancement (UMIE) aims to transform a low-quality
(LQ) medical image into a high-quality (HQ) one without relying on paired
images for training. While most existing approaches are based on
Pix2Pix/CycleGAN and are effective to some extent, they fail to explicitly use
HQ information to guide the enhancement process, which can lead to undesired
artifacts and structural distortions. In this paper, we propose a novel UMIE
approach that avoids this limitation by directly encoding HQ cues into the LQ
enhancement process in a variational fashion, thus modeling the UMIE task under
the joint distribution between the LQ and HQ domains. Specifically, we extract
features from an HQ image and explicitly insert these features, which are
expected to encode HQ cues, into the enhancement network via a variational
normalization module to guide the LQ enhancement.
We train the enhancement network adversarially with a discriminator to ensure
the generated HQ image falls into the HQ domain. We further propose a
content-aware loss to guide the enhancement process with wavelet-based
pixel-level and multi-encoder-based feature-level constraints. Additionally,
since a key motivation for image enhancement is to make the enhanced images
better serve downstream tasks, we propose a bi-level learning scheme that
optimizes the UMIE task and downstream tasks cooperatively, generating HQ
images that are both visually appealing and favorable for downstream tasks.
Experiments on three medical datasets, including two newly collected datasets,
verify that the proposed method outperforms existing techniques in terms of
both enhancement quality and downstream task performance. We will make the code
and the newly collected datasets publicly available for community study.
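The abstract does not specify how the variational normalization module injects HQ cues, so as a rough illustration only, here is an AdaIN-style sketch in NumPy: LQ feature maps are whitened per channel and re-styled with the per-channel statistics of HQ features, so the HQ features supply the scale and shift. The function name, shapes, and the use of raw channel statistics (rather than learned parameters) are assumptions, not the paper's actual module.

```python
import numpy as np

def hq_guided_normalization(lq_feat, hq_feat, eps=1e-5):
    """Illustrative HQ-guided feature normalization (AdaIN-style sketch).

    lq_feat, hq_feat: feature maps of shape (C, H, W).
    The LQ features are normalized per channel, then rescaled and shifted
    with the per-channel mean/std of the HQ features, so HQ statistics
    act as the guidance cues described in the abstract.
    """
    lq_mu = lq_feat.mean(axis=(1, 2), keepdims=True)
    lq_sigma = lq_feat.std(axis=(1, 2), keepdims=True)
    hq_mu = hq_feat.mean(axis=(1, 2), keepdims=True)
    hq_sigma = hq_feat.std(axis=(1, 2), keepdims=True)
    normalized = (lq_feat - lq_mu) / (lq_sigma + eps)
    return hq_sigma * normalized + hq_mu

# Toy example: the guided output inherits the HQ channel statistics.
rng = np.random.default_rng(0)
lq = rng.normal(0.0, 1.0, size=(4, 8, 8))
hq = rng.normal(2.0, 3.0, size=(4, 8, 8))
guided = hq_guided_normalization(lq, hq)
```

In the paper's actual network these statistics would presumably be predicted by learned layers and trained adversarially; this sketch only shows the normalization-as-guidance mechanism.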
Related papers
- G-Refine: A General Quality Refiner for Text-to-Image Generation [74.16137826891827]
We introduce G-Refine, a general image quality refiner designed to enhance low-quality images without compromising the integrity of high-quality ones.
The model is composed of three interconnected modules: a perception quality indicator, an alignment quality indicator, and a general quality enhancement module.
Extensive experimentation reveals that AI-generated images (AIGIs) refined by G-Refine outperform their originals in 10+ quality metrics across 4 databases.
arXiv Detail & Related papers (2024-04-29T00:54:38Z)
- Dual Associated Encoder for Face Restoration [68.49568459672076]
We propose a novel dual-branch framework named DAEFR to restore facial details from low-quality (LQ) images.
Our method introduces an auxiliary LQ branch that extracts crucial information from the LQ inputs.
We evaluate the effectiveness of DAEFR on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-08-14T17:58:33Z)
- A Dive into SAM Prior in Image Restoration [40.03648504115027]
The goal of image restoration (IR) is to restore a high-quality (HQ) image from its degraded low-quality (LQ) observation.
We propose a lightweight SAM prior tuning (SPT) unit to integrate semantic priors into existing IR networks.
As the only trainable module in our method, the SPT unit has the potential to improve both efficiency and scalability.
arXiv Detail & Related papers (2023-05-23T02:31:06Z)
- Re-IQA: Unsupervised Learning for Image Quality Assessment in the Wild [38.197794061203055]
We propose a Mixture of Experts approach to train two separate encoders to learn high-level content and low-level image quality features in an unsupervised setting.
We deploy the complementary low and high-level image representations obtained from the Re-IQA framework to train a linear regression model.
Our method achieves state-of-the-art performance on multiple large-scale image quality assessment databases.
arXiv Detail & Related papers (2023-04-02T05:06:51Z)
- GAN Inversion for Image Editing via Unsupervised Domain Adaptation [18.328386420520978]
We propose Unsupervised Domain Adaptation (UDA) in the inversion process, namely UDA-inversion, for effective inversion and editing of both HQ and LQ images.
UDA-Inversion achieves a PSNR of 22.14 on the FFHQ dataset and performs comparably to supervised methods.
arXiv Detail & Related papers (2022-11-22T09:51:24Z)
- Unpaired Image Enhancement with Quality-Attention Generative Adversarial Network [92.01145655155374]
We propose a quality attention generative adversarial network (QAGAN) trained on unpaired data.
The key novelty of the proposed QAGAN lies in the QAM injected into the generator.
Our proposed method achieves better performance in both objective and subjective evaluations.
arXiv Detail & Related papers (2020-12-30T05:57:20Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN) that learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Early Exit or Not: Resource-Efficient Blind Quality Enhancement for Compressed Images [54.40852143927333]
Lossy image compression is pervasively conducted to save communication bandwidth, resulting in undesirable compression artifacts.
We propose a resource-efficient blind quality enhancement (RBQE) approach for compressed images.
Our approach can automatically decide to terminate or continue enhancement according to the assessed quality of enhanced images.
arXiv Detail & Related papers (2020-06-30T07:38:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.