Unpaired Image Enhancement with Quality-Attention Generative Adversarial
Network
- URL: http://arxiv.org/abs/2012.15052v1
- Date: Wed, 30 Dec 2020 05:57:20 GMT
- Title: Unpaired Image Enhancement with Quality-Attention Generative Adversarial
Network
- Authors: Zhangkai Ni, Wenhan Yang, Shiqi Wang, Lin Ma, and Sam Kwong
- Abstract summary: We propose a quality attention generative adversarial network (QAGAN) trained on unpaired data.
The key novelty of the proposed QAGAN lies in the quality attention module (QAM) injected into the generator.
Our proposed method achieves better performance in both objective and subjective evaluations.
- Score: 92.01145655155374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we aim to learn an unpaired image enhancement model, which can
enrich low-quality images with the characteristics of high-quality images
provided by users. We propose a quality attention generative adversarial
network (QAGAN) trained on unpaired data based on the bidirectional Generative
Adversarial Network (GAN) embedded with a quality attention module (QAM). The
key novelty of the proposed QAGAN lies in the injected QAM for the generator
such that it learns domain-relevant quality attention directly from the two
domains. More specifically, the proposed QAM allows the generator to
effectively select semantic-related characteristics along the spatial
dimension and adaptively incorporate style-related attributes along the
channel dimension. Therefore, in our proposed QAGAN, not only the
discriminators but also the generator can directly access both domains,
which significantly facilitates the generator's learning of the mapping
function. Extensive experimental
results show that, compared with the state-of-the-art methods based on unpaired
learning, our proposed method achieves better performance in both objective and
subjective evaluations.
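The paper's implementation is not reproduced here. As a rough illustration of the idea, a minimal PyTorch sketch of a combined channel- and spatial-attention block in this spirit could look as follows; the module structure, layer choices, and names are assumptions for illustration, not the authors' QAM:

```python
import torch
import torch.nn as nn

class QualityAttention(nn.Module):
    """Minimal sketch of a channel + spatial attention block.

    Channel attention rescales each feature map (a stand-in for
    style-related, channel-wise modulation); spatial attention gates
    each location (a stand-in for semantic-related, spatial-wise
    selection). Illustrative only, not the authors' QAM.
    """
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite per channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: one gating map over the H x W locations.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)   # (N, C, 1, 1) broadcast over H, W
        x = x * self.spatial_gate(x)   # (N, 1, H, W) broadcast over C
        return x

# Usage: attend over a 64-channel feature map inside a generator.
feats = torch.randn(2, 64, 32, 32)
print(QualityAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```

The channel path mirrors squeeze-and-excitation-style rescaling, while the single-channel spatial map gates individual locations; how the actual QAM consumes both domains is described in the paper itself.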
Related papers
- DCNN: Dual Cross-current Neural Networks Realized Using An Interactive Deep Learning Discriminator for Fine-grained Objects [48.65846477275723]
This study proposes dual cross-current neural networks (DCNN) to improve the accuracy of fine-grained image classification.
The main design features of the weakly supervised DCNN backbone include (a) extracting heterogeneous data, (b) keeping the feature-map resolution unchanged, (c) expanding the receptive field, and (d) fusing global representations with local features.
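As a loose illustration of points (b)-(d) only, a generic PyTorch dual-branch block might pair a dilated-convolution branch (larger receptive field at unchanged resolution) with a plain-convolution branch and fuse the two; this is a hypothetical sketch, not the paper's DCNN:

```python
import torch
import torch.nn as nn

class DualBranchBlock(nn.Module):
    """Illustrative dual-branch block (not the paper's DCNN).

    The dilated branch enlarges the receptive field while keeping the
    feature-map resolution unchanged; the plain branch keeps fine local
    detail. The two outputs are fused by a 1x1 convolution.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.global_branch = nn.Conv2d(channels, channels, 3,
                                       padding=2, dilation=2)
        self.local_branch = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, x):
        g = self.global_branch(x)   # wider context, same H x W
        l = self.local_branch(x)    # local features, same H x W
        return self.fuse(torch.cat([g, l], dim=1))

x = torch.randn(1, 32, 56, 56)
print(DualBranchBlock(32)(x).shape)  # torch.Size([1, 32, 56, 56])
```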
arXiv Detail & Related papers (2024-05-07T07:51:28Z)
- Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose CoPA, a novel contrastive pre-training framework tailored for PCQA.
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
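CoPA's objective and view-generation scheme are not given here; as a hedged illustration of contrastive pre-training in general, a standard InfoNCE loss over two views of the same sample looks like this (function name and shapes are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    """Generic InfoNCE contrastive loss (illustrative stand-in; CoPA's
    actual objective is described in the paper).

    anchor, positive: (N, D) embeddings of two views of the same item;
    the other items in the batch act as negatives.
    """
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature   # (N, N) cosine-similarity matrix
    targets = torch.arange(a.size(0))  # matching pairs on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```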
arXiv Detail & Related papers (2024-03-15T07:16:07Z)
- Adaptive Feature Selection for No-Reference Image Quality Assessment by Mitigating Semantic Noise Sensitivity [55.399230250413986]
We propose a Quality-Aware Feature Matching IQA Metric (QFM-IQM) to remove harmful semantic noise features from the upstream task.
Our approach achieves superior performance to the state-of-the-art NR-IQA methods on eight standard IQA datasets.
arXiv Detail & Related papers (2023-12-11T06:50:27Z)
- StyleAM: Perception-Oriented Unsupervised Domain Adaption for Non-reference Image Quality Assessment [23.289183622856704]
We propose StyleAM, an effective perception-oriented unsupervised domain adaptation method for NR-IQA.
StyleAM transfers sufficient knowledge from label-rich source domain data to label-free target domain images via Style Alignment and Mixup.
Experiments on two typical cross-domain settings have demonstrated the effectiveness of our proposed StyleAM on NR-IQA.
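StyleAM's exact formulation is not reproduced here; as a hedged sketch of the two named ingredients, style alignment is often approximated by matching per-channel feature statistics across domains, and mixup by Beta-weighted convex combinations of samples (all function names below are illustrative assumptions):

```python
import torch

def style_stats(feats, eps=1e-5):
    """Per-channel mean/std of an (N, C, H, W) feature map."""
    mean = feats.mean(dim=(2, 3))
    std = (feats.var(dim=(2, 3)) + eps).sqrt()
    return mean, std

def style_alignment_loss(src_feats, tgt_feats):
    """Match first- and second-order feature statistics across domains,
    a common proxy for aligning feature 'style' (illustrative only,
    not StyleAM's exact term).
    """
    sm, ss = style_stats(src_feats)
    tm, ts = style_stats(tgt_feats)
    return (sm - tm).pow(2).mean() + (ss - ts).pow(2).mean()

def mixup(x_a, x_b, alpha=0.4):
    """Standard mixup: a Beta-sampled convex combination of two inputs."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x_a + (1 - lam) * x_b, lam

loss = style_alignment_loss(torch.randn(4, 64, 16, 16),
                            torch.randn(4, 64, 16, 16))
```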
arXiv Detail & Related papers (2022-07-29T05:51:18Z)
- Attentions Help CNNs See Better: Attention-based Hybrid Image Quality Assessment Network [20.835800149919145]
Image quality assessment (IQA) algorithms aim to quantify the human perception of image quality.
Performance drops when assessing distorted images generated by generative adversarial networks (GANs) with seemingly realistic textures.
We propose an Attention-based Hybrid Image Quality Assessment Network (AHIQ) to deal with the challenge and get better performance on the GAN-based IQA task.
arXiv Detail & Related papers (2022-04-22T03:59:18Z)
- No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency [38.88541492121366]
The goal of No-Reference Image Quality Assessment (NR-IQA) is to estimate the perceptual image quality in accordance with subjective evaluations.
We propose a novel model to address the NR-IQA task by leveraging a hybrid approach that benefits from Convolutional Neural Networks (CNNs) and self-attention mechanism in Transformers.
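As a hedged, generic sketch of such a CNN-plus-Transformer hybrid (not the paper's architecture; all layer sizes are assumptions), a small CNN can tokenize local features that a Transformer encoder then relates globally before a regression head predicts a quality score:

```python
import torch
import torch.nn as nn

class HybridIQA(nn.Module):
    """Illustrative CNN + self-attention hybrid for NR-IQA."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 1)

    def forward(self, img):
        f = self.cnn(img)                      # (N, C, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)  # (N, HW/16, C)
        tokens = self.encoder(tokens)          # global self-attention
        return self.head(tokens.mean(dim=1))   # pooled quality score

print(HybridIQA()(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 1])
```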
arXiv Detail & Related papers (2021-08-16T02:07:08Z)
- Region-Adaptive Deformable Network for Image Quality Assessment [16.03642709194366]
In image restoration and enhancement tasks, images generated by generative adversarial networks (GANs) can achieve better visual quality than traditional CNN-generated images.
We propose the reference-oriented deformable convolution, which can improve the performance of an IQA network on GAN-based distortion.
Experimental results on the NTIRE 2021 Perceptual Image Quality Assessment Challenge dataset show the superior performance of the proposed Region-Adaptive Deformable Network (RADN).
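As a hedged sketch of the reference-oriented idea (not RADN's actual design; module names are assumptions), one can predict deformable-convolution sampling offsets from reference features and apply them when convolving the distorted image's features, using torchvision's DeformConv2d:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class ReferenceGuidedDeformBlock(nn.Module):
    """Sketch of reference-guided deformable convolution: sampling
    offsets come from the reference's features, so the distorted
    features are convolved at reference-aligned locations.
    """
    def __init__(self, channels: int, k: int = 3):
        super().__init__()
        # 2 offsets (dy, dx) per kernel location -> 2 * k * k channels.
        self.offset_pred = nn.Conv2d(channels, 2 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(channels, channels, k, padding=k // 2)

    def forward(self, distorted_feats, reference_feats):
        offsets = self.offset_pred(reference_feats)
        return self.deform(distorted_feats, offsets)

d = torch.randn(1, 16, 24, 24)
r = torch.randn(1, 16, 24, 24)
print(ReferenceGuidedDeformBlock(16)(d, r).shape)  # (1, 16, 24, 24)
```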
arXiv Detail & Related papers (2021-04-23T13:47:20Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative adversarial network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Adversarial Bipartite Graph Learning for Video Domain Adaptation [50.68420708387015]
Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area.
Recent works on visual domain adaptation that leverage adversarial learning to unify source and target video representations are not highly effective on videos.
This paper proposes an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions.
arXiv Detail & Related papers (2020-07-31T03:48:41Z)