Twice Mixing: A Rank Learning based Quality Assessment Approach for
Underwater Image Enhancement
- URL: http://arxiv.org/abs/2102.00670v1
- Date: Mon, 1 Feb 2021 07:13:39 GMT
- Title: Twice Mixing: A Rank Learning based Quality Assessment Approach for
Underwater Image Enhancement
- Authors: Zhenqi Fu, Xueyang Fu, Yue Huang, and Xinghao Ding
- Abstract summary: We propose a rank learning guided no-reference quality assessment method for underwater image enhancement (UIE).
Our approach, termed Twice Mixing, is motivated by the observation that a mid-quality image can be generated by mixing a high-quality image with its low-quality version.
We conduct extensive experiments on both synthetic and real-world datasets.
- Score: 42.03072878219206
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To improve the quality of underwater images, various kinds of underwater
image enhancement (UIE) operators have been proposed during the past few years.
However, the lack of effective objective evaluation methods limits the further
development of UIE techniques. In this paper, we propose a novel rank learning
guided no-reference quality assessment method for UIE. Our approach, termed
Twice Mixing, is motivated by the observation that a mid-quality image can be
generated by mixing a high-quality image with its low-quality version. Typical
mixup algorithms linearly interpolate a given pair of input data. However, the
human visual system is non-uniform and non-linear in processing images.
Therefore, instead of directly training a deep neural network based on the
mixed images and their absolute scores calculated by linear combinations, we
propose to train a Siamese Network to learn their quality rankings. Twice
Mixing is trained based on an elaborately formulated self-supervision
mechanism. Specifically, before each iteration, we randomly generate two mixing
ratios which will be employed for both generating virtual images and guiding
the network training. In the test phase, a single branch of the network is
extracted to predict the quality rankings of different UIE outputs. We conduct
extensive experiments on both synthetic and real-world datasets. Experimental
results demonstrate that our approach outperforms the previous methods
significantly.
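A minimal sketch of the training step described above, assuming PyTorch: two random mixing ratios generate two virtual images from a high-quality image and its low-quality version, and a weight-sharing (Siamese) network is trained so that the mixture containing more of the high-quality image ranks higher. The backbone QualityNet, the uniform ratio sampling, and the margin ranking loss (margin 0.1) are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class QualityNet(nn.Module):
    """Hypothetical backbone mapping an image to a scalar quality score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def twice_mixing_step(net, x_high, x_low, optimizer):
    """One self-supervised iteration on a high/low-quality image pair."""
    # Randomly generate two mixing ratios before the iteration (r1 > r2).
    r1, r2 = torch.rand(2).sort(descending=True).values
    # Mix twice: a larger ratio keeps more of the high-quality image,
    # so mix1 should rank above mix2 in perceived quality.
    mix1 = r1 * x_high + (1 - r1) * x_low
    mix2 = r2 * x_high + (1 - r2) * x_low
    # Siamese forward pass: both branches share the same weights.
    s1, s2 = net(mix1), net(mix2)
    # Ranking objective: push the score of mix1 above the score of mix2.
    target = torch.ones_like(s1)
    loss = nn.functional.margin_ranking_loss(s1, s2, target, margin=0.1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time, a single branch of the shared network would be kept and its scalar output used to rank different UIE outputs, as described in the abstract.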
Related papers
- Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model [80.61157097223058]
A prevalent strategy for bolstering image classification performance is to augment the training set with synthetic images generated by T2I models.
In this study, we scrutinize the shortcomings of both current generative and conventional data augmentation techniques.
We introduce an innovative inter-class data augmentation method known as Diff-Mix, which enriches the dataset by performing image translations between classes.
arXiv Detail & Related papers (2024-03-28T17:23:45Z)
- Re-IQA: Unsupervised Learning for Image Quality Assessment in the Wild [38.197794061203055]
We propose a Mixture of Experts approach to train two separate encoders to learn high-level content and low-level image quality features in an unsupervised setting.
We deploy the complementary low and high-level image representations obtained from the Re-IQA framework to train a linear regression model.
Our method achieves state-of-the-art performance on multiple large-scale image quality assessment databases.
arXiv Detail & Related papers (2023-04-02T05:06:51Z)
- Adaptive Uncertainty Distribution in Deep Learning for Unsupervised Underwater Image Enhancement [1.9249287163937976]
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data.
We propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model.
We show that our proposed framework yields competitive performance compared to other state-of-the-art approaches in quantitative as well as qualitative metrics.
arXiv Detail & Related papers (2022-12-18T01:07:20Z)
- Co-training $2^L$ Submodels for Visual Recognition [67.02999567435626]
Submodel co-training is a regularization method related to co-training, self-distillation and stochastic depth.
We show that submodel co-training is effective to train backbones for recognition tasks such as image classification and semantic segmentation.
arXiv Detail & Related papers (2022-12-09T14:38:09Z)
- Image Quality Assessment with Gradient Siamese Network [8.958447396656581]
We introduce Gradient Siamese Network (GSN) for image quality assessment.
We utilize Central Differential Convolution to obtain both the semantic features and the detail differences hidden in an image pair.
For the low-level, mid-level and high-level features extracted by the network, we innovatively design a multi-level fusion method.
arXiv Detail & Related papers (2022-08-08T12:10:38Z)
- Mix-up Self-Supervised Learning for Contrast-agnostic Applications [33.807005669824136]
We present the first mix-up self-supervised learning framework for contrast-agnostic applications.
We address the low variance across images based on cross-domain mix-up and build the pretext task based on image reconstruction and transparency prediction.
arXiv Detail & Related papers (2022-04-02T16:58:36Z)
- Enhanced Performance of Pre-Trained Networks by Matched Augmentation Distributions [10.74023489125222]
We propose a simple solution to address the train-test distributional shift.
We combine the results from multiple random crops of a test image.
This not only matches the train time augmentation but also provides the full coverage of the input image.
arXiv Detail & Related papers (2022-01-19T22:33:00Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network [92.01145655155374]
We present an unsupervised image enhancement generative network (UEGAN)
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z)
- Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.