Incorporating Semi-Supervised and Positive-Unlabeled Learning for
Boosting Full Reference Image Quality Assessment
- URL: http://arxiv.org/abs/2204.08763v1
- Date: Tue, 19 Apr 2022 09:10:06 GMT
- Title: Incorporating Semi-Supervised and Positive-Unlabeled Learning for
Boosting Full Reference Image Quality Assessment
- Authors: Yue Cao and Zhaolin Wan and Dongwei Ren and Zifei Yan and Wangmeng Zuo
- Abstract summary: Full-reference (FR) image quality assessment (IQA) evaluates the visual quality of a distorted image by measuring its perceptual difference with a pristine-quality reference.
Unlabeled data can be easily collected from an image degradation or restoration process, making it appealing to exploit unlabeled training data to boost FR-IQA performance.
In this paper, we suggest incorporating semi-supervised and positive-unlabeled (PU) learning to exploit unlabeled data while mitigating the adverse effect of outliers.
- Score: 73.61888777504377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Full-reference (FR) image quality assessment (IQA) evaluates the visual
quality of a distorted image by measuring its perceptual difference with a
pristine-quality reference, and has been widely used in low-level vision tasks.
Pairwise labeled data with mean opinion scores (MOS) are required to train an
FR-IQA model, but are time-consuming and cumbersome to collect. In contrast,
unlabeled data can be easily collected from an image degradation or restoration
process, making it appealing to exploit unlabeled training data to boost
FR-IQA performance. Moreover, due to the distribution inconsistency between
labeled and unlabeled data, outliers may occur in the unlabeled data, further
increasing the training difficulty. In this paper, we suggest incorporating
semi-supervised and positive-unlabeled (PU) learning to exploit unlabeled
data while mitigating the adverse effect of outliers. In particular, by treating
all labeled data as positive samples, PU learning is leveraged to identify
negative samples (i.e., outliers) from unlabeled data. Semi-supervised learning
(SSL) is further deployed to exploit positive unlabeled data by dynamically
generating pseudo-MOS. We adopt a dual-branch network including reference and
distortion branches. Furthermore, spatial attention is introduced in the
reference branch to concentrate more on the informative regions, and sliced
Wasserstein distance is used for robust difference map computation to address
the misalignment issues caused by images recovered by GAN models. Extensive
experiments show that our method performs favorably against state-of-the-art
methods on the benchmark datasets PIPAL, KADID-10k, TID2013, LIVE, and CSIQ.
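To make the robust difference-map idea concrete, here is a minimal NumPy sketch of a sliced Wasserstein distance between two sets of features. Treating the reference- and distortion-branch features as unordered sets of local descriptors, the function name, and the number of projections are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=64, seed=0):
    """Approximate sliced Wasserstein-2 distance between two point sets.

    x, y: (n, d) arrays holding the same number of feature vectors,
    e.g., local descriptors from the reference and distortion branches.
    Comparing sorted 1-D projections makes the distance tolerant to
    small spatial misalignments such as those introduced by GAN-based
    restoration.
    """
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_proj, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    # For 1-D distributions, Wasserstein-2 reduces to comparing order
    # statistics, so project, sort, and take an RMS difference.
    px = np.sort(x @ dirs.T, axis=0)  # (n, n_proj)
    py = np.sort(y @ dirs.T, axis=0)
    return float(np.sqrt(np.mean((px - py) ** 2)))

# Toy usage: two clouds of 100 eight-dimensional features.
ref = np.random.default_rng(1).normal(size=(100, 8))
dst = ref + 0.1 * np.random.default_rng(2).normal(size=(100, 8))
print(sliced_wasserstein(ref, dst))
```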
Related papers
- Safe Semi-Supervised Contrastive Learning Using In-Distribution Data as Positive Examples [3.4546761246181696]
We propose a self-supervised contrastive learning approach to fully exploit a large amount of unlabeled data.
The results show that self-supervised contrastive learning significantly improves classification accuracy.
arXiv Detail & Related papers (2024-08-03T22:33:13Z)
- CLAF: Contrastive Learning with Augmented Features for Imbalanced Semi-Supervised Learning [40.5117833362268]
Semi-supervised learning and contrastive learning have been progressively combined to achieve better performance in popular applications.
One common approach is to assign pseudo-labels to unlabeled samples and to select positive and negative pairs from the pseudo-labeled samples for contrastive learning.
We propose Contrastive Learning with Augmented Features (CLAF) to alleviate the scarcity of minority class samples in contrastive learning.
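The recipe described above, pseudo-labels defining positive and negative pairs for contrastive learning, can be sketched with a generic supervised-contrastive loss over pseudo-labeled embeddings. This is a baseline sketch only; it does not reproduce CLAF's augmented-feature mechanism.

```python
import numpy as np

def pseudo_label_contrastive_loss(z, pseudo_labels, tau=0.1):
    """Supervised-contrastive loss where positives share a pseudo-label.

    z: (n, d) L2-normalized embeddings; pseudo_labels: (n,) integer array.
    """
    sim = z @ z.T / tau                     # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)          # exclude self-pairs
    # Row-wise log-softmax over all non-anchor samples.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = pseudo_labels[:, None] == pseudo_labels[None, :]
    np.fill_diagonal(same, False)
    pos_counts = same.sum(axis=1)
    valid = pos_counts > 0                  # anchors with >= 1 positive
    pos_log_prob = np.where(same, log_prob, 0.0).sum(axis=1)
    return float((-pos_log_prob[valid] / pos_counts[valid]).mean())

# Toy usage with six embeddings and three pseudo-classes.
z = np.random.default_rng(0).normal(size=(6, 4))
z /= np.linalg.norm(z, axis=1, keepdims=True)
print(pseudo_label_contrastive_loss(z, np.array([0, 0, 1, 1, 2, 2])))
```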
arXiv Detail & Related papers (2023-12-15T08:27:52Z)
- Class Prior-Free Positive-Unlabeled Learning with Taylor Variational Loss for Hyperspectral Remote Sensing Imagery [12.54504113062557]
Positive-unlabeled learning (PU learning) in hyperspectral remote sensing imagery (HSI) is aimed at learning a binary classifier from positive and unlabeled data.
In this paper, a Taylor variational loss is proposed for HSI PU learning, which reduces the weight of the gradient of the unlabeled data.
Experiments on 7 benchmark datasets (21 tasks in total) validate the effectiveness of the proposed method.
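As background for the risk estimation this loss modifies, here is a minimal NumPy sketch of the standard non-negative PU (nnPU) risk estimator. Note that it assumes a known class prior `pi`, which the class-prior-free method above specifically avoids, and it does not implement the Taylor variational loss itself.

```python
import numpy as np

def sigmoid_loss(scores, y):
    """Sigmoid surrogate loss l(z, y) = sigmoid(-y * z)."""
    return 1.0 / (1.0 + np.exp(y * scores))

def nnpu_risk(scores_pos, scores_unl, pi):
    """Non-negative PU risk estimator (nnPU, Kiryo et al., 2017).

    scores_pos: classifier scores on labeled positives.
    scores_unl: classifier scores on unlabeled samples.
    pi: class prior P(y = +1), assumed known in this sketch.
    """
    risk_pos = sigmoid_loss(scores_pos, +1).mean()      # positives as +1
    risk_pos_neg = sigmoid_loss(scores_pos, -1).mean()  # positives as -1
    risk_unl_neg = sigmoid_loss(scores_unl, -1).mean()  # unlabeled as -1
    # Clamp the estimated negative-class risk at zero so the unlabeled
    # term cannot drive the empirical risk negative (overfitting).
    return pi * risk_pos + max(0.0, risk_unl_neg - pi * risk_pos_neg)
```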
arXiv Detail & Related papers (2023-08-29T07:29:30Z)
- Soft Curriculum for Learning Conditional GANs with Noisy-Labeled and Uncurated Unlabeled Data [70.25049762295193]
We introduce a novel conditional image generation framework that accepts noisy-labeled and uncurated data during training.
We propose soft curriculum learning, which assigns instance-wise weights for adversarial training while generating new labels for unlabeled data.
Our experiments show that our approach outperforms existing semi-supervised and label-noise robust methods in terms of both quantitative and qualitative performance.
arXiv Detail & Related papers (2023-07-17T08:31:59Z)
- SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training.
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
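The sample-weighting view can be illustrated with a Gaussian-shaped confidence weighting in the spirit of SoftMatch. The paper estimates the mean and variance of the confidence with exponential moving averages; fixed scalars are used here for brevity.

```python
import numpy as np

def soft_confidence_weights(probs, mu=0.8, sigma=0.1):
    """Soft per-sample weights for pseudo-label losses.

    probs: (n, k) predicted class probabilities on unlabeled data.
    mu, sigma: running mean / std of the max confidence (EMA-estimated
    in the paper; fixed here). Samples above the mean get full weight,
    lower-confidence ones are softly downweighted instead of being
    discarded by a hard threshold, preserving pseudo-label quantity.
    """
    conf = probs.max(axis=1)
    w = np.exp(-((conf - mu) ** 2) / (2.0 * sigma ** 2))
    return np.where(conf >= mu, 1.0, w)
```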
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
- Boosting Facial Expression Recognition by A Semi-Supervised Progressive Teacher [54.50747989860957]
We propose a semi-supervised learning algorithm named Progressive Teacher (PT) to utilize reliable FER datasets as well as large-scale unlabeled expression images for effective training.
Experiments on the widely used databases RAF-DB and FERPlus validate the effectiveness of our method, which achieves state-of-the-art performance with an accuracy of 89.57% on RAF-DB.
arXiv Detail & Related papers (2022-05-28T07:47:53Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
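A schematic of that objective: negatively augmented real images (for example, patch-shuffled copies) are fed to the discriminator as an extra source of fakes. The loss form and the mixing weight `lam` below are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def softplus(x):
    """Numerically stable log(1 + exp(x))."""
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def nda_discriminator_loss(d_real, d_fake, d_nda, lam=0.25):
    """Non-saturating discriminator loss with NDA samples as extra fakes.

    d_real, d_fake, d_nda: discriminator logits on real images, generator
    samples, and negatively augmented real images. Pushing NDA logits
    down teaches the discriminator the boundary of the data support.
    """
    loss_real = softplus(-d_real).mean()   # real logits up
    loss_fake = softplus(d_fake).mean()    # generated logits down
    loss_nda = softplus(d_nda).mean()      # out-of-support logits down
    return loss_real + (1.0 - lam) * loss_fake + lam * loss_nda
```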
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
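A generic sketch of credibility-weighted pseudo-label training follows; the disagreement-based uncertainty estimate is a stand-in for the paper's specific formulation, not a reproduction of it.

```python
import numpy as np

def credibility_weighted_loss(losses, probs_a, probs_b, eps=1e-8):
    """Downweight samples whose pseudo-labels look unreliable.

    losses: (n,) per-sample losses under the assigned pseudo-labels.
    probs_a, probs_b: (n, k) class probabilities from two predictors
    (e.g., a network and a temporally averaged copy; illustrative).
    Higher disagreement (symmetric KL) means lower credibility and
    therefore a smaller weight on that sample's loss.
    """
    kl_ab = (probs_a * np.log((probs_a + eps) / (probs_b + eps))).sum(axis=1)
    kl_ba = (probs_b * np.log((probs_b + eps) / (probs_a + eps))).sum(axis=1)
    credibility = np.exp(-(kl_ab + kl_ba))  # in (0, 1]; 1 = full agreement
    return float((credibility * losses).mean())
```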
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.