Rank-smoothed Pairwise Learning In Perceptual Quality Assessment
- URL: http://arxiv.org/abs/2011.10893v1
- Date: Sat, 21 Nov 2020 23:33:14 GMT
- Title: Rank-smoothed Pairwise Learning In Perceptual Quality Assessment
- Authors: Hossein Talebi, Ehsan Amid, Peyman Milanfar, and Manfred K. Warmuth
- Abstract summary: We show that regularizing pairwise empirical probabilities with aggregated rankwise probabilities leads to a more reliable training loss.
We show that training a deep image quality assessment model with our rank-smoothed loss consistently improves the accuracy of predicting human preferences.
- Score: 26.599014990168836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conducting pairwise comparisons is a widely used approach in curating human
perceptual preference data. Typically, raters are instructed to make their
choices according to a specific set of rules that address certain dimensions of
image quality and aesthetics. The outcome of this process is a dataset of
sampled image pairs with their associated empirical preference probabilities.
Training a model on these pairwise preferences is a common deep learning
approach. However, optimizing by gradient descent through mini-batch learning
means that the "global" ranking of the images is not explicitly taken into
account. In other words, each step of the gradient descent relies only on a
limited number of pairwise comparisons. In this work, we demonstrate that
regularizing the pairwise empirical probabilities with aggregated rankwise
probabilities leads to a more reliable training loss. We show that training a
deep image quality assessment model with our rank-smoothed loss consistently
improves the accuracy of predicting human preferences.
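The core idea — blending empirical pairwise probabilities with rankwise probabilities derived from an aggregated global ranking — can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes pairwise outcomes are given as a win-count matrix, aggregates them into global scores with the standard Bradley-Terry MM update, and mixes the two probability estimates with a smoothing weight `alpha` (all names here are illustrative).

```python
import numpy as np

def bradley_terry_scores(wins, n_iter=200):
    """Aggregate pairwise win counts into global Bradley-Terry scores.

    wins[i, j] = number of times item i was preferred over item j.
    Uses the classic MM update: s_i <- W_i / sum_j n_ij / (s_i + s_j).
    """
    n = wins.shape[0]
    s = np.ones(n)
    total = wins + wins.T  # n_ij: comparisons between each pair
    for _ in range(n_iter):
        denom = total / (s[:, None] + s[None, :])
        s = wins.sum(axis=1) / denom.sum(axis=1)
        s = s / s.sum() * n  # rescale; BT scores are defined up to a constant
    return s

def rank_smoothed_targets(wins, alpha=0.5):
    """Blend empirical pairwise probabilities with rankwise probabilities.

    Returns smoothed targets t[i, j] = (1 - alpha) * p_emp + alpha * p_rank,
    which can replace raw empirical probabilities in a pairwise
    cross-entropy training loss.
    """
    total = wins + wins.T
    # Empirical preference probability; 0.5 where a pair was never compared.
    p_emp = np.where(total > 0, wins / np.maximum(total, 1), 0.5)
    # Rankwise probability implied by the aggregated global scores.
    s = bradley_terry_scores(wins)
    p_rank = s[:, None] / (s[:, None] + s[None, :])
    return (1 - alpha) * p_emp + alpha * p_rank
```

The smoothed targets regularize noisy per-pair estimates toward what the global ranking implies, which is the stabilizing effect the abstract describes; `alpha` controls the trade-off between the two.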
Related papers
- Beyond MOS: Subjective Image Quality Score Preprocessing Method Based on Perceptual Similarity [2.290956583394892]
Recommendations such as ITU-R BT.500, ITU-T P.910, and ITU-T P.913 standardize procedures for cleaning up raw opinion scores.
PSP exploits the perceptual similarity between images to alleviate subjective bias in scenarios with fewer annotations.
arXiv Detail & Related papers (2024-04-30T16:01:14Z) - Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z) - Content-Diverse Comparisons improve IQA [23.523537785599913]
Image quality assessment (IQA) forms a natural and often straightforward undertaking for humans, yet effective automation of the task remains challenging.
Recent metrics from the deep learning community commonly compare image pairs during training to improve upon traditional metrics such as PSNR or SSIM.
This restricts the diversity and number of image pairs that the model is exposed to during training.
In this paper, we strive to enrich these comparisons with content diversity. Firstly, we relax comparison constraints and compare pairs of images with differing content. This increases the variety of available comparisons.
arXiv Detail & Related papers (2022-11-09T21:53:13Z) - Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization [118.50301177912381]
We show that Adam can converge to different solutions of the objective with provably different errors, even with weight decay regularization.
We show that if the objective is convex and weight decay regularization is employed, any optimization algorithm, including Adam, will converge to the same solution.
arXiv Detail & Related papers (2021-08-25T17:58:21Z) - Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss [72.62029620566925]
Recent works in self-supervised learning have advanced the state-of-the-art by relying on the contrastive learning paradigm.
Our work analyzes contrastive learning without assuming conditional independence of positive pairs.
We propose a loss that performs spectral decomposition on the population augmentation graph and can be succinctly written as a contrastive learning objective.
arXiv Detail & Related papers (2021-06-08T07:41:02Z) - Deep Matching Prior: Test-Time Optimization for Dense Correspondence [37.492074298574664]
We show that an image pair-specific prior can be captured by solely optimizing the untrained matching networks on an input pair of images.
Experiments demonstrate that our framework, dubbed Deep Matching Prior (DMP), is competitive with, or even outperforms, the latest learning-based methods.
arXiv Detail & Related papers (2021-06-06T10:56:01Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - An Empirical Study of the Collapsing Problem in Semi-Supervised 2D Human Pose Estimation [80.02124918255059]
Semi-supervised learning aims to boost the accuracy of a model by exploring unlabeled images.
We train two networks that mutually teach each other.
The more reliable predictions on easy images in each network are used to teach the other network to learn about the corresponding hard images.
arXiv Detail & Related papers (2020-11-25T03:29:52Z) - A Flatter Loss for Bias Mitigation in Cross-dataset Facial Age Estimation [37.107335288543624]
We advocate a cross-dataset protocol for age estimation benchmarking.
We propose a novel loss function that is more effective for neural network training.
arXiv Detail & Related papers (2020-10-20T15:22:29Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.