FUNQUE: Fusion of Unified Quality Evaluators
- URL: http://arxiv.org/abs/2202.11241v1
- Date: Wed, 23 Feb 2022 00:21:43 GMT
- Title: FUNQUE: Fusion of Unified Quality Evaluators
- Authors: Abhinau K. Venkataramanan, Cosmin Stejerean and Alan C. Bovik
- Abstract summary: Fusion-based quality assessment has emerged as a powerful method for developing high-performance quality models.
We propose FUNQUE, a quality model that fuses unified quality evaluators.
- Score: 42.41484412777326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fusion-based quality assessment has emerged as a powerful method for
developing high-performance quality models from quality models that
individually achieve lower performances. A prominent example of such an
algorithm is VMAF, which has been widely adopted as an industry standard for
video quality prediction along with SSIM. In addition to advancing the
state-of-the-art, it is imperative to alleviate the computational burden
presented by the use of a heterogeneous set of quality models. In this paper,
we unify "atom" quality models by computing them on a common transform domain
that accounts for the Human Visual System, and we propose FUNQUE, a quality
model that fuses unified quality evaluators. We demonstrate that in comparison
to the state-of-the-art, FUNQUE offers significant improvements in both
correlation against subjective scores and efficiency, due to computation
sharing.
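To make the computation-sharing idea concrete, here is a minimal Python sketch (not the authors' implementation): every "atom" metric is evaluated on one shared, CSF-weighted Haar wavelet transform, and a simple linear regressor stands in for the learned fusion stage. The csf_weight values and the SSIM-like atom are illustrative assumptions.

```python
# Minimal, illustrative sketch of FUNQUE-style computation sharing (not the
# authors' code): all "atom" metrics reuse one CSF-weighted wavelet transform,
# and a linear regressor fuses them. csf_weight values are assumptions.
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

def shared_transform(img, csf_weight=(0.5, 1.0)):
    """One Haar DWT per image, reused by every atom metric below."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float64), "haar")
    # Hypothetical CSF weighting: scale subbands by visual sensitivity.
    return csf_weight[0] * cA, [csf_weight[1] * b for b in (cH, cV, cD)]

def ssim_like(ref_band, dis_band, c=1.0):
    """A simplified SSIM-style similarity on a single subband."""
    cov = np.mean((ref_band - ref_band.mean()) * (dis_band - dis_band.mean()))
    return (2 * cov + c) / (ref_band.var() + dis_band.var() + c)

def atom_features(ref, dis):
    ra, rd = shared_transform(ref)
    da, dd = shared_transform(dis)  # one transform per image, shared by atoms
    feats = [ssim_like(ra, da)]                         # approximation atom
    feats += [ssim_like(r, d) for r, d in zip(rd, dd)]  # detail atoms
    return feats

def fit_fusion(pairs, mos):
    """Fusion stage: regress subjective scores onto unified atom features.
    pairs: list of (reference, distorted) grayscale arrays; mos: scores."""
    X = np.array([atom_features(r, d) for r, d in pairs])
    return LinearRegression().fit(X, np.array(mos))
```

Plain linear regression stands in here for the trained fusion (VMAF-style models use an SVR); the key point is that the wavelet transform is computed once per image and shared by all atoms.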
Related papers
- Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z)
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
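A rough sketch of how such an opinion-unaware pipeline can work, under the assumption (mine, not necessarily the paper's exact design) that deep features of pristine images are summarized by a multivariate Gaussian and test images are scored by their statistical distance from it:

```python
# Sketch of the opinion-unaware idea (my reading, not the paper's code): fit a
# Gaussian over deep features of pristine images, then score a test image by
# its distance from that model. No human opinion scores are used in fitting.
import numpy as np

def fit_pristine_model(pristine_feats):
    """pristine_feats: (n_images, d) deep features from a pre-trained CNN."""
    mu = pristine_feats.mean(axis=0)
    cov = np.cov(pristine_feats, rowvar=False)
    return mu, cov

def ou_quality_score(test_feat, mu, cov, eps=1e-6):
    """Mahalanobis-style distance: larger means farther from pristine."""
    d = test_feat - mu
    cov_inv = np.linalg.inv(cov + eps * np.eye(cov.shape[0]))
    return float(np.sqrt(d @ cov_inv @ d))
```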
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- Context Quality Matters in Training Fusion-in-Decoder for Extractive Open-Domain Question Answering [5.259846811078731]
Retrieval-augmented generation models augment knowledge encoded in a language model by providing additional relevant external knowledge (context) during generation.
This paper explores how context quantity and quality during model training affect the performance of Fusion-in-Decoder (FiD), the state-of-the-art retrieval-augmented generation model.
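For readers unfamiliar with the architecture, a toy illustration of the Fusion-in-Decoder pattern (dimensions and module choices are arbitrary assumptions; this is not the FiD codebase): each retrieved passage is encoded independently together with the question, and the decoder cross-attends over the concatenated encoder states, which is exactly where context quantity and quality enter training.

```python
# Toy illustration of the Fusion-in-Decoder pattern (not the FiD codebase).
import torch
import torch.nn as nn

class ToyFiD(nn.Module):
    def __init__(self, vocab=1000, d=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, d)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), 2)
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d, nhead=4, batch_first=True), 2)
        self.out = nn.Linear(d, vocab)

    def forward(self, passages, target):
        # passages: (n_ctx, src_len) token ids, each already prefixed with the
        # question; target: (1, tgt_len) token ids.
        encoded = [self.encoder(self.emb(p.unsqueeze(0))) for p in passages]
        memory = torch.cat(encoded, dim=1)  # fuse contexts along sequence dim
        hidden = self.decoder(self.emb(target), memory)
        return self.out(hidden)

# Adding more (or cleaner) retrieved passages changes `memory`, which is
# where context quantity and quality affect training.
model = ToyFiD()
logits = model(torch.randint(0, 1000, (3, 20)), torch.randint(0, 1000, (1, 8)))
```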
arXiv Detail & Related papers (2024-03-21T07:47:57Z) - Uncertainty-aware No-Reference Point Cloud Quality Assessment [25.543217625958462]
This work presents the first probabilistic architecture for no-reference point cloud quality assessment (PCQA).
The proposed method models the uncertainty in subjects' quality judgments through a tailored conditional variational autoencoder (CVAE).
Experiments indicate that our approach outperforms previous cutting-edge methods by a large margin and generalizes well in cross-dataset experiments.
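A minimal sketch of how a conditional VAE can expose judgment uncertainty (my reading of the idea; the paper's architecture differs in detail): the latent variable captures disagreement between subjects, conditioned on features of the distorted point cloud.

```python
# Minimal conditional-VAE sketch of the uncertainty idea (an interpretation,
# not the paper's architecture): the latent variable models disagreement
# between subjects, conditioned on point-cloud features.
import torch
import torch.nn as nn

class QualityCVAE(nn.Module):
    def __init__(self, feat_dim=128, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(feat_dim + 1, 2 * z_dim)  # q(z | features, score)
        self.dec = nn.Sequential(nn.Linear(feat_dim + z_dim, 64),
                                 nn.ReLU(), nn.Linear(64, 1))
        self.z_dim = z_dim

    def forward(self, feats, score):
        # feats: (B, feat_dim) point-cloud features; score: (B, 1) ratings.
        mu, logvar = self.enc(torch.cat([feats, score], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(torch.cat([feats, z], -1)), mu, logvar

    def sample_scores(self, feats, n=100):
        """feats: (1, feat_dim). Draw n plausible 'subject' scores; their
        spread is the predicted judgment uncertainty."""
        z = torch.randn(n, self.z_dim)
        return self.dec(torch.cat([feats.expand(n, -1), z], -1))
```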
arXiv Detail & Related papers (2024-01-17T02:25:42Z) - QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves the absolute performance of the Llama 2 model by up to 15 percentage points.
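As a sketch of the flexible assignment step (the exact program in QualEval may differ), a relaxed linear program can assign evaluation examples to discovered insight categories, maximizing LLM-derived affinity under capacity constraints:

```python
# Sketch of a flexible-assignment LP (the exact formulation in QualEval may
# differ): relax a 0/1 assignment of eval examples to insight categories,
# maximizing affinity subject to per-example and per-category constraints.
import numpy as np
from scipy.optimize import linprog

def assign_examples(affinity, per_example=2, capacity=None):
    """affinity: (n_examples, n_categories) LLM-derived relevance scores."""
    n, m = affinity.shape
    capacity = capacity or int(np.ceil(per_example * n / m))
    c = -affinity.ravel()                      # maximize total affinity
    # Each example belongs to exactly `per_example` categories.
    A_eq = np.zeros((n, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0
    # Each category receives at most `capacity` examples.
    A_ub = np.zeros((m, n * m))
    for j in range(m):
        A_ub[j, j::m] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=np.full(m, capacity),
                  A_eq=A_eq, b_eq=np.full(n, per_example), bounds=(0, 1))
    return res.x.reshape(n, m)

weights = assign_examples(np.random.rand(10, 4))
```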
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Curiously Effective Features for Image Quality Prediction [8.55016170630223]
We analyze this curious result and show that, besides the quality of feature extractors, their quantity also plays a crucial role.
arXiv Detail & Related papers (2021-06-10T17:44:04Z)
- How Faithful is your Synthetic Data? Sample-level Metrics for Evaluating and Auditing Generative Models [95.8037674226622]
We introduce a 3-dimensional evaluation metric that characterizes the fidelity, diversity and generalization performance of any generative model in a domain-agnostic fashion.
Our metric unifies statistical divergence measures with precision-recall analysis, enabling sample- and distribution-level diagnoses of model fidelity and diversity.
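One standard way to instantiate sample-level precision-recall analysis is with k-NN manifold estimates in feature space; the sketch below shows that baseline form, which the paper's fidelity/diversity metrics refine rather than reproduce.

```python
# A common k-NN instantiation of precision-recall analysis for generative
# models; background for the paper's refined metrics, not their definition.
import numpy as np
from scipy.spatial.distance import cdist

def knn_radii(X, k=3):
    d = cdist(X, X)
    np.fill_diagonal(d, np.inf)
    return np.sort(d, axis=1)[:, k - 1]  # distance to k-th neighbour

def precision_recall(real, fake, k=3):
    """real, fake: (n, d) feature arrays for real and generated samples."""
    r_real, r_fake = knn_radii(real, k), knn_radii(fake, k)
    d = cdist(fake, real)
    # Precision: fraction of fakes inside some real sample's k-NN ball
    # (fidelity); recall: fraction of reals inside some fake's ball (diversity).
    precision = float(np.mean((d <= r_real[None, :]).any(axis=1)))
    recall = float(np.mean((d.T <= r_fake[None, :]).any(axis=1)))
    return precision, recall
```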
arXiv Detail & Related papers (2021-02-17T18:25:30Z)
- Study on the Assessment of the Quality of Experience of Streaming Video [117.44028458220427]
In this paper, the influence of various objective factors on the subjective estimation of the QoE of streaming video is studied.
The paper presents standard and handcrafted features and reports their correlations with subjective scores along with significance p-values.
We use the SQoE-III database, so far the largest and most realistic of its kind.
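A minimal sketch of this kind of analysis, computing linear (PLCC) and rank (SROCC) correlations with significance p-values; the feature names are placeholders, not the paper's feature set.

```python
# Correlating objective features against subjective QoE scores, with
# significance p-values. Feature names below are placeholders.
import numpy as np
from scipy import stats

def correlate_features(features, mos):
    """features: dict of name -> per-video values; mos: subjective scores."""
    rows = []
    for name, vals in features.items():
        plcc, p_lin = stats.pearsonr(vals, mos)     # linear correlation
        srocc, p_rank = stats.spearmanr(vals, mos)  # rank correlation
        rows.append((name, plcc, p_lin, srocc, p_rank))
    return rows

rng = np.random.default_rng(0)
mos = rng.uniform(1, 5, 60)
feats = {"bitrate": mos + rng.normal(0, 0.5, 60),
         "stall_count": -mos + rng.normal(0, 0.8, 60)}
for row in correlate_features(feats, mos):
    print("%s PLCC=%.2f (p=%.3g) SROCC=%.2f (p=%.3g)" % row)
```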
arXiv Detail & Related papers (2020-12-08T18:46:09Z)
- Generating Adversarial Examples with an Optimized Quality [12.747258403133035]
Deep learning models are vulnerable to Adversarial Examples (AEs), carefully crafted samples designed to deceive those models.
Recent studies have introduced new adversarial attack methods, but none provided guaranteed quality for the crafted examples.
In this paper, we incorporate Image Quality Assessment (IQA) metrics into the design and generation process of AEs.
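An illustrative sketch of the general idea (not the paper's specific method): craft a gradient-sign perturbation, then use an IQA metric, here SSIM, as a gate that shrinks the perturbation until the adversarial example stays perceptually close to the original.

```python
# Sketch of IQA-gated adversarial example generation (not the paper's exact
# method): FGSM perturbation, accepted only if SSIM stays above a threshold.
import torch
import torch.nn.functional as F
from skimage.metrics import structural_similarity

def fgsm_with_iqa(model, x, label, eps=0.05, min_ssim=0.95):
    """x: (1, 1, H, W) grayscale image in [0, 1]; label: (1,) class index."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    step = x.grad.sign()
    while eps > 1e-4:
        adv = (x + eps * step).clamp(0, 1).detach()
        ssim = structural_similarity(x.detach().squeeze().numpy(),
                                     adv.squeeze().numpy(), data_range=1.0)
        if ssim >= min_ssim:   # quality gate: keep only high-IQA AEs
            return adv
        eps *= 0.5             # otherwise shrink the perturbation
    return x.detach()
```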
arXiv Detail & Related papers (2020-06-30T23:05:12Z)
- Comparison of Image Quality Models for Optimization of Image Processing Systems [41.57409136781606]
We use eleven full-reference IQA models to train deep neural networks for four low-level vision tasks.
Subjective testing on the optimized images allows us to rank the competing models in terms of their perceptual performance.
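A schematic of this experimental setup with stand-in choices: a differentiable IQA model (here SSIM via the pytorch_msssim package) is used directly as the training loss for a tiny denoiser; the paper compares eleven such models across four tasks.

```python
# Schematic of using an IQA model as a training loss (stand-in task and
# metric): a tiny denoiser trained with 1 - SSIM on placeholder data.
import torch
import torch.nn as nn
from pytorch_msssim import ssim  # one differentiable SSIM implementation

denoiser = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 64, 64)  # placeholder training batch
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)

for step in range(100):
    restored = denoiser(noisy).clamp(0, 1)
    loss = 1.0 - ssim(restored, clean, data_range=1.0)  # IQA model as loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```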
arXiv Detail & Related papers (2020-05-04T09:26:40Z)