Object-QA: Towards High Reliable Object Quality Assessment
- URL: http://arxiv.org/abs/2005.13116v1
- Date: Wed, 27 May 2020 01:46:58 GMT
- Title: Object-QA: Towards High Reliable Object Quality Assessment
- Authors: Jing Lu, Baorui Zou, Zhanzhan Cheng, Shiliang Pu, Shuigeng Zhou, Yi
Niu, Fei Wu
- Abstract summary: In object recognition applications, object images usually appear with different quality levels.
We propose an effective approach named Object-QA to assess highly reliable quality scores for object images.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In object recognition applications, object images usually appear at
different quality levels. In practice, indicating object image quality is
important for application performance, e.g. filtering out low-quality object
image frames to keep video object recognition robust and to speed up inference.
However, no previous work explicitly addresses this problem. In this paper, we
define the problem of object quality assessment for the first time and propose
an effective approach named Object-QA to assess highly reliable quality scores
for object images. Concretely, Object-QA first employs a well-designed relative
quality assessment module that learns intra-class quality scores by referring
to the difference between object images and their estimated templates. Then an
absolute quality assessment module generates the final quality scores by
aligning the quality-score distributions across classes. Besides, Object-QA can
be implemented with only object-level annotations, and is easily deployed to a
variety of object recognition tasks. To the best of our knowledge, this is the
first work to put forward the definition of this problem and conduct
quantitative evaluations. Validations on 5 different datasets show that
Object-QA not only assesses highly reliable quality scores consistent with
human cognition, but also improves application performance.
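The two-stage design described in the abstract (intra-class relative scoring against an estimated template, then inter-class alignment of score distributions) can be sketched as follows. This is an illustrative stand-in under assumed choices (class-mean embeddings as "templates", z-normalization plus a sigmoid for alignment), not the authors' learned implementation:

```python
import math

def relative_quality(embeddings, labels):
    """Intra-class relative quality: negative Euclidean distance of each
    object embedding to its class 'template' (assumed here to be the class
    mean; the paper learns an estimated template instead)."""
    scores = [0.0] * len(embeddings)
    for c in set(labels):
        idx = [i for i, l in enumerate(labels) if l == c]
        dim = len(embeddings[idx[0]])
        template = [sum(embeddings[i][d] for i in idx) / len(idx)
                    for d in range(dim)]
        for i in idx:
            dist = math.sqrt(sum((embeddings[i][d] - template[d]) ** 2
                                 for d in range(dim)))
            scores[i] = -dist  # closer to the template => higher quality
    return scores

def absolute_quality(rel_scores, labels):
    """Inter-class alignment: z-normalize relative scores within each class
    so distributions are comparable across classes, then squash to (0, 1)."""
    out = [0.0] * len(rel_scores)
    for c in set(labels):
        idx = [i for i, l in enumerate(labels) if l == c]
        vals = [rel_scores[i] for i in idx]
        mean = sum(vals) / len(vals)
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals)) + 1e-8
        for i in idx:
            out[i] = 1.0 / (1.0 + math.exp(-(rel_scores[i] - mean) / std))
    return out
```

The point of the second stage is that a raw intra-class distance is only meaningful relative to other members of the same class; normalizing per class puts every object on one common quality scale.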
Related papers
- Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z)
- Vision Language Modeling of Content, Distortion and Appearance for Image Quality Assessment [20.851102845794244]
Distilling high-level knowledge about quality-bearing attributes is crucial for developing objective Image Quality Assessment (IQA) models.
We present a new blind IQA (BIQA) model termed Self-supervision and Vision-Language supervision Image QUality Evaluator (SLIQUE).
SLIQUE features a joint vision-language and visual contrastive representation learning framework for acquiring high-level knowledge about images' semantic contents, distortion characteristics and appearance properties for IQA.
arXiv Detail & Related papers (2024-06-14T09:18:28Z)
- UniQA: Unified Vision-Language Pre-training for Image Quality and Aesthetic Assessment [23.48816491333345]
Image Quality Assessment (IQA) and Image Aesthetic Assessment (IAA) aim to simulate human subjective perception of image visual quality and aesthetic appeal.
Existing methods typically address these tasks independently due to distinct learning objectives.
We propose Unified vision-language pre-training of Quality and Aesthetics (UniQA) to learn general perceptions of two tasks, thereby benefiting them simultaneously.
arXiv Detail & Related papers (2024-06-03T07:40:10Z)
- Descriptive Image Quality Assessment in the Wild [25.503311093471076]
VLM-based Image Quality Assessment (IQA) seeks to describe image quality linguistically to align with human expression.
We introduce Depicted image Quality Assessment in the Wild (DepictQA-Wild).
Our method includes a multi-functional IQA task paradigm that encompasses both assessment and comparison tasks, brief and detailed responses, full-reference and non-reference scenarios.
arXiv Detail & Related papers (2024-05-29T07:49:15Z)
- Counterfactual Reasoning for Multi-Label Image Classification via Patching-Based Training [84.95281245784348]
Overemphasizing co-occurrence relationships can cause the model to overfit.
We provide a causal inference framework to show that the correlative features caused by the target object and its co-occurring objects can be regarded as a mediator.
arXiv Detail & Related papers (2024-04-09T13:13:24Z)
- Blind Multimodal Quality Assessment: A Brief Survey and A Case Study of Low-light Images [73.27643795557778]
Blind image quality assessment (BIQA) aims at automatically and accurately forecasting objective scores for visual signals.
Recent developments in this field are dominated by unimodal solutions inconsistent with human subjective rating patterns.
We present a unique blind multimodal quality assessment (BMQA) of low-light images from subjective evaluation to objective score.
arXiv Detail & Related papers (2023-03-18T09:04:55Z)
- Gap-closing Matters: Perceptual Quality Evaluation and Optimization of Low-Light Image Enhancement [55.8106019031768]
There is a growing consensus in the research community that the optimization of low-light image enhancement approaches should be guided by the visual quality perceived by end users.
We propose a gap-closing framework for assessing subjective and objective quality systematically.
We validate the effectiveness of our proposed framework through both the accuracy of quality prediction and the perceptual quality of image enhancement.
arXiv Detail & Related papers (2023-02-22T15:57:03Z)
- Confusing Image Quality Assessment: Towards Better Augmented Reality Experience [96.29124666702566]
We consider AR technology as the superimposition of virtual scenes and real scenes, and introduce visual confusion as its basic theory.
A ConFusing Image Quality Assessment (CFIQA) database is established, which includes 600 reference images and 300 distorted images generated by mixing reference images in pairs.
An objective metric termed CFIQA is also proposed to better evaluate the confusing image quality.
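The CFIQA database's distorted stimuli are produced by mixing reference images in pairs; a minimal sketch of such pairwise superimposition is alpha blending (an assumption here -- the abstract does not specify the exact mixing rule):

```python
def mix_images(img_a, img_b, alpha=0.5):
    """Superimpose two equally sized images (nested lists of pixel values)
    by alpha blending: out = alpha * a + (1 - alpha) * b.
    Alpha blending is assumed for illustration; the CFIQA paper may use a
    different mixing rule to generate its 300 distorted images."""
    return [[alpha * a + (1 - alpha) * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]
```

With `alpha=0.5` both scenes contribute equally, which is the visually "confusing" case the database is built to study.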
arXiv Detail & Related papers (2022-04-11T07:03:06Z)
- TISE: A Toolbox for Text-to-Image Synthesis Evaluation [9.092600296992925]
We conduct a study on state-of-the-art methods for single- and multi-object text-to-image synthesis.
We propose a common framework for evaluating these methods.
arXiv Detail & Related papers (2021-12-02T16:39:35Z)
- A survey on IQA [0.0]
This article reviews the concepts and metrics of image quality assessment and also video quality assessment.
It briefly introduces some methods of full-reference and semi-reference image quality assessment, and focuses on the non-reference image quality assessment methods based on deep learning.
arXiv Detail & Related papers (2021-08-29T10:52:27Z)
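As a concrete instance of the full-reference family covered by the survey above, PSNR scores a distorted image against its pristine reference (a standard textbook metric, not one proposed by any paper in this list):

```python
import math

def psnr(reference, distorted, max_val=255.0):
    """Peak signal-to-noise ratio between two same-sized images given as
    flat lists of pixel values; higher means closer to the reference."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Non-reference (blind) methods, the survey's focus, must predict such a score from the distorted image alone, which is why recent work turns to learned deep representations.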
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.