Assessor360: Multi-sequence Network for Blind Omnidirectional Image
Quality Assessment
- URL: http://arxiv.org/abs/2305.10983v2
- Date: Wed, 24 May 2023 17:46:50 GMT
- Title: Assessor360: Multi-sequence Network for Blind Omnidirectional Image
Quality Assessment
- Authors: Tianhe Wu, Shuwei Shi, Haoming Cai, Mingdeng Cao, Jing Xiao, Yinqiang
Zheng, Yujiu Yang
- Abstract summary: Blind Omnidirectional Image Quality Assessment (BOIQA) aims to objectively assess the human perceptual quality of omnidirectional images (ODIs).
The quality assessment of ODIs is severely hampered by the fact that the existing BOIQA pipeline lacks the modeling of the observer's browsing process.
We propose a novel multi-sequence network for BOIQA called Assessor360, which is derived from the realistic multi-assessor ODI quality assessment procedure.
- Score: 50.82681686110528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blind Omnidirectional Image Quality Assessment (BOIQA) aims to objectively
assess the human perceptual quality of omnidirectional images (ODIs) without
relying on pristine-quality image information. It is becoming more significant
with the increasing advancement of virtual reality (VR) technology. However,
the quality assessment of ODIs is severely hampered by the fact that the
existing BOIQA pipeline lacks the modeling of the observer's browsing process.
To tackle this issue, we propose a novel multi-sequence network for BOIQA
called Assessor360, which is derived from the realistic multi-assessor ODI
quality assessment procedure. Specifically, we propose a generalized Recursive
Probability Sampling (RPS) method for the BOIQA task, combining content and
detailed information to generate multiple pseudo viewport sequences from a
given starting point. Additionally, we design a Multi-scale Feature Aggregation
(MFA) module with Distortion-aware Block (DAB) to fuse distorted and semantic
features of each viewport. We also devise a Temporal Modeling Module (TMM) to learn viewport transitions
in the temporal domain. Extensive experimental results demonstrate that
Assessor360 outperforms state-of-the-art methods on multiple OIQA datasets.
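The core idea of Recursive Probability Sampling, as described in the abstract, is to generate multiple pseudo viewport sequences from a starting point by probabilistically choosing each next viewport. The following is a minimal illustrative sketch only, not the paper's implementation: it approximates "detail" with local pixel variance, uses an axis-aligned crop instead of a true sphere-to-plane viewport projection, and all function names and candidate offsets are hypothetical choices made for this example.

```python
import numpy as np

def viewport_patch(erp, lon, lat, size=32):
    """Crop a crude viewport patch from an equirectangular (ERP) image.
    A real viewport requires gnomonic projection; this axis-aligned crop
    is a simplification for illustration."""
    h, w = erp.shape[:2]
    cx = int((lon + 180) / 360 * w) % w
    cy = int((90 - lat) / 180 * h)
    half = size // 2
    xs = np.arange(cx - half, cx + half) % w          # wrap around longitude
    ys = np.clip(np.arange(cy - half, cy + half), 0, h - 1)
    return erp[np.ix_(ys, xs)]

def sample_viewport_sequence(erp, start=(0.0, 0.0), length=5, rng=None):
    """Generate one pseudo viewport sequence: at each step, score a few
    candidate next viewports by local detail (std of pixel values, a crude
    stand-in for the paper's content/detail cues) and sample one with
    softmax weights."""
    rng = np.random.default_rng(rng)
    lon, lat = start
    seq = [(lon, lat)]
    offsets = [(-30, 0), (30, 0), (0, -20), (0, 20), (45, 0)]  # hypothetical moves
    for _ in range(length - 1):
        cands = [((lon + dl + 180) % 360 - 180,
                  float(np.clip(lat + dp, -75, 75)))
                 for dl, dp in offsets]
        scores = np.array([viewport_patch(erp, *c).std() for c in cands])
        probs = np.exp(scores - scores.max())          # softmax over candidates
        probs /= probs.sum()
        lon, lat = cands[rng.choice(len(cands), p=probs)]
        seq.append((lon, lat))
    return seq
```

Running this over several starting points would yield the multiple sequences that, in the paper's pipeline, are then fed viewport-by-viewport into the feature-aggregation and temporal modules.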
Related papers
- A Global Depth-Range-Free Multi-View Stereo Transformer Network with Pose Embedding [76.44979557843367]
We propose a novel multi-view stereo (MVS) framework that eliminates the need for a depth-range prior.
We introduce a Multi-view Disparity Attention (MDA) module to aggregate long-range context information.
We explicitly estimate the quality of the current pixel corresponding to sampled points on the epipolar line of the source image.
arXiv Detail & Related papers (2024-11-04T08:50:16Z)
- Perceptual Depth Quality Assessment of Stereoscopic Omnidirectional Images [10.382801621282228]
We develop an objective quality assessment model named depth quality index (DQI) for efficient no-reference (NR) depth quality assessment of stereoscopic omnidirectional images.
Motivated by the perceptual characteristics of the human visual system (HVS), the proposed DQI is built upon multi-color-channel, adaptive viewport selection, and interocular discrepancy features.
arXiv Detail & Related papers (2024-08-19T16:28:05Z)
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- Adaptive Feature Selection for No-Reference Image Quality Assessment by Mitigating Semantic Noise Sensitivity [55.399230250413986]
We propose a Quality-Aware Feature Matching IQA Metric (QFM-IQM) to remove harmful semantic noise features from the upstream task.
Our approach achieves superior performance to the state-of-the-art NR-IQA methods on eight standard IQA datasets.
arXiv Detail & Related papers (2023-12-11T06:50:27Z)
- Blind Image Quality Assessment via Vision-Language Correspondence: A Multitask Learning Perspective [93.56647950778357]
Blind image quality assessment (BIQA) predicts the human perception of image quality without any reference information.
We develop a general and automated multitask learning scheme for BIQA to exploit auxiliary knowledge from other tasks.
arXiv Detail & Related papers (2023-03-27T07:58:09Z)
- Blind Multimodal Quality Assessment: A Brief Survey and A Case Study of Low-light Images [73.27643795557778]
Blind image quality assessment (BIQA) aims at automatically and accurately forecasting objective scores for visual signals.
Recent developments in this field are dominated by unimodal solutions inconsistent with human subjective rating patterns.
We present a unique blind multimodal quality assessment (BMQA) of low-light images from subjective evaluation to objective score.
arXiv Detail & Related papers (2023-03-18T09:04:55Z)
- MANIQA: Multi-dimension Attention Network for No-Reference Image Quality Assessment [18.637040004248796]
No-Reference Image Quality Assessment (NR-IQA) aims to assess the perceptual quality of images in accordance with human subjective perception.
Existing NR-IQA methods fall short of accurately predicting quality scores for images with GAN-based distortions.
We propose the Multi-dimension Attention Network for no-reference Image Quality Assessment (MANIQA) to improve performance on GAN-based distortions.
arXiv Detail & Related papers (2022-04-19T15:56:43Z)
- No-Reference Image Quality Assessment via Feature Fusion and Multi-Task Learning [29.19484863898778]
Blind or no-reference image quality assessment (NR-IQA) is a fundamental, unsolved, and yet challenging problem.
We propose a simple and yet effective general-purpose no-reference (NR) image quality assessment framework based on multi-task learning.
Our model employs distortion types as well as subjective human scores to predict image quality.
arXiv Detail & Related papers (2020-06-06T05:04:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.