End-to-end deep multi-score model for No-reference stereoscopic image
quality assessment
- URL: http://arxiv.org/abs/2211.01374v1
- Date: Wed, 2 Nov 2022 16:45:35 GMT
- Title: End-to-end deep multi-score model for No-reference stereoscopic image
quality assessment
- Authors: Oussama Messai, Aladine Chetouani
- Abstract summary: We use a deep multi-score Convolutional Neural Network (CNN) to estimate stereoscopic image quality without reference.
Our model has been trained to perform four tasks: first, predict the left view's quality; second, predict the right view's quality; third and fourth, predict the quality of the stereo pair and the global quality, respectively, with the global score serving as the final quality estimate.
- Score: 6.254148286968409
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based quality metrics have recently yielded significant
improvements in Image Quality Assessment (IQA). In the field of stereoscopic
vision, information is evenly distributed with slight disparity to the left and
right eyes. However, due to asymmetric distortion, the objective quality
ratings for the left and right images would differ, necessitating the learning
of unique quality indicators for each view. Unlike existing stereoscopic IQA
measures which focus mainly on estimating a global human score, we suggest
incorporating left, right, and stereoscopic objective scores to extract the
corresponding properties of each view, and thereby estimate stereoscopic
image quality without reference. To this end, we use a deep multi-score
Convolutional Neural Network (CNN). Our model is trained to perform four
tasks: first, predict the left view's quality; second, predict the right
view's quality; third and fourth, predict the quality of the stereo pair and
the global quality, respectively, with the global score serving as the final
quality estimate. Experiments conducted on the Waterloo IVC 3D Phase 1 and
Phase 2 databases show that our method outperforms the state-of-the-art. The
implementation code can be
found at: https://github.com/o-messai/multi-score-SIQA
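The four-task setup described in the abstract amounts to one shared feature extractor feeding four prediction heads. The snippet below is a toy NumPy illustration of that structure only; the linear heads and random weights are stand-ins, not the authors' trained CNN, and the input is a placeholder feature vector rather than an image pair.

```python
import numpy as np

class MultiScoreSketch:
    """Toy multi-score model: one shared embedding feeding four linear
    heads (left, right, stereo, global). Weights are random stand-ins
    for illustration, not the trained CNN from the paper."""

    def __init__(self, in_dim=64, feat_dim=32, seed=0):
        rng = np.random.default_rng(seed)
        # Shared "backbone" projection, shared by all four tasks.
        self.w_shared = rng.standard_normal((in_dim, feat_dim)) * 0.1
        # One linear head per task; 'global' plays the role of the
        # ultimate quality score in the paper's formulation.
        self.heads = {name: rng.standard_normal(feat_dim) * 0.1
                      for name in ("left", "right", "stereo", "global")}

    def predict(self, x):
        feat = np.maximum(x @ self.w_shared, 0.0)  # ReLU features
        return {name: float(feat @ w) for name, w in self.heads.items()}

model = MultiScoreSketch()
scores = model.predict(np.ones(64))
print(sorted(scores))  # the four task names; 'global' is the final score
```

In a real multi-task setup the four heads would be trained jointly, so the shared backbone is pushed to encode view-specific distortion cues that a single global-score head would not have to separate.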
Related papers
- Perceptual Depth Quality Assessment of Stereoscopic Omnidirectional Images [10.382801621282228]
We develop an objective quality assessment model named depth quality index (DQI) for efficient no-reference (NR) depth quality assessment of stereoscopic omnidirectional images.
Motivated by the perceptual characteristics of the human visual system (HVS), the proposed DQI is built upon multi-color-channel, adaptive viewport selection, and interocular discrepancy features.
arXiv Detail & Related papers (2024-08-19T16:28:05Z) - Q-Ground: Image Quality Grounding with Large Multi-modality Models [61.72022069880346]
We introduce Q-Ground, the first framework aimed at tackling fine-scale visual quality grounding.
Q-Ground combines large multi-modality models with detailed visual quality analysis.
Central to our contribution is the introduction of the QGround-100K dataset.
arXiv Detail & Related papers (2024-07-24T06:42:46Z) - Dual-Branch Network for Portrait Image Quality Assessment [76.27716058987251]
We introduce a dual-branch network for portrait image quality assessment (PIQA).
We utilize two backbone networks (i.e., Swin Transformer-B) to extract the quality-aware features from the entire portrait image and the facial image cropped from it.
We leverage LIQE, an image scene classification and quality assessment model, to capture the quality-aware and scene-specific features as the auxiliary features.
arXiv Detail & Related papers (2024-05-14T12:43:43Z) - Learning Generalizable Perceptual Representations for Data-Efficient
No-Reference Image Quality Assessment [7.291687946822539]
A major drawback of state-of-the-art NR-IQA techniques is their reliance on a large number of human annotations.
We enable the learning of low-level quality features agnostic to distortion types by introducing a novel quality-aware contrastive loss.
We design zero-shot quality predictions from both pathways in a completely blind setting.
arXiv Detail & Related papers (2023-12-08T05:24:21Z) - Assessor360: Multi-sequence Network for Blind Omnidirectional Image
Quality Assessment [50.82681686110528]
Blind Omnidirectional Image Quality Assessment (BOIQA) aims to objectively assess the human perceptual quality of omnidirectional images (ODIs).
The quality assessment of ODIs is severely hampered by the fact that the existing BOIQA pipeline lacks the modeling of the observer's browsing process.
We propose a novel multi-sequence network for BOIQA called Assessor360, which is derived from the realistic multi-assessor ODI quality assessment procedure.
arXiv Detail & Related papers (2023-05-18T13:55:28Z) - Blind Image Quality Assessment via Vision-Language Correspondence: A
Multitask Learning Perspective [93.56647950778357]
Blind image quality assessment (BIQA) predicts the human perception of image quality without any reference information.
We develop a general and automated multitask learning scheme for BIQA to exploit auxiliary knowledge from other tasks.
arXiv Detail & Related papers (2023-03-27T07:58:09Z) - Blind Multimodal Quality Assessment: A Brief Survey and A Case Study of
Low-light Images [73.27643795557778]
Blind image quality assessment (BIQA) aims at automatically and accurately forecasting objective scores for visual signals.
Recent developments in this field are dominated by unimodal solutions inconsistent with human subjective rating patterns.
We present a unique blind multimodal quality assessment (BMQA) of low-light images from subjective evaluation to objective score.
arXiv Detail & Related papers (2023-03-18T09:04:55Z) - ST360IQ: No-Reference Omnidirectional Image Quality Assessment with
Spherical Vision Transformers [17.48330099000856]
We present a method for no-reference 360 image quality assessment.
Our approach predicts the quality of an omnidirectional image correlated with the human-perceived image quality.
arXiv Detail & Related papers (2023-03-13T07:48:46Z) - Half of an image is enough for quality assessment [17.681369126678465]
We develop a positional masked transformer for image quality assessment (IQA).
We observe that half of an image might contribute trivially to image quality, whereas the other half is crucial.
Such observation is generalized to that half of the image regions can dominate image quality in several CNN-based IQA models.
arXiv Detail & Related papers (2023-01-30T13:52:22Z) - MANIQA: Multi-dimension Attention Network for No-Reference Image Quality
Assessment [18.637040004248796]
No-Reference Image Quality Assessment (NR-IQA) aims to assess the perceptual quality of images in accordance with human subjective perception.
Existing NR-IQA methods are far from meeting the need to predict accurate quality scores on images with GAN-based distortions.
We propose Multi-dimension Attention Network for no-reference Image Quality Assessment (MANIQA) to improve the performance on GAN-based distortion.
arXiv Detail & Related papers (2022-04-19T15:56:43Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.