Perceptual Quality Assessment of NeRF and Neural View Synthesis Methods
for Front-Facing Views
- URL: http://arxiv.org/abs/2303.15206v3
- Date: Tue, 24 Oct 2023 14:30:03 GMT
- Title: Perceptual Quality Assessment of NeRF and Neural View Synthesis Methods
for Front-Facing Views
- Authors: Hanxue Liang, Tianhao Wu, Param Hanji, Francesco Banterle, Hongyun
Gao, Rafal Mantiuk, Cengiz Oztireli
- Abstract summary: We present the first study on perceptual evaluation of NVS and NeRF variants.
We measured the quality of videos synthesized by several NVS methods in a well-controlled perceptual quality assessment experiment.
- Score: 10.565297375544414
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neural view synthesis (NVS) is one of the most successful techniques for
synthesizing free viewpoint videos, capable of achieving high fidelity from
only a sparse set of captured images. This success has led to many variants of
the techniques, each evaluated on a set of test views typically using image
quality metrics such as PSNR, SSIM, or LPIPS. There has been a lack of research
on how NVS methods perform with respect to perceived video quality. We present
the first study on perceptual evaluation of NVS and NeRF variants. For this
study, we collected two datasets of scenes captured in a controlled lab
environment as well as in-the-wild. In contrast to existing datasets, these
scenes come with reference video sequences, allowing us to test for temporal
artifacts and subtle distortions that are easily overlooked when viewing only
static images. We measured the quality of videos synthesized by several NVS
methods in a well-controlled perceptual quality assessment experiment as well
as with many existing state-of-the-art image/video quality metrics. We present
a detailed analysis of the results and recommendations for dataset and metric
selection for NVS evaluation.
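To make the metric-based part of the evaluation concrete, the sketch below computes the three full-reference metrics named above (PSNR, SSIM, LPIPS) for each frame of a synthesized video against its reference sequence and averages the results. It is an illustration rather than the paper's evaluation pipeline, and it assumes frames are already loaded as float RGB arrays in [0, 1] with scikit-image and the lpips package installed.

    import numpy as np
    import torch
    import lpips
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    # Learned perceptual metric (AlexNet backbone); weights are downloaded on first use.
    lpips_fn = lpips.LPIPS(net="alex")

    def frame_metrics(ref: np.ndarray, syn: np.ndarray) -> dict:
        """ref, syn: float32 RGB frames in [0, 1], shape (H, W, 3)."""
        psnr = peak_signal_noise_ratio(ref, syn, data_range=1.0)
        # channel_axis=-1 requires scikit-image >= 0.19
        ssim = structural_similarity(ref, syn, data_range=1.0, channel_axis=-1)
        # LPIPS expects (N, 3, H, W) tensors scaled to [-1, 1]
        to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1)[None] * 2.0 - 1.0
        with torch.no_grad():
            lp = lpips_fn(to_tensor(ref), to_tensor(syn)).item()
        return {"psnr": psnr, "ssim": ssim, "lpips": lp}

    def video_metrics(ref_frames, syn_frames) -> dict:
        """Average the per-frame metrics over a reference/synthesized video pair."""
        per_frame = [frame_metrics(r, s) for r, s in zip(ref_frames, syn_frames)]
        return {k: float(np.mean([m[k] for m in per_frame])) for k in per_frame[0]}

Averaging per-frame image metrics in this way is exactly the practice the study scrutinizes: it cannot capture temporal artifacts such as flicker, which is why the two datasets provide reference video sequences and why the synthesized videos were also rated in a subjective experiment.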
Related papers
- GS-QA: Comprehensive Quality Assessment Benchmark for Gaussian Splatting View Synthesis [4.117347527143616]
Gaussian Splatting (GS) offers a promising alternative to Neural Radiance Fields (NeRF) for real-time 3D scene rendering.
GS achieves faster rendering times and reduced memory consumption compared to the neural network approach used in NeRF.
This paper describes a subjective quality assessment study that aims to evaluate synthesized videos obtained with several static GS methods.
arXiv Detail & Related papers (2025-02-18T17:46:57Z)
- Evaluating Human Perception of Novel View Synthesis: Subjective Quality Assessment of Gaussian Splatting and NeRF in Dynamic Scenes [6.157597876333952]
We conduct two subjective experiments for the quality assessment of NVS technologies containing both GS-based and NeRF-based methods.
This study covers 360°, front-facing, and single-viewpoint photorealistic videos while providing a richer and larger set of real scenes.
It is the first study to explore the impact of NVS methods in dynamic scenes with moving objects.
arXiv Detail & Related papers (2025-01-13T10:01:27Z)
- NVS-SQA: Exploring Self-Supervised Quality Representation Learning for Neurally Synthesized Scenes without References [55.35182166250742]
We propose NVS-SQA, a quality assessment method to learn no-reference quality representations through self-supervision.
Traditional self-supervised learning predominantly relies on the "same instance, similar representation" assumption and extensive datasets.
We employ photorealistic cues and quality scores as learning objectives, along with a specialized contrastive pair preparation process to improve the effectiveness and efficiency of learning.
arXiv Detail & Related papers (2025-01-11T09:12:43Z)
- NeRF-NQA: No-Reference Quality Assessment for Scenes Generated by NeRF and Neural View Synthesis Methods [13.403739247879766]
We propose NeRF-NQA, the first no-reference quality assessment method for densely-observed scenes synthesized from the NVS and NeRF variants.
NeRF-NQA employs a joint quality assessment strategy, integrating both view-wise and point-wise approaches.
The view-wise approach assesses the spatial quality of each individual synthesized view and the overall inter-view consistency, while the point-wise approach focuses on the angular quality of scene surface points.
arXiv Detail & Related papers (2024-12-11T02:17:33Z)
- Analysis and Benchmarking of Extending Blind Face Image Restoration to Videos [99.42805906884499]
We first introduce a Real-world Low-Quality Face Video benchmark (RFV-LQ) to evaluate leading image-based face restoration algorithms.
We then conduct a thorough, systematic analysis of the benefits and challenges associated with extending blind face image restoration algorithms to degraded face videos.
Our analysis identifies several key issues, primarily categorized into two aspects: significant jitters in facial components and noise-shape flickering between frames.
arXiv Detail & Related papers (2024-10-15T17:53:25Z)
- Diffusion Priors for Dynamic View Synthesis from Monocular Videos [59.42406064983643]
Dynamic novel view synthesis aims to capture the temporal evolution of visual content within videos.
We first finetune a pretrained RGB-D diffusion model on the video frames using a customization technique.
We distill the knowledge from the finetuned model into a 4D representation encompassing both dynamic and static Neural Radiance Fields.
arXiv Detail & Related papers (2024-01-10T23:26:41Z)
- Novel View Synthesis with View-Dependent Effects from a Single Image [35.85973300177698]
We first incorporate view-dependent effects into the single-image-based novel view synthesis (NVS) problem.
We propose to exploit the camera motion priors in NVS to model view-dependent appearance or effects (VDE) as the negative disparity in the scene.
We present extensive experiment results and show that our proposed method can learn NVS with VDEs, outperforming the SOTA single-view NVS methods on the RealEstate10k and MannequinChallenge datasets.
arXiv Detail & Related papers (2023-12-13T11:29:47Z)
- Evaluating Point Cloud from Moving Camera Videos: A No-Reference Metric [58.309735075960745]
This paper explores the way of dealing with point cloud quality assessment (PCQA) tasks via video quality assessment (VQA) methods.
We generate the captured videos by rotating the camera around the point clouds through several circular pathways.
We extract both spatial and temporal quality-aware features from the selected key frames and the video clips using trainable 2D-CNN and pre-trained 3D-CNN models.
arXiv Detail & Related papers (2022-08-30T08:59:41Z)
- CONVIQT: Contrastive Video Quality Estimator [63.749184706461826]
Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms.
Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner.
Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised learning.
arXiv Detail & Related papers (2022-06-29T15:22:01Z)
- A No-reference Quality Assessment Metric for Point Cloud Based on Captured Video Sequences [40.46566408312466]
We propose a no-reference quality assessment metric for colored point clouds based on captured video sequences.
The experimental results show that our method outperforms most of the state-of-the-art full-reference and no-reference PCQA metrics.
arXiv Detail & Related papers (2022-06-09T06:42:41Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
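As a rough illustration of the contrastive pairwise objective that CONTRIQUE and CONVIQT build their self-supervised quality representations on, the sketch below implements a generic NT-Xent-style loss over embeddings of two views of the same content. The encoder architecture, the distortion-aware view generation, and the regression from frozen features to quality scores are the papers' actual contributions and are not reproduced here; this is only the generic contrastive component.

    import torch
    import torch.nn.functional as F

    def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
        """Generic NT-Xent-style loss; z1[i] and z2[i] embed two views of the same content."""
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        z = torch.cat([z1, z2], dim=0)                      # (2N, D)
        logits = z @ z.t() / temperature                    # pairwise cosine similarities
        n = z1.shape[0]
        # a sample is never its own positive
        self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
        logits = logits.masked_fill(self_mask, float("-inf"))
        # the positive for row i is its counterpart from the other view
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
        return F.cross_entropy(logits, targets)

In pipelines of this kind, the encoder trained with such an objective is frozen and its features are typically mapped to quality scores with a simple regressor fitted on a labeled set; this is, broadly, how such methods obtain quality predictions without requiring large labeled subjective datasets, as the summaries above note.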