Evaluating Point Cloud from Moving Camera Videos: A No-Reference Metric
- URL: http://arxiv.org/abs/2208.14085v3
- Date: Wed, 6 Dec 2023 08:26:50 GMT
- Title: Evaluating Point Cloud from Moving Camera Videos: A No-Reference Metric
- Authors: Zicheng Zhang, Wei Sun, Yucheng Zhu, Xiongkuo Min, Wei Wu, Ying Chen,
and Guangtao Zhai
- Abstract summary: This paper explores how to handle point cloud quality assessment (PCQA) tasks with video quality assessment (VQA) methods.
We generate the captured videos by rotating the camera around the point clouds along several circular pathways.
We extract both spatial and temporal quality-aware features from the selected key frames and the video clips using trainable 2D-CNN and pre-trained 3D-CNN models, respectively.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point cloud is one of the most widely used digital representation
formats for three-dimensional (3D) content; its visual quality may suffer from
noise and geometric shift distortions during production, as well as compression
and downsampling distortions during transmission. To
tackle the challenge of point cloud quality assessment (PCQA), many PCQA
methods have been proposed to evaluate the visual quality levels of point
clouds by assessing the rendered static 2D projections. Although such
projection-based PCQA methods achieve competitive performance with the
assistance of mature image quality assessment (IQA) methods, they neglect that
the 3D model is also perceived in a dynamic viewing manner, where the viewpoint
is continually changed according to the feedback of the rendering device.
Therefore, in this paper, we evaluate point clouds from moving-camera
videos and explore how to handle PCQA tasks with video quality
assessment (VQA) methods. First, we generate the captured videos by rotating
the camera around the point clouds along several circular pathways. Then we
extract spatial and temporal quality-aware features from the selected key
frames and the video clips using a trainable 2D-CNN and a pre-trained
3D-CNN, respectively. Finally, the visual quality of point clouds is
represented by the video quality values. The experimental results reveal that
the proposed method is effective for predicting the visual quality levels of
the point clouds and even competitive with full-reference (FR) PCQA methods.
The ablation studies further verify the rationality of the proposed framework
and confirm the contributions made by the quality-aware features extracted via
the dynamic viewing manner. The code is available at
https://github.com/zzc-1998/VQA_PC.
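To make the dynamic-viewing pipeline concrete, below is a minimal sketch of the two stages the abstract describes: placing a camera on a circular path around the model, and fusing 2D-CNN key-frame features with 3D-CNN clip features. It assumes NumPy, PyTorch, and torchvision; the path radius, key-frame stride, and backbone choices (ResNet-18, R3D-18) are illustrative stand-ins, not the configuration of the released code.

    # Sketch only: circular camera poses plus 2D/3D-CNN feature fusion.
    # Stride and backbones are assumptions, not the authors' settings;
    # ImageNet/Kinetics input normalization is omitted for brevity.
    import numpy as np
    import torch
    from torchvision.models import resnet18, ResNet18_Weights
    from torchvision.models.video import r3d_18, R3D_18_Weights

    def look_at(eye, target, up):
        """4x4 world-to-camera extrinsic looking from eye toward target."""
        f = target - eye
        f = f / np.linalg.norm(f)
        s = np.cross(f, up)
        s = s / np.linalg.norm(s)
        u = np.cross(s, f)
        ext = np.eye(4)
        ext[:3, :3] = np.stack([s, u, -f])
        ext[:3, 3] = -ext[:3, :3] @ eye
        return ext

    def circular_path(center, radius, n_views=120):
        """Extrinsics for one horizontal circle around the point cloud."""
        up = np.array([0.0, 1.0, 0.0])
        return [
            look_at(center + radius * np.array([np.cos(t), 0.0, np.sin(t)]),
                    center, up)
            for t in np.linspace(0.0, 2 * np.pi, n_views, endpoint=False)
        ]

    def quality_features(frames):
        """frames: (T, H, W, 3) uint8 video rendered along the path."""
        video = torch.from_numpy(frames).float().permute(0, 3, 1, 2) / 255.0
        key_frames = video[::30]                   # ~1 key frame per second
        cnn2d = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
        cnn2d.fc = torch.nn.Identity()             # trainable spatial branch
        cnn3d = r3d_18(weights=R3D_18_Weights.KINETICS400_V1).eval()
        cnn3d.fc = torch.nn.Identity()             # frozen temporal branch
        spatial = cnn2d(key_frames).mean(0, keepdim=True)
        with torch.no_grad():
            temporal = cnn3d(video.permute(1, 0, 2, 3).unsqueeze(0))
        return torch.cat([spatial, temporal], dim=1)

A small regression head (e.g., one linear layer) on the concatenated feature would then produce the single score that stands in for the point cloud's visual quality.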
Related papers
- No-Reference Point Cloud Quality Assessment via Graph Convolutional Network
Three-dimensional (3D) point cloud, as an emerging visual media format, is increasingly favored by consumers.
Point clouds inevitably suffer from quality degradation and information loss through multimedia communication systems.
We propose a novel no-reference PCQA method that uses a graph convolutional network (GCN) to characterize the mutual dependencies of multi-view 2D projected image contents (a minimal sketch of the idea follows this entry).
arXiv Detail & Related papers (2024-11-12T11:39:05Z)
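As a toy illustration of the GCN idea in the entry above, the snippet below treats each projected view's feature vector as a graph node and runs one graph-convolution step across views; the fully connected view graph, feature sizes, and random weights are assumptions for illustration only.

    # One-layer graph convolution over per-view features (illustrative).
    import torch

    def gcn_layer(view_feats, weight):
        """view_feats: (V, D), one node per 2D projected view."""
        n_views = view_feats.shape[0]
        adj = torch.ones(n_views, n_views)             # fully connected views
        adj_hat = adj / adj.sum(dim=1, keepdim=True)   # row-normalized
        return torch.relu(adj_hat @ view_feats @ weight)

    views = torch.randn(6, 512)                        # e.g., six view features
    weight = torch.randn(512, 128)
    embedding = gcn_layer(views, weight).mean(dim=0)   # pooled for regression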
- Activating Frequency and ViT for 3D Point Cloud Quality Assessment without Reference
We propose a no-reference quality metric for a given 3D point cloud (3D-PC).
To map the input attributes to a quality score, we use a lightweight hybrid deep model combining a Deformable Convolutional Network (DCN) and a Vision Transformer (ViT).
The results show that our approach outperforms state-of-the-art NR-PCQA measures and even some FR-PCQA metrics on PointXR.
arXiv Detail & Related papers (2023-12-10T19:13:34Z)
- Simple Baselines for Projection-based Full-reference and No-reference Point Cloud Quality Assessment
We propose simple baselines for projection-based point cloud quality assessment (PCQA).
We use multi-projections obtained via a common cube-like projection process from the point clouds for both full-reference (FR) and no-reference (NR) PCQA tasks (a sketch of such a projection follows this entry).
Taking part in the ICIP 2023 PCVQA Challenge, we succeeded in achieving the top spot in four out of the five competition tracks.
arXiv Detail & Related papers (2023-10-26T04:42:57Z)
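Here is a minimal sketch of a cube-like projection in the spirit of the entry above: each colored point is orthographically splatted onto the six faces of its bounding cube. The resolution and the painter's-sort depth handling are illustrative simplifications, not the authors' renderer.

    # Orthographic projection of a colored point cloud onto six cube faces.
    import numpy as np

    def cube_projections(points, colors, res=512):
        """points: (N, 3) float; colors: (N, 3) in [0, 1]; returns 6 images."""
        lo, hi = points.min(0), points.max(0)
        p = (points - lo) / (hi - lo + 1e-9)       # normalize into unit cube
        views = []
        for axis in range(3):                      # project along x, y, z
            u, v = [a for a in range(3) if a != axis]
            for front in (True, False):            # both faces of each axis
                depth = p[:, axis] if front else 1.0 - p[:, axis]
                order = np.argsort(-depth)         # draw far points first
                img = np.ones((res, res, 3), dtype=np.float32)
                iu = (p[order, u] * (res - 1)).astype(int)
                iv = (p[order, v] * (res - 1)).astype(int)
                img[iv, iu] = colors[order]        # nearer points overwrite
                views.append(img)
        return views

The six images can then feed any FR or NR image-quality backbone, which is what makes such projection baselines simple to assemble.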
- Geometry-Aware Video Quality Assessment for Dynamic Digital Human
We propose a novel no-reference (NR) geometry-aware video quality assessment method for the DDH-QA challenge.
The proposed method achieves state-of-the-art performance on the DDH-QA database.
arXiv Detail & Related papers (2023-10-24T16:34:03Z)
- MM-PCQA: Multi-Modal Learning for No-reference Point Cloud Quality Assessment
We propose a novel no-reference point cloud quality assessment (NR-PCQA) metric in a multi-modal fashion.
Specifically, we split the point clouds into sub-models to represent local geometry distortions such as point shift and down-sampling (see the splitting sketch after this entry).
To this end, the sub-models and projected images are encoded with point-based and image-based neural networks, respectively.
arXiv Detail & Related papers (2022-09-01T06:11:12Z)
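A minimal sketch of the sub-model splitting mentioned above: farthest point sampling (FPS) picks well-spread seeds, and each sub-model is the k nearest neighbors of a seed. The seed count and patch size are illustrative, not the paper's settings.

    # Split a point cloud into local sub-models via FPS + k-nearest neighbors.
    import numpy as np

    def farthest_point_sampling(points, n_seeds):
        idx = [0]                                  # arbitrary starting point
        dist = np.full(len(points), np.inf)
        for _ in range(n_seeds - 1):
            dist = np.minimum(dist,
                              np.linalg.norm(points - points[idx[-1]], axis=1))
            idx.append(int(dist.argmax()))         # farthest from chosen seeds
        return np.array(idx)

    def split_submodels(points, n_seeds=8, k=2048):
        subs = []
        for s in farthest_point_sampling(points, n_seeds):
            d = np.linalg.norm(points - points[s], axis=1)
            subs.append(points[np.argsort(d)[:k]]) # local patch of k points
        return subs                                # input to point-based encoder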
- A No-reference Quality Assessment Metric for Point Cloud Based on Captured Video Sequences
We propose a no-reference quality assessment metric for colored point clouds based on captured video sequences.
The experimental results show that our method outperforms most of the state-of-the-art full-reference and no-reference PCQA metrics.
arXiv Detail & Related papers (2022-06-09T06:42:41Z)
- Blind VQA on 360° Video via Progressively Learning from Pixels, Frames and Video
Blind visual quality assessment (BVQA) on 360° video plays a key role in optimizing immersive multimedia systems.
In this paper, we take into account the progressive paradigm of human perception towards spherical video quality.
We propose a novel BVQA approach (namely ProVQA) for 360° video via progressively learning from pixels, frames and video.
arXiv Detail & Related papers (2021-11-18T03:45:13Z)
- Reduced Reference Perceptual Quality Model and Application to Rate Control for 3D Point Cloud Compression
In rate-distortion optimization, the encoder settings are determined by maximizing a reconstruction quality measure subject to a constraint on the bit rate.
We propose a linear perceptual quality model whose variables are the V-PCC geometry and color quantization parameters (a minimal sketch follows this entry).
Subjective quality tests with 400 compressed 3D point clouds show that the proposed model correlates well with the mean opinion score.
We show that for the same target bit rate, rate-distortion optimization based on the proposed model offers higher perceptual quality than rate-distortion optimization based on exhaustive search with a point-to-point objective quality metric.
arXiv Detail & Related papers (2020-11-25T12:42:02Z)
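A minimal sketch of a linear perceptual quality model in the spirit of the last entry: predicted MOS as a linear function of the V-PCC geometry and color quantization parameters, fitted by least squares. The variables and coefficients are illustrative; nothing here reproduces the paper's reported model.

    # Fit and apply MOS ~ a + b * QP_geom + c * QP_color (illustrative).
    import numpy as np

    def fit_linear_quality_model(qp_geom, qp_color, mos):
        """Least-squares fit over subjective scores of compressed clouds."""
        X = np.column_stack([np.ones_like(qp_geom), qp_geom, qp_color])
        coef, *_ = np.linalg.lstsq(X, mos, rcond=None)
        return coef                                # (a, b, c)

    def predict_quality(coef, qp_geom, qp_color):
        return coef[0] + coef[1] * qp_geom + coef[2] * qp_color

For rate control, one would enumerate the quantization-parameter pairs that satisfy the bit-rate constraint and keep the pair with the highest predicted quality.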