Blind Quality Assessment of 3D Dense Point Clouds with Structure Guided
Resampling
- URL: http://arxiv.org/abs/2208.14603v1
- Date: Wed, 31 Aug 2022 02:42:55 GMT
- Title: Blind Quality Assessment of 3D Dense Point Clouds with Structure Guided
Resampling
- Authors: Wei Zhou, Qi Yang, Qiuping Jiang, Guangtao Zhai, Weisi Lin
- Abstract summary: We propose an objective point cloud quality index with Structure Guided Resampling (SGR) to automatically evaluate the perceived visual quality of 3D dense point clouds.
The proposed SGR is a general-purpose blind quality assessment method without the assistance of any reference information.
- Score: 71.68672977990403
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Objective quality assessment of 3D point clouds is essential for the
development of immersive multimedia systems in real-world applications. Despite
the success of perceptual quality evaluation for 2D images and videos,
blind/no-reference metrics are still scarce for 3D point clouds with
large-scale irregularly distributed 3D points. Therefore, in this paper, we
propose an objective point cloud quality index with Structure Guided Resampling
(SGR) to automatically evaluate the perceived visual quality of 3D dense
point clouds. The proposed SGR is a general-purpose blind quality assessment
method without the assistance of any reference information. Specifically,
considering that the human visual system (HVS) is highly sensitive to structure
information, we first exploit the unique normal vectors of point clouds to
execute regional pre-processing which consists of keypoint resampling and local
region construction. Then, we extract three groups of quality-related features,
including: 1) geometry density features; 2) color naturalness features; 3)
angular consistency features. The designed quality-aware features draw on both
the cognitive characteristics of the human brain and naturalness regularities,
capturing the most vital aspects of distorted 3D point clouds.
Extensive experiments on several publicly available subjective point cloud
quality databases validate that our proposed SGR can compete with
state-of-the-art full-reference, reduced-reference, and no-reference quality
assessment algorithms.
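As a rough illustration of the pipeline the abstract describes (normal-guided keypoint resampling, local region construction, and the three feature groups), a minimal sketch follows. The neighborhood sizes, keep ratio, and concrete feature definitions are illustrative assumptions, not the authors' settings.

# Minimal, illustrative sketch of SGR-style pre-processing and features.
# Assumptions: points (N,3), colors (N,3) in [0,1], unit normals (N,3) as numpy arrays;
# thresholds and feature definitions are placeholders, not the paper's values.
import numpy as np
from scipy.spatial import cKDTree

def resample_keypoints(points, normals, k=16, keep_ratio=0.1):
    """Keep points whose local normal variation is largest (structure-rich regions)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)                       # k nearest neighbors per point
    # Mean angular deviation between each point's normal and its neighbors' normals.
    cos_sim = np.einsum('ij,ikj->ik', normals, normals[idx]).clip(-1.0, 1.0)
    variation = np.mean(np.arccos(cos_sim), axis=1)
    n_keep = max(1, int(keep_ratio * len(points)))
    keypoints = np.argsort(variation)[-n_keep:]            # indices of structure keypoints
    return keypoints, tree

def local_region_features(points, colors, normals, keypoints, tree, k=32):
    """Toy per-region features: geometry density, a color-naturalness proxy,
    and angular consistency of normals."""
    feats = []
    for i in keypoints:
        _, nbr = tree.query(points[i], k=k)                # local region around the keypoint
        region = points[nbr]
        # 1) geometry density: mean distance to the keypoint (smaller = denser).
        density = np.mean(np.linalg.norm(region - points[i], axis=1))
        # 2) color naturalness proxy: mean/std of luminance in the region.
        lum = colors[nbr] @ np.array([0.299, 0.587, 0.114])
        # 3) angular consistency: spread of neighbor normals w.r.t. the keypoint normal.
        cos_sim = (normals[nbr] @ normals[i]).clip(-1.0, 1.0)
        angle_spread = np.std(np.arccos(cos_sim))
        feats.append([density, lum.mean(), lum.std(), angle_spread])
    return np.asarray(feats)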
Related papers
- Activating Frequency and ViT for 3D Point Cloud Quality Assessment
without Reference [0.49157446832511503]
We propose a no-reference quality metric for a given 3D point cloud (3D-PC).
To map the input attributes to a quality score, we use a lightweight hybrid deep model combining a Deformable Convolutional Network (DCN) and Vision Transformers (ViT).
The results show that our approach outperforms state-of-the-art NR-PCQA measures and even some FR-PCQA on PointXR.
arXiv Detail & Related papers (2023-12-10T19:13:34Z) - PointHPS: Cascaded 3D Human Pose and Shape Estimation from Point Clouds [99.60575439926963]
We propose a principled framework, PointHPS, for accurate 3D HPS from point clouds captured in real-world settings.
PointHPS iteratively refines point features through a cascaded architecture.
Extensive experiments demonstrate that PointHPS, with its powerful point feature extraction and processing scheme, outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-08-28T11:10:14Z) - Reduced-Reference Quality Assessment of Point Clouds via
Content-Oriented Saliency Projection [17.983188216548005]
Dense 3D point clouds are increasingly exploited to represent visual objects in place of traditional images and videos.
We propose a novel and efficient Reduced-Reference quality metric for point clouds.
arXiv Detail & Related papers (2023-01-18T18:00:29Z) - PCQA-GRAPHPOINT: Efficients Deep-Based Graph Metric For Point Cloud
Quality Assessment [11.515951211296361]
3D Point Clouds (PC) have emerged as a promising solution and effective means to display 3D visual information.
This paper introduces a novel and efficient objective metric for point cloud quality assessment, learning intrinsic local dependencies with a Graph Neural Network (GNN).
The results demonstrate the effectiveness and reliability of our solution compared to state-of-the-art metrics.
arXiv Detail & Related papers (2022-11-04T13:45:54Z) - MM-PCQA: Multi-Modal Learning for No-reference Point Cloud Quality
Assessment [32.495387943305204]
We propose a novel no-reference point cloud quality assessment (NR-PCQA) metric in a multi-modal fashion.
Specifically, we split the point clouds into sub-models to represent local geometry distortions such as point shift and down-sampling.
The sub-models and projected images are then encoded with point-based and image-based neural networks, respectively.
arXiv Detail & Related papers (2022-09-01T06:11:12Z) - Evaluating Point Cloud from Moving Camera Videos: A No-Reference Metric [58.309735075960745]
This paper explores the way of dealing with point cloud quality assessment (PCQA) tasks via video quality assessment (VQA) methods.
We generate the captured videos by rotating the camera around the point clouds through several circular pathways.
We extract both spatial and temporal quality-aware features from the selected key frames and the video clips using trainable 2D-CNN and pre-trained 3D-CNN models.
arXiv Detail & Related papers (2022-08-30T08:59:41Z) - AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z) - No-Reference Quality Assessment for Colored Point Cloud and Mesh Based
on Natural Scene Statistics [36.017914479449864]
We propose an NSS-based no-reference quality assessment metric for colored 3D models.
Our method is mainly validated on the colored point cloud quality assessment database (SJTU-PCQA) and the colored mesh quality assessment database (CMDM).
arXiv Detail & Related papers (2021-07-05T14:03:15Z) - PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object
Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches suffer from sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, namely PC-RGNN, dealing with such challenges by two specific solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z) - Reduced Reference Perceptual Quality Model and Application to Rate
Control for 3D Point Cloud Compression [61.110938359555895]
In rate-distortion optimization, the encoder settings are determined by maximizing a reconstruction quality measure subject to a constraint on the bit rate.
We propose a linear perceptual quality model whose variables are the V-PCC geometry and color quantization parameters.
Subjective quality tests with 400 compressed 3D point clouds show that the proposed model correlates well with the mean opinion score.
We show that for the same target bit rate, rate-distortion optimization based on the proposed model offers higher perceptual quality than rate-distortion optimization based on exhaustive search with a point-to-point objective quality metric.
arXiv Detail & Related papers (2020-11-25T12:42:02Z)