Point Cloud Quality Assessment using 3D Saliency Maps
- URL: http://arxiv.org/abs/2209.15475v1
- Date: Fri, 30 Sep 2022 13:59:09 GMT
- Title: Point Cloud Quality Assessment using 3D Saliency Maps
- Authors: Zhengyu Wang, Yujie Zhang, Qi Yang, Yiling Xu, Jun Sun, and Shan Liu
- Abstract summary: We propose an effective full-reference PCQA metric which makes the first attempt to utilize the saliency information to facilitate quality prediction.
Specifically, we first propose a projection-based point cloud saliency map generation method, in which depth information is introduced to better reflect the geometric characteristics of point clouds.
Finally, a saliency-based pooling strategy is proposed to generate the final quality score.
- Score: 37.290843791053256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud quality assessment (PCQA) has become an appealing research field
in recent years. Considering the importance of saliency detection in quality
assessment, we propose an effective full-reference PCQA metric which makes the
first attempt to utilize the saliency information to facilitate quality
prediction, called point cloud quality assessment using 3D saliency maps
(PQSM). Specifically, we first propose a projection-based point cloud saliency
map generation method, in which depth information is introduced to better
reflect the geometric characteristics of point clouds. Then, we construct point
cloud local neighborhoods to derive three structural descriptors to indicate
the geometry, color and saliency discrepancies. Finally, a saliency-based
pooling strategy is proposed to generate the final quality score. Extensive
experiments are performed on four independent PCQA databases. The results
demonstrate that the proposed PQSM achieves competitive performance compared to
multiple state-of-the-art PCQA metrics.
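As a rough illustration of the saliency-based pooling strategy described in the abstract, the following is a minimal sketch: the weighting scheme and the distortion/saliency values are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def saliency_weighted_pooling(local_distortions, saliency):
    """Pool per-point distortion values into a single quality score,
    weighting each local measurement by its (normalized) saliency."""
    w = saliency / (saliency.sum() + 1e-12)  # normalize saliency into weights
    return float(np.sum(w * local_distortions))

# Toy example: 5 points with local distortion and saliency values.
d = np.array([0.1, 0.4, 0.2, 0.05, 0.3])
s = np.array([0.9, 0.1, 0.5, 0.8, 0.2])
score = saliency_weighted_pooling(d, s)
```

The intuition is that distortions in highly salient regions should dominate the final score, since those regions attract the viewer's attention.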
Related papers
- Full-reference Point Cloud Quality Assessment Using Spectral Graph Wavelets [29.126056066012264]
Point clouds in 3D applications frequently experience quality degradation during processing, e.g., scanning and compression.
This paper introduces a full-reference (FR) PCQA method utilizing spectral graph wavelets (SGWs).
To our knowledge, this is the first study to introduce SGWs for PCQA.
arXiv Detail & Related papers (2024-06-14T06:59:54Z)
- Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA).
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
arXiv Detail & Related papers (2024-03-15T07:16:07Z)
- PointPCA+: Extending PointPCA objective quality assessment metric [4.674509064536047]
PointPCA+ is a PCQA metric built on a set of perceptually relevant descriptors.
PointPCA+ applies PCA only to the geometry data while enriching the existing geometry and texture descriptors, which are computed more efficiently.
Tests show that PointPCA+ achieves high predictive performance against subjective ground-truth scores obtained from publicly available datasets.
arXiv Detail & Related papers (2023-11-23T10:05:31Z)
- Simple Baselines for Projection-based Full-reference and No-reference Point Cloud Quality Assessment [60.2709006613171]
We propose simple baselines for projection-based point cloud quality assessment (PCQA).
We use multi-projections obtained via a common cube-like projection process from the point clouds for both full-reference (FR) and no-reference (NR) PCQA tasks.
Taking part in the ICIP 2023 PCVQA Challenge, we took first place in four of the five competition tracks.
arXiv Detail & Related papers (2023-10-26T04:42:57Z)
- No-Reference Point Cloud Quality Assessment via Weighted Patch Quality Prediction [19.128878108831287]
We propose a no-reference point cloud quality assessment (NR-PCQA) method, denoted COPP-Net, with the capability of analyzing correlations among local areas.
More specifically, we split a point cloud into patches, generate texture and structure features for each patch, and fuse them into patch features to predict patch quality.
Experimental results show that our method outperforms the state-of-the-art benchmark NR-PCQA methods.
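The patch-based pipeline above can be sketched as follows. This is an illustrative stand-in, not COPP-Net itself: the random patch sampling and the weighted fusion of patch scores are assumptions standing in for the learned components.

```python
import numpy as np

def split_into_patches(points, n_patches=4, patch_size=128, seed=0):
    """Sample patch centers at random and gather the nearest points
    around each center (a stand-in for learned patch extraction)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), n_patches, replace=False)]
    patches = []
    for c in centers:
        dist = np.linalg.norm(points - c, axis=1)  # distance to center
        idx = np.argsort(dist)[:patch_size]        # nearest patch_size points
        patches.append(points[idx])
    return patches

def pooled_quality(patch_scores, patch_weights):
    """Fuse per-patch quality predictions with correlation-derived
    weights (here simply a normalized weighted mean)."""
    w = np.asarray(patch_weights, dtype=float)
    w /= w.sum()
    return float(np.dot(w, patch_scores))
```

In the actual method, the per-patch scores and weights would come from trained networks; the sketch only shows how patch-level predictions are aggregated into one score.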
arXiv Detail & Related papers (2023-05-13T03:20:33Z)
- MM-PCQA: Multi-Modal Learning for No-reference Point Cloud Quality Assessment [32.495387943305204]
We propose a novel no-reference point cloud quality assessment (NR-PCQA) metric in a multi-modal fashion.
Specifically, we split the point clouds into sub-models to represent local geometry distortions such as point shift and down-sampling.
To achieve the goals, the sub-models and projected images are encoded with point-based and image-based neural networks.
arXiv Detail & Related papers (2022-09-01T06:11:12Z)
- Blind Quality Assessment of 3D Dense Point Clouds with Structure Guided Resampling [71.68672977990403]
We propose an objective point cloud quality index with Structure Guided Resampling (SGR) to automatically evaluate the perceptually visual quality of 3D dense point clouds.
The proposed SGR is a general-purpose blind quality assessment method without the assistance of any reference information.
arXiv Detail & Related papers (2022-08-31T02:42:55Z)
- Evaluating Point Cloud from Moving Camera Videos: A No-Reference Metric [58.309735075960745]
This paper explores the way of dealing with point cloud quality assessment (PCQA) tasks via video quality assessment (VQA) methods.
We generate the captured videos by rotating the camera around the point clouds through several circular pathways.
We extract both spatial and temporal quality-aware features from the selected key frames and the video clips through using trainable 2D-CNN and pre-trained 3D-CNN models.
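The circular capture setup described above can be sketched as generating camera positions on a circle around the object. The radius, height, and frame count below are illustrative assumptions, not the paper's actual rendering parameters.

```python
import math

def circular_camera_path(center, radius, height, n_frames=30):
    """Camera positions sampled along one circular pathway around the
    point cloud, at a fixed height above its center (illustrative)."""
    positions = []
    for k in range(n_frames):
        theta = 2.0 * math.pi * k / n_frames  # angle of the k-th frame
        positions.append((center[0] + radius * math.cos(theta),
                          center[1] + radius * math.sin(theta),
                          center[2] + height))
    return positions
```

Rendering the point cloud from each position in sequence yields the video whose frames the 2D-CNN and 3D-CNN feature extractors then consume.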
arXiv Detail & Related papers (2022-08-30T08:59:41Z)
- Reduced Reference Perceptual Quality Model and Application to Rate Control for 3D Point Cloud Compression [61.110938359555895]
In rate-distortion optimization, the encoder settings are determined by maximizing a reconstruction quality measure subject to a constraint on the bit rate.
We propose a linear perceptual quality model whose variables are the V-PCC geometry and color quantization parameters.
Subjective quality tests with 400 compressed 3D point clouds show that the proposed model correlates well with the mean opinion score.
We show that, for the same target bit rate, rate-distortion optimization based on the proposed model offers higher perceptual quality than rate-distortion optimization based on exhaustive search with a point-to-point objective quality metric.
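A linear model in the two quantization parameters, used to pick encoder settings under a bit budget, might look like the sketch below. The coefficients and the toy bitrate model are assumptions for illustration only; they are not the fitted values from the paper.

```python
def perceptual_quality(qp_geom, qp_color, a=-0.04, b=-0.03, c=5.0):
    """Linear perceptual quality model in the V-PCC geometry and color
    quantization parameters (illustrative coefficients: quality drops
    as either QP grows)."""
    return a * qp_geom + b * qp_color + c

def estimated_bits(qp_geom, qp_color):
    """Toy bitrate model: coarser quantization costs fewer bits."""
    return 1000.0 / qp_geom + 800.0 / qp_color

def best_qps(bit_budget, qp_range=range(20, 52)):
    """Pick the QP pair maximizing predicted quality subject to the
    bitrate constraint, via exhaustive search over a small grid."""
    best, best_q = None, float("-inf")
    for qg in qp_range:
        for qc in qp_range:
            if estimated_bits(qg, qc) <= bit_budget:
                q = perceptual_quality(qg, qc)
                if q > best_q:
                    best, best_q = (qg, qc), q
    return best, best_q
```

Because the quality model is linear and cheap to evaluate, this search is trivial compared to encoding and measuring every candidate configuration, which is the practical appeal of a closed-form perceptual model for rate control.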
arXiv Detail & Related papers (2020-11-25T12:42:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.