Simple Baselines for Projection-based Full-reference and No-reference
Point Cloud Quality Assessment
- URL: http://arxiv.org/abs/2310.17147v1
- Date: Thu, 26 Oct 2023 04:42:57 GMT
- Title: Simple Baselines for Projection-based Full-reference and No-reference
Point Cloud Quality Assessment
- Authors: Zicheng Zhang, Yingjie Zhou, Wei Sun, Xiongkuo Min, Guangtao Zhai
- Abstract summary: We propose simple baselines for projection-based point cloud quality assessment (PCQA)
We use multi-projections obtained via a common cube-like projection process from the point clouds for both full-reference (FR) and no-reference (NR) PCQA tasks.
Taking part in the ICIP 2023 PCVQA Challenge, we succeeded in achieving the top spot in four out of the five competition tracks.
- Score: 60.2709006613171
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point clouds are widely used in 3D content representation and have various
applications in multimedia. However, compression and simplification processes
inevitably result in the loss of quality-aware information under storage and
bandwidth constraints. Therefore, there is an increasing need for effective
methods to quantify the degree of distortion in point clouds. In this paper, we
propose simple baselines for projection-based point cloud quality assessment
(PCQA) to tackle this challenge. We use multi-projections obtained via a common
cube-like projection process from the point clouds for both full-reference (FR)
and no-reference (NR) PCQA tasks. Quality-aware features are extracted with
popular vision backbones. The FR quality representation is computed as the
similarity between the feature maps of reference and distorted projections,
while the NR quality representation is obtained by simply squeezing the feature
maps of distorted projections with average pooling. The corresponding quality
representations are regressed into visual quality scores by fully-connected
layers. Taking part in the ICIP 2023 PCVQA Challenge, we succeeded in achieving
the top spot in four out of the five competition tracks.
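The FR/NR pipeline described above (feature maps from projections, similarity for FR, average pooling for NR, then a fully-connected regression) can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions: the function names, the per-channel cosine similarity used as the FR similarity measure, and the single linear layer are illustrative stand-ins, not the paper's exact formulation.

```python
import numpy as np

def fr_representation(ref_feat, dist_feat, eps=1e-8):
    """FR quality representation: channel-wise similarity between
    reference and distorted feature maps of shape (C, H, W).
    A per-channel cosine similarity is used here as an illustrative
    stand-in for the paper's similarity measure."""
    r = ref_feat.reshape(ref_feat.shape[0], -1)
    d = dist_feat.reshape(dist_feat.shape[0], -1)
    num = (r * d).sum(axis=1)
    den = np.linalg.norm(r, axis=1) * np.linalg.norm(d, axis=1) + eps
    return num / den  # shape (C,)

def nr_representation(dist_feat):
    """NR quality representation: squeeze the distorted feature maps
    with global average pooling over the spatial dimensions."""
    return dist_feat.mean(axis=(1, 2))  # shape (C,)

def regress_score(rep, weight, bias):
    """Map a quality representation to a scalar score with a single
    fully-connected (linear) layer; in practice this would be trained."""
    return float(rep @ weight + bias)
```

In the actual method, `ref_feat` and `dist_feat` would come from a pretrained vision backbone applied to the cube-like projections, and the FC layer would be trained to regress mean opinion scores.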
Related papers
- No-Reference Point Cloud Quality Assessment via Graph Convolutional Network [89.12589881881082]
Three-dimensional (3D) point cloud, as an emerging visual media format, is increasingly favored by consumers.
Point clouds inevitably suffer from quality degradation and information loss through multimedia communication systems.
We propose a novel no-reference PCQA method by using a graph convolutional network (GCN) to characterize the mutual dependencies of multi-view 2D projected image contents.
arXiv Detail & Related papers (2024-11-12T11:39:05Z)
- Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA)
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
arXiv Detail & Related papers (2024-03-15T07:16:07Z)
- Activating Frequency and ViT for 3D Point Cloud Quality Assessment without Reference [0.49157446832511503]
We propose a no-reference quality metric for a given 3D point cloud (3D-PC).
To map the input attributes to a quality score, we use a lightweight hybrid deep model combining a Deformable Convolutional Network (DCN) and Vision Transformers (ViT).
The results show that our approach outperforms state-of-the-art NR-PCQA measures and even some FR-PCQA on PointXR.
arXiv Detail & Related papers (2023-12-10T19:13:34Z)
- Reduced-Reference Quality Assessment of Point Clouds via Content-Oriented Saliency Projection [17.983188216548005]
Many dense 3D point clouds are now used to represent visual objects in place of traditional images or videos.
We propose a novel and efficient Reduced-Reference quality metric for point clouds.
arXiv Detail & Related papers (2023-01-18T18:00:29Z)
- MM-PCQA: Multi-Modal Learning for No-reference Point Cloud Quality Assessment [32.495387943305204]
We propose a novel no-reference point cloud quality assessment (NR-PCQA) metric in a multi-modal fashion.
Specifically, we split the point clouds into sub-models to represent local geometry distortions such as point shift and down-sampling.
To achieve the goals, the sub-models and projected images are encoded with point-based and image-based neural networks.
arXiv Detail & Related papers (2022-09-01T06:11:12Z)
- Blind Quality Assessment of 3D Dense Point Clouds with Structure Guided Resampling [71.68672977990403]
We propose an objective point cloud quality index with Structure Guided Resampling (SGR) to automatically evaluate the perceptually visual quality of 3D dense point clouds.
The proposed SGR is a general-purpose blind quality assessment method without the assistance of any reference information.
arXiv Detail & Related papers (2022-08-31T02:42:55Z)
- Evaluating Point Cloud from Moving Camera Videos: A No-Reference Metric [58.309735075960745]
This paper explores the way of dealing with point cloud quality assessment (PCQA) tasks via video quality assessment (VQA) methods.
We generate the captured videos by rotating the camera around the point clouds through several circular pathways.
We extract both spatial and temporal quality-aware features from the selected key frames and the video clips using trainable 2D-CNN and pre-trained 3D-CNN models.
arXiv Detail & Related papers (2022-08-30T08:59:41Z)
- Reduced Reference Perceptual Quality Model and Application to Rate Control for 3D Point Cloud Compression [61.110938359555895]
In rate-distortion optimization, the encoder settings are determined by maximizing a reconstruction quality measure subject to a constraint on the bit rate.
We propose a linear perceptual quality model whose variables are the V-PCC geometry and color quantization parameters.
Subjective quality tests with 400 compressed 3D point clouds show that the proposed model correlates well with the mean opinion score.
We show that for the same target bit rate, rate-distortion optimization based on the proposed model offers higher perceptual quality than rate-distortion optimization based on exhaustive search with a point-to-point objective quality metric.
arXiv Detail & Related papers (2020-11-25T12:42:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.