Progressive Knowledge Transfer Based on Human Visual Perception
Mechanism for Perceptual Quality Assessment of Point Clouds
- URL: http://arxiv.org/abs/2211.16646v1
- Date: Wed, 30 Nov 2022 00:27:58 GMT
- Title: Progressive Knowledge Transfer Based on Human Visual Perception
Mechanism for Perceptual Quality Assessment of Point Clouds
- Authors: Qi Liu, Yiyun Liu, Honglei Su, Hui Yuan, and Raouf Hamzaoui
- Abstract summary: A progressive knowledge transfer based on human visual perception mechanism for perceptual quality assessment of point clouds (PKT-PCQA) is proposed.
Experiments on three large and independent point cloud assessment datasets show that the proposed no-reference PKT-PCQA network achieves better than or equivalent performance.
- Score: 21.50682830021656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the wide applications of colored point cloud in many fields, point cloud
perceptual quality assessment plays a vital role in the visual communication
systems owing to the existence of quality degradations introduced in various
stages. However, existing point cloud quality assessment methods ignore the
mechanism of the human visual system (HVS), which has an important impact on
the accuracy of perceptual quality assessment. In this paper, a progressive
knowledge transfer based on human visual perception mechanism for perceptual
quality assessment of point clouds (PKT-PCQA) is proposed. The PKT-PCQA merges
local features from neighboring regions and global features extracted from
graph spectrum. Taking into account the HVS properties, the spatial and channel
attention mechanism is also considered in PKT-PCQA. In addition, inspired by the
hierarchical perception system of human brains, PKT-PCQA adopts a progressive
knowledge transfer to convert the coarse-grained quality classification
knowledge to the fine-grained quality prediction task. Experiments on three
large and independent point cloud assessment datasets show that the proposed
no-reference PKT-PCQA network achieves better than or equivalent performance
compared with state-of-the-art full-reference quality assessment methods, and
outperforms existing no-reference quality assessment networks.
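One simple, commonly used way to realize the coarse-to-fine conversion described in the abstract (turning quality-class knowledge into a continuous quality prediction) is to take the expectation of per-class score centers under the classifier's soft probabilities. The snippet below is a hypothetical numpy sketch of that idea, not the authors' implementation; the class centers and logits are made-up values on a 1-5 MOS scale.

```python
import numpy as np

def coarse_to_fine_score(class_logits, class_centers):
    """Map coarse quality-class logits to a fine-grained scalar score.

    Illustration only: softmax the logits, then take the expected
    quality score over the per-class score centers.
    """
    z = class_logits - np.max(class_logits)      # shift for numerical stability
    probs = np.exp(z) / np.sum(np.exp(z))        # softmax probabilities
    return float(np.dot(probs, class_centers))   # expectation = fine score

# Example: 5 coarse classes ("bad" .. "excellent") on a 1-5 MOS scale.
logits = np.array([0.1, 0.3, 2.0, 1.0, 0.2])
centers = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
score = coarse_to_fine_score(logits, centers)
```

In PKT-PCQA itself the transfer is learned progressively inside the network; this only shows how a coarse classification output can yield a continuous prediction.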
Related papers
- LMM-PCQA: Assisting Point Cloud Quality Assessment with LMM [83.98966702271576]
This study aims to investigate the feasibility of imparting Point Cloud Quality Assessment (PCQA) knowledge to large multi-modality models (LMMs)
We transform quality labels into textual descriptions during the fine-tuning phase, enabling LMMs to derive quality rating logits from 2D projections of point clouds.
Our experimental results affirm the effectiveness of our approach, showcasing a novel integration of LMMs into PCQA.
arXiv Detail & Related papers (2024-04-28T14:47:09Z)
- Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA)
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
arXiv Detail & Related papers (2024-03-15T07:16:07Z)
- Simple Baselines for Projection-based Full-reference and No-reference Point Cloud Quality Assessment [60.2709006613171]
We propose simple baselines for projection-based point cloud quality assessment (PCQA)
We use multi-projections obtained via a common cube-like projection process from the point clouds for both full-reference (FR) and no-reference (NR) PCQA tasks.
Taking part in the ICIP 2023 PCVQA Challenge, we succeeded in achieving the top spot in four out of the five competition tracks.
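The cube-like projection process mentioned above can be pictured as rendering six orthographic depth maps, one per axis-aligned cube face. The following is a minimal numpy approximation of that idea (nearest-depth splatting onto a fixed grid), not the process used in the paper; the resolution and normalization scheme are assumed for illustration.

```python
import numpy as np

def cube_projections(points, res=64):
    """Render six orthographic depth maps of a point cloud (sketch).

    points: (N, 3) array. Points are normalized to the unit cube, then
    splatted onto each of the six axis-aligned faces, keeping the
    nearest depth per pixel (np.inf marks empty pixels).
    """
    lo, hi = points.min(0), points.max(0)
    p = (points - lo) / (hi - lo + 1e-9)          # normalize to [0, 1]^3
    maps = []
    for axis in range(3):
        uv = [a for a in range(3) if a != axis]   # the two in-plane axes
        for sign in (1, -1):                      # front and back faces
            depth = p[:, axis] if sign == 1 else 1.0 - p[:, axis]
            img = np.full((res, res), np.inf)
            u = np.clip((p[:, uv[0]] * (res - 1)).astype(int), 0, res - 1)
            v = np.clip((p[:, uv[1]] * (res - 1)).astype(int), 0, res - 1)
            np.minimum.at(img, (u, v), depth)     # keep nearest depth
            maps.append(img)
    return maps
```

Projection-based FR/NR metrics then score these 2D maps with image quality models, avoiding direct 3D processing.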
arXiv Detail & Related papers (2023-10-26T04:42:57Z)
- No-Reference Point Cloud Quality Assessment via Weighted Patch Quality Prediction [19.128878108831287]
We propose a no-reference point cloud quality assessment (NR-PCQA) method with local area correlation analysis capability, denoted as COPP-Net.
More specifically, we split a point cloud into patches, generate texture and structure features for each patch, and fuse them into patch features to predict patch quality.
Experimental results show that our method outperforms the state-of-the-art benchmark NR-PCQA methods.
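The split-predict-fuse pipeline described above ends with pooling per-patch predictions into one score. A hypothetical numpy sketch of weighted patch pooling is shown below; the weights (e.g. produced by a learned correlation branch) and the normalization are assumptions for illustration, not COPP-Net's actual fusion rule.

```python
import numpy as np

def pooled_quality(patch_scores, patch_weights):
    """Fuse per-patch quality predictions into one overall score.

    Sketch: normalize the (non-negative) patch weights to sum to one,
    then take the weighted average of the patch-level predictions.
    """
    w = np.asarray(patch_weights, dtype=float)
    w = w / w.sum()                       # normalize weights
    return float(np.dot(w, patch_scores)) # weighted average
```

With equal weights this reduces to the plain mean of the patch scores; unequal weights let reliable or salient patches dominate the final prediction.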
arXiv Detail & Related papers (2023-05-13T03:20:33Z)
- Reduced-Reference Quality Assessment of Point Clouds via Content-Oriented Saliency Projection [17.983188216548005]
Dense 3D point clouds are increasingly used to represent visual objects in place of traditional images or videos.
We propose a novel and efficient Reduced-Reference quality metric for point clouds.
arXiv Detail & Related papers (2023-01-18T18:00:29Z)
- Point Cloud Quality Assessment using 3D Saliency Maps [37.290843791053256]
We propose an effective full-reference PCQA metric which makes the first attempt to utilize the saliency information to facilitate quality prediction.
Specifically, we first propose a projection-based point cloud saliency map generation method, in which depth information is introduced to better reflect the geometric characteristics of point clouds.
Finally, a saliency-based pooling strategy is proposed to generate the final quality score.
arXiv Detail & Related papers (2022-09-30T13:59:09Z)
- Blind Quality Assessment of 3D Dense Point Clouds with Structure Guided Resampling [71.68672977990403]
We propose an objective point cloud quality index with Structure Guided Resampling (SGR) to automatically evaluate the perceptually visual quality of 3D dense point clouds.
The proposed SGR is a general-purpose blind quality assessment method without the assistance of any reference information.
arXiv Detail & Related papers (2022-08-31T02:42:55Z)
- Evaluating Point Cloud from Moving Camera Videos: A No-Reference Metric [58.309735075960745]
This paper explores the way of dealing with point cloud quality assessment (PCQA) tasks via video quality assessment (VQA) methods.
We generate the captured videos by rotating the camera around the point clouds through several circular pathways.
We extract both spatial and temporal quality-aware features from the selected key frames and the video clips using trainable 2D-CNN and pre-trained 3D-CNN models.
arXiv Detail & Related papers (2022-08-30T08:59:41Z)
- Textural-Structural Joint Learning for No-Reference Super-Resolution Image Quality Assessment [59.91741119995321]
We develop a dual stream network to jointly explore the textural and structural information for quality prediction, dubbed TSNet.
By mimicking the human vision system (HVS) that pays more attention to the significant areas of the image, we develop the spatial attention mechanism to make the visual-sensitive areas more distinguishable.
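A spatial attention mechanism of the kind described above can be reduced to a small sketch: derive a per-location mask from the features and use it to reweight them. The numpy code below is a hypothetical illustration (sigmoid over the channel-wise mean activation), not TSNet's learned attention module.

```python
import numpy as np

def spatial_attention(feat):
    """Reweight feature maps by a spatial attention mask (sketch).

    feat: array of shape (C, H, W). The mask is a sigmoid over the
    channel-wise mean activation, emphasizing salient locations.
    """
    mask = 1.0 / (1.0 + np.exp(-feat.mean(axis=0)))  # (H, W), values in (0, 1)
    return feat * mask[None, :, :]                   # broadcast over channels
```

In a trained network the mask would come from learned convolutions rather than a fixed mean, but the reweighting step is the same.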
Experimental results show that the proposed TSNet predicts visual quality more accurately than state-of-the-art IQA methods and demonstrates better consistency with human perception.
arXiv Detail & Related papers (2022-05-27T09:20:06Z)
- No-Reference Point Cloud Quality Assessment via Domain Adaptation [31.280188860021248]
We present a novel no-reference quality assessment metric, the image transferred point cloud quality assessment (IT-PCQA) for 3D point clouds.
In particular, we treat natural images as the source domain and point clouds as the target domain, and infer point cloud quality via unsupervised adversarial domain adaptation.
Experimental results show that the proposed method can achieve higher performance than traditional no-reference metrics, even comparable results with full-reference metrics.
arXiv Detail & Related papers (2021-12-06T08:20:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information provided and is not responsible for any consequences of its use.