QD-PCQA: Quality-Aware Domain Adaptation for Point Cloud Quality Assessment
- URL: http://arxiv.org/abs/2603.03726v1
- Date: Wed, 04 Mar 2026 04:58:07 GMT
- Title: QD-PCQA: Quality-Aware Domain Adaptation for Point Cloud Quality Assessment
- Authors: Guohua Zhang, Jian Jin, Meiqin Liu, Chao Yao, Weisi Lin,
- Abstract summary: No-Reference Point Cloud Quality Assessment (NR-PCQA) still struggles with generalization. The Human Visual System (HVS) drives perceptual quality assessment independently of media types. We propose a novel Quality-aware Domain adaptation framework for PCQA, termed QD-PCQA.
- Score: 59.63956655216264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: No-Reference Point Cloud Quality Assessment (NR-PCQA) still struggles with generalization, primarily due to the scarcity of annotated point cloud datasets. Since the Human Visual System (HVS) drives perceptual quality assessment independently of media types, prior knowledge on quality learned from images can be repurposed for point clouds. This insight motivates adopting Unsupervised Domain Adaptation (UDA) to transfer quality-relevant priors from labeled images to unlabeled point clouds. However, existing UDA-based PCQA methods often overlook key characteristics of perceptual quality, such as sensitivity to quality ranking and quality-aware feature alignment, thereby limiting their effectiveness. To address these issues, we propose a novel Quality-aware Domain adaptation framework for PCQA, termed QD-PCQA. The framework comprises two main components: i) a Rank-weighted Conditional Alignment (RCA) strategy that aligns features under consistent quality levels and adaptively emphasizes misranked samples to reinforce perceptual quality ranking awareness; and ii) a Quality-guided Feature Augmentation (QFA) strategy, which includes quality-guided style mixup, multi-layer extension, and dual-domain augmentation modules to augment perceptual feature alignment. Extensive cross-domain experiments demonstrate that QD-PCQA significantly improves generalization in NR-PCQA tasks. The code is available at https://github.com/huhu-code/QD-PCQA.
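The quality-guided style mixup in QFA is not detailed in the abstract; a plausible reading, borrowed from AdaIN-style feature-statistics mixing common in domain adaptation, is sketched below. The function name `style_mixup`, the channel-wise statistics, and the mixing coefficient `lam` are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def style_mixup(f_src, f_tgt, lam):
    """Mix channel-wise feature statistics (style) of source and target
    features, shape (C, N), keeping the source content. Hypothetical
    sketch of a quality-guided style mixup step."""
    mu_s = f_src.mean(axis=1, keepdims=True)
    sig_s = f_src.std(axis=1, keepdims=True) + 1e-6
    mu_t = f_tgt.mean(axis=1, keepdims=True)
    sig_t = f_tgt.std(axis=1, keepdims=True) + 1e-6
    # Interpolate the style statistics; lam could be set from predicted quality.
    mu_mix = lam * mu_s + (1.0 - lam) * mu_t
    sig_mix = lam * sig_s + (1.0 - lam) * sig_t
    # Re-normalize source content with the mixed style.
    return sig_mix * (f_src - mu_s) / sig_s + mu_mix
```

With `lam=1.0` the source features pass through unchanged; with `lam=0.0` they adopt the target's channel statistics, which is the usual behavior of statistics-level style transfer.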
Related papers
- Life-IQA: Boosting Blind Image Quality Assessment through GCN-enhanced Layer Interaction and MoE-based Feature Decoupling [53.74410422225995]
Blind image quality assessment (BIQA) plays a crucial role in evaluating and optimizing visual experience. Most existing BIQA approaches fuse shallow and deep features extracted from backbone networks while overlooking their unequal contributions to quality prediction. This paper investigates the contributions of shallow and deep features to BIQA, and proposes an effective quality feature decoding framework via GCN-enhanced layer interaction and MoE-based feature decoupling, termed Life-IQA.
arXiv Detail & Related papers (2025-11-24T11:59:55Z)
- No-Reference Point Cloud Quality Assessment via Graph Convolutional Network [89.12589881881082]
Three-dimensional (3D) point cloud, as an emerging visual media format, is increasingly favored by consumers.
Point clouds inevitably suffer from quality degradation and information loss through multimedia communication systems.
We propose a novel no-reference PCQA method by using a graph convolutional network (GCN) to characterize the mutual dependencies of multi-view 2D projected image contents.
arXiv Detail & Related papers (2024-11-12T11:39:05Z)
- Few-Shot Image Quality Assessment via Adaptation of Vision-Language Models [93.91086467402323]
The Gradient-Regulated Meta-Prompt IQA Framework (GRMP-IQA) is designed to efficiently adapt the vision-language pre-trained model CLIP to IQA tasks. GRMP-IQA consists of two core modules: (i) a Meta-Prompt Pre-training Module and (ii) Quality-Aware Gradient Regularization.
arXiv Detail & Related papers (2024-09-09T07:26:21Z)
- Full-reference Point Cloud Quality Assessment Using Spectral Graph Wavelets [29.126056066012264]
Point clouds in 3D applications frequently experience quality degradation during processing, e.g., scanning and compression.
This paper introduces a full-reference (FR) PCQA method utilizing spectral graph wavelets (SGWs).
To our knowledge, this is the first study to introduce SGWs for PCQA.
arXiv Detail & Related papers (2024-06-14T06:59:54Z)
- Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA).
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
arXiv Detail & Related papers (2024-03-15T07:16:07Z)
- Simple Baselines for Projection-based Full-reference and No-reference Point Cloud Quality Assessment [60.2709006613171]
We propose simple baselines for projection-based point cloud quality assessment (PCQA).
We use multi-projections obtained via a common cube-like projection process from the point clouds for both full-reference (FR) and no-reference (NR) PCQA tasks.
Taking part in the ICIP 2023 PCVQA Challenge, we succeeded in achieving the top spot in four out of the five competition tracks.
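The "common cube-like projection process" mentioned above can be approximated by rendering orthographic depth maps from the six faces of the point cloud's bounding cube. The sketch below is an assumption for illustration (the function name `cube_depth_maps` and the plain z-buffer scheme are not taken from the challenge entry's code):

```python
import numpy as np

def cube_depth_maps(points, res=64):
    """Orthographically project a point cloud of shape (N, 3) onto the six
    faces of its bounding cube, yielding one depth map per face.
    Hypothetical helper, not the authors' exact projection pipeline."""
    # Normalize coordinates into the unit cube [0, 1]^3.
    p = (points - points.min(axis=0)) / (np.ptp(points, axis=0).max() + 1e-9)
    maps = np.full((6, res, res), np.inf)
    # For each axis triple (u, v, d): (u, v) index the face pixels,
    # d is the viewing (depth) direction; each axis gives two opposite faces.
    for a, (u, v, d) in enumerate([(0, 1, 2), (1, 2, 0), (2, 0, 1)]):
        ui = np.clip((p[:, u] * (res - 1)).astype(int), 0, res - 1)
        vi = np.clip((p[:, v] * (res - 1)).astype(int), 0, res - 1)
        # Keep the nearest depth per pixel (z-buffer) on both opposite faces.
        np.minimum.at(maps[2 * a], (ui, vi), p[:, d])
        np.minimum.at(maps[2 * a + 1], (ui, vi), 1.0 - p[:, d])
    return maps
```

Pixels hit by no point remain at `inf`; a downstream IQA model would typically replace them with a background value before feeding the maps to a 2D backbone.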
arXiv Detail & Related papers (2023-10-26T04:42:57Z)
- No-Reference Point Cloud Quality Assessment via Weighted Patch Quality Prediction [19.128878108831287]
We propose a no-reference point cloud quality assessment (NR-PCQA) method with local area correlation analysis capability, denoted as COPP-Net.
More specifically, we split a point cloud into patches, generate texture and structure features for each patch, and fuse them into patch features to predict patch quality.
Experimental results show that our method outperforms the state-of-the-art benchmark NR-PCQA methods.
arXiv Detail & Related papers (2023-05-13T03:20:33Z)
- Progressive Knowledge Transfer Based on Human Visual Perception Mechanism for Perceptual Quality Assessment of Point Clouds [21.50682830021656]
A progressive knowledge transfer based on human visual perception mechanism for perceptual quality assessment of point clouds (PKT-PCQA) is proposed.
Experiments on three large, independent point cloud assessment datasets show that the proposed no-reference PKT-PCQA network achieves better or equivalent performance.
arXiv Detail & Related papers (2022-11-30T00:27:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.