Uncertainty-aware No-Reference Point Cloud Quality Assessment
- URL: http://arxiv.org/abs/2401.08926v1
- Date: Wed, 17 Jan 2024 02:25:42 GMT
- Title: Uncertainty-aware No-Reference Point Cloud Quality Assessment
- Authors: Songlin Fan, Zixuan Guo, Wei Gao, Ge Li
- Abstract summary: This work presents the first probabilistic architecture for no-reference point cloud quality assessment (PCQA).
The proposed method can model the quality judging stochasticity of subjects through a tailored conditional variational autoencoder (CVAE).
Experiments indicate that our approach outperforms previous cutting-edge methods by a large margin and exhibits gratifying cross-dataset robustness.
- Score: 25.543217625958462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The evolution of compression and enhancement algorithms necessitates an
accurate quality assessment for point clouds. Previous works consistently
regard point cloud quality assessment (PCQA) as a MOS regression problem and
devise a deterministic mapping, ignoring the stochasticity in generating MOS
from subjective tests. Besides, the viewpoint switching of 3D point clouds in
subjective tests reinforces the judging stochasticity of different subjects
compared with traditional images. This work presents the first probabilistic
architecture for no-reference PCQA, motivated by the labeling process of
existing datasets. The proposed method can model the quality judging
stochasticity of subjects through a tailored conditional variational
autoencoder (CVAE) and produces multiple intermediate quality ratings. These
intermediate ratings simulate the judgments from different subjects and are
then integrated into an accurate quality prediction, mimicking the generation
process of a ground truth MOS. Specifically, our method incorporates a Prior
Module, a Posterior Module, and a Quality Rating Generator, where the former
two modules are introduced to model the judging stochasticity in subjective
tests, while the latter is developed to generate diverse quality ratings.
Extensive experiments indicate that our approach outperforms previous
cutting-edge methods by a large margin and exhibits gratifying cross-dataset
robustness.
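The sampling-and-aggregation idea in the abstract can be illustrated with a minimal sketch. All function names and shapes below are hypothetical stand-ins (the paper's actual Prior Module, Posterior Module, and Quality Rating Generator are learned networks); the sketch only shows the inference-time flow: draw several latents from a prior over judging stochasticity, turn each into an intermediate rating, then average them the way a ground-truth MOS is aggregated from subjects.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior(features):
    # Hypothetical stand-in for a learned Prior Module: maps features of a
    # distorted point cloud to the mean and log-variance of a latent
    # distribution modeling the subjects' judging stochasticity.
    return 0.0, np.log(features.var() + 1e-6)

def rating_generator(features, z):
    # Hypothetical stand-in for the Quality Rating Generator: one latent
    # sample yields one intermediate rating on a 1-5 scale, simulating a
    # single subject's judgment.
    return float(np.clip(features.mean() + z, 1.0, 5.0))

def predict_mos(features, n_subjects=15):
    # Draw several latents from the prior and average the resulting
    # ratings, mimicking how a ground-truth MOS is aggregated.
    mu, logvar = prior(features)
    std = np.exp(0.5 * logvar)
    ratings = [rating_generator(features, rng.normal(mu, std))
               for _ in range(n_subjects)]
    return float(np.mean(ratings))
```

At inference only the prior branch is needed; a posterior network conditioned on the ground-truth score is typically used during CVAE training.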
Related papers
- Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare [99.57567498494448]
We introduce Compare2Score, an all-around LMM-based no-reference IQA model.
During training, we generate scaled-up comparative instructions by comparing images from the same IQA dataset.
Experiments on nine IQA datasets validate that the Compare2Score effectively bridges text-defined comparative levels during training.
arXiv Detail & Related papers (2024-05-29T17:26:09Z) - Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA)
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z) - Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA)
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
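Contrastive pre-training of the kind CoPA builds on is commonly driven by an InfoNCE-style objective. The sketch below is a generic, illustrative version (not the paper's exact loss or multi-view fusion): it pulls an anchor embedding toward a positive view and pushes it away from negatives.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    # Illustrative InfoNCE loss on cosine similarities: the positive pair
    # should score higher than every (anchor, negative) pair.
    def sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([sim(anchor, positive)] +
                      [sim(anchor, n) for n in negatives]) / tau
    logits -= logits.max()  # subtract max for numerical stability
    return float(-np.log(np.exp(logits[0]) / np.exp(logits).sum()))
```

The loss is near zero when the anchor matches its positive view and grows when a negative is more similar than the positive.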
arXiv Detail & Related papers (2024-03-15T07:16:07Z) - PAME: Self-Supervised Masked Autoencoder for No-Reference Point Cloud Quality Assessment [34.256276774430575]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically predict the perceptual quality of point clouds without reference.
We propose a self-supervised pre-training framework using masked autoencoders (PAME) to help the model learn useful representations without labels.
Our method outperforms the state-of-the-art NR-PCQA methods on popular benchmarks in terms of prediction accuracy and generalizability.
arXiv Detail & Related papers (2024-03-15T07:01:33Z) - Adaptive Feature Selection for No-Reference Image Quality Assessment by Mitigating Semantic Noise Sensitivity [55.399230250413986]
We propose a Quality-Aware Feature Matching IQA Metric (QFM-IQM) to remove harmful semantic noise features from the upstream task.
Our approach achieves superior performance to the state-of-the-art NR-IQA methods on eight standard IQA datasets.
arXiv Detail & Related papers (2023-12-11T06:50:27Z) - QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights, for example, improves the absolute performance of the Llama 2 model by up to 15% points relative.
arXiv Detail & Related papers (2023-11-06T00:21:44Z) - Image Quality Assessment: Integrating Model-Centric and Data-Centric Approaches [20.931709027443706]
Learning-based image quality assessment (IQA) has made remarkable progress in the past decade.
Nearly all consider the two key components -- model and data -- in isolation.
arXiv Detail & Related papers (2022-07-29T16:23:57Z) - FUNQUE: Fusion of Unified Quality Evaluators [42.41484412777326]
Fusion-based quality assessment has emerged as a powerful method for developing high-performance quality models.
We propose FUNQUE, a quality model that fuses unified quality evaluators.
arXiv Detail & Related papers (2022-02-23T00:21:43Z) - Task-Specific Normalization for Continual Learning of Blind Image Quality Models [105.03239956378465]
We present a simple yet effective continual learning method for blind image quality assessment (BIQA)
The key step in our approach is to freeze all convolution filters of a pre-trained deep neural network (DNN) for an explicit promise of stability.
We assign each new IQA dataset (i.e., task) a prediction head, and load the corresponding normalization parameters to produce a quality score.
The final quality estimate is computed by a weighted summation of predictions from all heads with a lightweight $K$-means gating mechanism.
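The weighted summation of per-task heads can be sketched as follows. The gating shown here is a hypothetical simplification (distance to each task's stored centroid turned into a softmax weight); the paper's actual $K$-means gating may differ in detail.

```python
import numpy as np

def gated_quality(features, head_preds, centroids):
    # Hypothetical gating sketch: the distance from the image features to
    # each task's centroid becomes a softmax weight over the heads.
    d = np.linalg.norm(centroids - features, axis=1)
    w = np.exp(-d)
    w /= w.sum()
    # Final estimate is a weighted summation of all heads' predictions.
    return float(w @ head_preds)
```

When the features sit close to one task's centroid, that task's head dominates the final score.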
arXiv Detail & Related papers (2021-07-28T15:21:01Z) - Generating Adversarial Examples with an Optimized Quality [12.747258403133035]
Deep learning models are vulnerable to Adversarial Examples (AEs), carefully crafted samples designed to deceive those models.
Recent studies have introduced new adversarial attack methods, but none provided guaranteed quality for the crafted examples.
In this paper, we incorporate Image Quality Assessment (IQA) metrics into the design and generation process of AEs.
arXiv Detail & Related papers (2020-06-30T23:05:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.