The Worse The Better: Content-Aware Viewpoint Generation Network for Projection-related Point Cloud Quality Assessment
- URL: http://arxiv.org/abs/2502.11710v1
- Date: Mon, 17 Feb 2025 11:50:42 GMT
- Title: The Worse The Better: Content-Aware Viewpoint Generation Network for Projection-related Point Cloud Quality Assessment
- Authors: Zhiyong Su, Bingxu Xie, Zheng Li, Jincan Wu, Weiqing Li
- Abstract summary: This paper presents a novel content-aware viewpoint generation network (CAVGN) to learn better viewpoints.
To train the proposed CAVGN, we present a self-supervised viewpoint ranking network (SSVRN) that selects the viewpoint whose projected image has the worst quality.
Experimental results show that projection-related PCQA methods achieve higher performance using the viewpoints generated by the proposed CAVGN.
- Score: 9.810669754561546
- Abstract: Existing projection-related point cloud quality assessment (PCQA) methods render point clouds from a set of default, content-independent viewpoints. Through experimental studies, however, we observed that the final predicted quality scores are unstable, changing significantly across different viewpoint settings. Inspired by the "wooden barrel theory", this paper presents a novel content-aware viewpoint generation network (CAVGN) that learns better viewpoints by taking the distribution of geometric and attribute features of degraded point clouds into consideration. First, the proposed CAVGN extracts multi-scale geometric and texture features of the entire input point cloud. Then, for each default content-independent viewpoint, the extracted geometric and texture features are refined to focus on the corresponding visible part of the input point cloud. Finally, the refined geometric and texture features are concatenated to generate an optimized viewpoint. To train the proposed CAVGN, we present a self-supervised viewpoint ranking network (SSVRN) that selects the viewpoint with the worst-quality projected image, and use it to construct a default-optimized viewpoint dataset consisting of thousands of paired default viewpoints and corresponding optimized viewpoints. Experimental results show that projection-related PCQA methods achieve higher performance using the viewpoints generated by the proposed CAVGN.
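The abstract describes a three-stage data flow: extract multi-scale geometric and texture features from the whole degraded cloud, refine them per default viewpoint to focus on the visible part, and concatenate the refined features to regress an optimized viewpoint. The PyTorch snippet below is a minimal sketch of that flow only; the module choices, feature dimensions, and the unit-direction viewpoint parameterization are illustrative assumptions rather than the authors' CAVGN implementation, and the SSVRN training stage is omitted.

```python
# Minimal sketch of the CAVGN data flow described in the abstract.
# Shapes, module choices, and the viewpoint parameterization (a unit view
# direction) are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


class CAVGNSketch(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Point-wise encoders for geometry (xyz) and texture (rgb).
        self.geo_encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        self.tex_encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, feat_dim))
        # Refines pooled features for the part visible from one default viewpoint.
        self.refine = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim), nn.ReLU())
        # Regresses an optimized viewpoint (here: a 3D view direction).
        self.head = nn.Linear(feat_dim, 3)

    def forward(self, xyz, rgb, visible_mask):
        # xyz, rgb: (N, 3); visible_mask: (N,) bool for one default viewpoint.
        geo = self.geo_encoder(xyz)            # (N, F) geometric features
        tex = self.tex_encoder(rgb)            # (N, F) texture features
        feats = torch.cat([geo, tex], dim=-1)  # (N, 2F)
        # Keep only features of points visible from the default viewpoint,
        # then pool them into a single descriptor and refine it.
        pooled = feats[visible_mask].max(dim=0).values
        refined = self.refine(pooled)
        view = self.head(refined)
        return view / view.norm().clamp(min=1e-8)  # unit view direction


# Toy usage: one degraded cloud and one default viewpoint's visibility mask.
pts, cols = torch.rand(1024, 3), torch.rand(1024, 3)
mask = torch.rand(1024) > 0.5
print(CAVGNSketch()(pts, cols, mask))
```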
Related papers
- No-Reference Point Cloud Quality Assessment via Graph Convolutional Network [89.12589881881082]
Three-dimensional (3D) point clouds, as an emerging visual media format, are increasingly favored by consumers.
Point clouds inevitably suffer from quality degradation and information loss through multimedia communication systems.
We propose a novel no-reference PCQA method by using a graph convolutional network (GCN) to characterize the mutual dependencies of multi-view 2D projected image contents.
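As a rough illustration of the idea above, the sketch below applies one graph convolution over a fully connected graph whose nodes are per-view feature vectors, so each projected view's representation is updated from all others; the feature dimension, number of views, and layer form are assumptions for illustration, not the paper's network.

```python
# Hedged sketch: modeling dependencies among multi-view projection features
# with one graph convolution over a fully connected view graph. The view
# features (e.g., from a 2D CNN backbone) and their dimensions are assumed.
import torch
import torch.nn as nn


class ViewGCNLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, view_feats):
        # view_feats: (V, D), one feature vector per projected view.
        v = view_feats.shape[0]
        adj = torch.ones(v, v)                        # fully connected view graph
        deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
        norm_adj = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]
        return torch.relu(self.linear(norm_adj @ view_feats))


views = torch.rand(6, 256)               # e.g., 6 projections of one point cloud
print(ViewGCNLayer(256)(views).shape)    # (6, 256)
```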
arXiv Detail & Related papers (2024-11-12T11:39:05Z)
- Self-supervised 3D Point Cloud Completion via Multi-view Adversarial Learning [61.14132533712537]
We propose MAL-SPC, a framework that effectively leverages both object-level and category-specific geometric similarities to complete missing structures.
Our MAL-SPC does not require any complete 3D supervision and needs only a single partial point cloud for each object.
arXiv Detail & Related papers (2024-07-13T06:53:39Z)
- PointOBB: Learning Oriented Object Detection via Single Point Supervision [55.88982271340328]
This paper proposes PointOBB, the first single point-based oriented bounding box (OBB) generation method for oriented object detection.
PointOBB operates through the collaborative utilization of three distinctive views: an original view, a resized view, and a rotated/flipped (rot/flp) view.
Experimental results on the DIOR-R and DOTA-v1.0 datasets demonstrate that PointOBB achieves promising performance.
arXiv Detail & Related papers (2023-11-23T15:51:50Z)
- Reduced-Reference Quality Assessment of Point Clouds via Content-Oriented Saliency Projection [17.983188216548005]
Many dense 3D point clouds have been exploited to represent visual objects instead of traditional images or videos.
We propose a novel and efficient Reduced-Reference quality metric for point clouds.
arXiv Detail & Related papers (2023-01-18T18:00:29Z)
- TCDM: Transformational Complexity Based Distortion Metric for Perceptual Point Cloud Quality Assessment [24.936061591860838]
The goal of objective point cloud quality assessment (PCQA) research is to develop metrics that measure point cloud quality in a consistent manner.
We evaluate the point cloud quality by measuring the complexity of transforming the distorted point cloud back to its reference.
The effectiveness of the proposed transformational complexity based distortion metric (TCDM) is evaluated through extensive experiments conducted on five public point cloud quality assessment databases.
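As a loose, purely illustrative stand-in for the transformation-complexity idea (not the TCDM metric itself), the toy function below scores a distorted cloud by the nearest-neighbor displacements needed to move its points back onto the reference; the point counts and the mean-plus-spread aggregation are arbitrary assumptions.

```python
# Toy proxy (not TCDM): approximate the "cost" of moving each distorted point
# back to the reference by its nearest-neighbor displacement, then report the
# mean magnitude plus a simple spread term. Purely illustrative.
import numpy as np


def transformation_cost_proxy(distorted: np.ndarray, reference: np.ndarray) -> float:
    # distorted: (N, 3), reference: (M, 3)
    # Brute-force nearest neighbor in the reference for every distorted point.
    d2 = ((distorted[:, None, :] - reference[None, :, :]) ** 2).sum(-1)  # (N, M)
    nn_dist = np.sqrt(d2.min(axis=1))             # displacement per point
    return float(nn_dist.mean() + nn_dist.std())  # larger = harder to "undo"


ref = np.random.rand(500, 3)
dist = ref + 0.01 * np.random.randn(500, 3)       # mildly distorted copy
print(transformation_cost_proxy(dist, ref))
```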
arXiv Detail & Related papers (2022-10-10T13:20:51Z)
- MM-PCQA: Multi-Modal Learning for No-reference Point Cloud Quality Assessment [32.495387943305204]
We propose a novel no-reference point cloud quality assessment (NR-PCQA) metric in a multi-modal fashion.
Specifically, we split the point cloud into sub-models to represent local geometry distortions such as point shift and down-sampling.
The sub-models and projected images are then encoded with point-based and image-based neural networks, respectively.
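A bare-bones sketch of such a two-branch design is given below: a point-based encoder for a local sub-model, an image-based encoder for a projected image, and a regressor on the fused features; all module choices, dimensions, and the pooling-based fusion are assumptions, not the MM-PCQA architecture.

```python
# Bare-bones sketch of a two-branch (point + image) quality regressor in the
# spirit described above. Encoders, dimensions, and fusion are assumptions.
import torch
import torch.nn as nn


class TwoBranchPCQASketch(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.point_branch = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, dim))
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, dim),
        )
        self.regressor = nn.Linear(2 * dim, 1)   # fused features -> quality score

    def forward(self, sub_model_points, projected_image):
        # sub_model_points: (N, 3) local patch; projected_image: (1, 3, H, W)
        p = self.point_branch(sub_model_points).max(dim=0).values   # (dim,)
        i = self.image_branch(projected_image).squeeze(0)           # (dim,)
        return self.regressor(torch.cat([p, i]))                    # predicted score


net = TwoBranchPCQASketch()
print(net(torch.rand(2048, 3), torch.rand(1, 3, 128, 128)))
```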
arXiv Detail & Related papers (2022-09-01T06:11:12Z)
- GFNet: Geometric Flow Network for 3D Point Cloud Semantic Segmentation [91.15865862160088]
We introduce a geometric flow network (GFNet) to explore the geometric correspondence between different views in an align-before-fuse manner.
Specifically, we devise a novel geometric flow module (GFM) to bidirectionally align and propagate the complementary information across different views.
arXiv Detail & Related papers (2022-07-06T11:48:08Z)
- Self-Supervised Visibility Learning for Novel View Synthesis [79.53158728483375]
Conventional rendering methods estimate scene geometry and synthesize novel views in two separate steps.
We propose an end-to-end novel view synthesis (NVS) framework to eliminate the error propagation issue.
Our network is trained in an end-to-end self-supervised fashion, thus significantly alleviating error accumulation in view synthesis.
arXiv Detail & Related papers (2021-03-29T08:11:25Z)
- PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches suffer from sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, PC-RGNN, that addresses these challenges with two dedicated solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z)
- Enabling Viewpoint Learning through Dynamic Label Generation [10.228754362756153]
We show how our proposed approach allows for learning viewpoint predictions for models from different object categories.
We show that prediction times are reduced from several minutes to a fraction of a second, as compared to state-of-the-art (SOTA) viewpoint quality evaluation.
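The speed-up claimed above comes from replacing per-view quality evaluation with a single forward pass of a learned predictor; the sketch below contrasts the two patterns with a placeholder scoring function and a tiny regressor, both of which are illustrative assumptions rather than the paper's method.

```python
# Sketch of why direct viewpoint prediction is fast: exhaustive evaluation
# scores every candidate view, while a learned predictor needs one forward
# pass. The scoring function and tiny regressor here are placeholders.
import torch
import torch.nn as nn


def exhaustive_best_view(score_fn, candidates):
    # Classic approach: run a (possibly expensive) quality measure per view.
    scores = torch.stack([score_fn(v) for v in candidates])
    return candidates[scores.argmax()]


class ViewpointRegressor(nn.Module):
    # Learned approach: map an object descriptor straight to a viewpoint.
    def __init__(self, in_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 3))

    def forward(self, descriptor):
        v = self.net(descriptor)
        return v / v.norm().clamp(min=1e-8)


cands = torch.randn(100, 3)
print(exhaustive_best_view(lambda v: -v[2].abs(), cands))   # placeholder score
print(ViewpointRegressor()(torch.rand(256)))                # single forward pass
```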
arXiv Detail & Related papers (2020-03-10T11:49:27Z)