Quality evaluation of point clouds: a novel no-reference approach using transformer-based architecture
- URL: http://arxiv.org/abs/2303.08634v1
- Date: Wed, 15 Mar 2023 14:01:12 GMT
- Title: Quality evaluation of point clouds: a novel no-reference approach using transformer-based architecture
- Authors: Marouane Tliba, Aladine Chetouani, Giuseppe Valenzise and Frederic Dufaux
- Abstract summary: We propose a novel no-reference quality metric that operates directly on the whole point cloud without requiring extensive pre-processing.
We use a novel model design consisting primarily of cross and self-attention layers, in order to learn the best set of local semantic affinities.
- Score: 11.515951211296361
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the increased interest in immersive experiences, point clouds
emerged and were widely adopted as the first choice for representing 3D media.
Beyond the several distortions that can affect 3D content anywhere from
acquisition to rendering, efficient transmission of such volumetric content over
traditional communication systems comes at the expense of the delivered
perceptual quality. To estimate the magnitude of such degradation, quality
metrics have become an indispensable tool. In this work, we propose a novel
deep-learning-based no-reference quality metric that operates directly on the
whole point cloud without requiring extensive pre-processing, enabling real-time
evaluation at both the transmission and rendering levels. To do so, we use a
novel model design consisting primarily of cross- and self-attention layers, in
order to learn the best set of local semantic affinities while keeping the best
combination of geometry and color information at multiple levels, from basic
feature extraction to deep representation modeling.
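To make the described design concrete, below is a minimal, illustrative PyTorch sketch of one way geometry and color features of a point cloud could be fused with cross- and self-attention layers and pooled into a single no-reference quality score. All module choices, dimensions, and the pooling strategy are assumptions made for illustration; this is not the authors' implementation.

```python
# Illustrative sketch only -- NOT the authors' implementation.
# Assumed inputs: xyz (B, N, 3) point coordinates and rgb (B, N, 3) colors.
import torch
import torch.nn as nn


class NoRefPCQASketch(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        # Separate shallow encoders for geometry and color (assumed design).
        self.geo_embed = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.col_embed = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Cross-attention: geometry tokens attend to color tokens and vice versa.
        self.cross_g2c = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_c2g = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Self-attention over the fused tokens to model local semantic affinities.
        self.self_attn = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        # Regression head mapping a pooled representation to a quality score.
        self.head = nn.Sequential(nn.Linear(dim, dim // 2), nn.ReLU(), nn.Linear(dim // 2, 1))

    def forward(self, xyz, rgb):
        g = self.geo_embed(xyz)            # (B, N, dim)
        c = self.col_embed(rgb)            # (B, N, dim)
        g2c, _ = self.cross_g2c(g, c, c)   # geometry queries, color keys/values
        c2g, _ = self.cross_c2g(c, g, g)   # color queries, geometry keys/values
        fused = self.self_attn(g2c + c2g)  # (B, N, dim)
        pooled = fused.mean(dim=1)         # global average pooling over points
        return self.head(pooled).squeeze(-1)  # (B,) predicted quality scores


if __name__ == "__main__":
    model = NoRefPCQASketch()
    xyz, rgb = torch.rand(2, 1024, 3), torch.rand(2, 1024, 3)
    print(model(xyz, rgb).shape)  # torch.Size([2])
```

The paper learns affinities at multiple levels, from basic feature extraction to deep representation modeling; the single encoder layer above only indicates where such attention blocks would sit.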
Related papers
- Rendering-Oriented 3D Point Cloud Attribute Compression using Sparse Tensor-based Transformer [52.40992954884257]
3D visualization techniques have fundamentally transformed how we interact with digital content.
Massive data size of point clouds presents significant challenges in data compression.
We propose an end-to-end deep learning framework that seamlessly integrates PCAC with differentiable rendering.
arXiv Detail & Related papers (2024-11-12T16:12:51Z)
- Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers [59.0181939916084]
Traditional 3D networks mainly focus on local geometric details and ignore the topological structure between local geometries.
We propose a novel Relational Priors Distillation (RPD) method to extract priors from well-trained transformers on massive images.
Experiments on the PointDA-10 and the Sim-to-Real datasets verify that the proposed method consistently achieves the state-of-the-art performance of UDA for point cloud classification.
arXiv Detail & Related papers (2024-07-26T06:29:09Z)
- Simple Baselines for Projection-based Full-reference and No-reference Point Cloud Quality Assessment [60.2709006613171]
We propose simple baselines for projection-based point cloud quality assessment (PCQA).
We use multi-projections obtained via a common cube-like projection process from the point clouds for both full-reference (FR) and no-reference (NR) PCQA tasks (an illustrative sketch of such a cube-like projection is given after this list).
Taking part in the ICIP 2023 PCVQA Challenge, we took the top spot in four out of the five competition tracks.
arXiv Detail & Related papers (2023-10-26T04:42:57Z)
- Neural Progressive Meshes [54.52990060976026]
We propose a method to transmit 3D meshes with a shared learned generative space.
We learn this space using a subdivision-based encoder-decoder architecture trained in advance on a large collection of surfaces.
We evaluate our method on a diverse set of complex 3D shapes and demonstrate that it outperforms baselines in terms of compression ratio and reconstruction quality.
arXiv Detail & Related papers (2023-08-10T17:58:02Z)
- PatchMixer: Rethinking network design to boost generalization for 3D point cloud understanding [2.512827436728378]
We argue that the ability of a model to transfer the learnt knowledge to different domains is an important feature that should be evaluated to exhaustively assess the quality of a deep network architecture.
In this work we propose PatchMixer, a simple yet effective architecture that extends the ideas behind a recent image-domain architecture to 3D point clouds.
arXiv Detail & Related papers (2023-07-28T17:37:53Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- Reduced-Reference Quality Assessment of Point Clouds via Content-Oriented Saliency Projection [17.983188216548005]
Dense 3D point clouds are increasingly used to represent visual objects in place of traditional images or videos.
We propose a novel and efficient Reduced-Reference quality metric for point clouds.
arXiv Detail & Related papers (2023-01-18T18:00:29Z)
- MM-PCQA: Multi-Modal Learning for No-reference Point Cloud Quality Assessment [32.495387943305204]
We propose a novel no-reference point cloud quality assessment (NR-PCQA) metric in a multi-modal fashion.
Specifically, we split the point clouds into sub-models to represent local geometry distortions such as point shift and down-sampling.
To this end, the sub-models and projected images are encoded with point-based and image-based neural networks, respectively (an illustrative sketch of this two-branch encoding appears after this list).
arXiv Detail & Related papers (2022-09-01T06:11:12Z)
- Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately addressed densifying, denoising, and completing inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) a transformer-based refinement that converts the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z)
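For the "Simple Baselines for Projection-based Full-reference and No-reference Point Cloud Quality Assessment" entry above, the cube-like projection it mentions can be pictured with the following NumPy sketch, which orthographically projects a colored point cloud onto the six faces of its axis-aligned bounding cube with a nearest-point-wins rule. The resolution, face ordering, and depth convention are assumptions for illustration, not the exact process used in that paper.

```python
# Illustrative sketch of a cube-like projection for projection-based PCQA.
# Assumptions (not from the paper): orthographic projection onto the six faces
# of the axis-aligned bounding cube, nearest-point-wins occlusion handling,
# and a fixed square resolution per face.
import numpy as np


def cube_projection(xyz, rgb, res=256):
    """Project a colored point cloud (xyz: (N,3), rgb: (N,3) in [0,1]) onto 6 faces."""
    # Normalize coordinates into the unit cube [0, 1]^3.
    mins, maxs = xyz.min(axis=0), xyz.max(axis=0)
    p = (xyz - mins) / np.maximum(maxs - mins, 1e-8)

    # Each face is identified by the axis projected away and a viewing side.
    faces = [(0, +1), (0, -1), (1, +1), (1, -1), (2, +1), (2, -1)]
    images = []
    for axis, side in faces:
        u_axis, v_axis = [a for a in range(3) if a != axis]
        # Depth measured from the face we are looking through.
        depth = 1.0 - p[:, axis] if side > 0 else p[:, axis]
        u = np.clip((p[:, u_axis] * (res - 1)).astype(int), 0, res - 1)
        v = np.clip((p[:, v_axis] * (res - 1)).astype(int), 0, res - 1)

        # Draw far points first so nearer points overwrite them (nearest wins).
        order = np.argsort(depth)[::-1]
        img = np.zeros((res, res, 3), dtype=np.float32)
        img[v[order], u[order]] = rgb[order]
        images.append(img)
    return np.stack(images)  # (6, res, res, 3) face images


if __name__ == "__main__":
    pts, cols = np.random.rand(5000, 3), np.random.rand(5000, 3)
    print(cube_projection(pts, cols).shape)  # (6, 256, 256, 3)
```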
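Similarly, for the MM-PCQA entry, the following sketch shows one plausible two-branch design: a PointNet-style encoder for local sub-models and a small CNN for projected images, with their features concatenated and regressed to a quality score. All network sizes and the fusion-by-concatenation choice are assumptions; the actual MM-PCQA architecture may differ.

```python
# Illustrative two-branch sketch in the spirit of multi-modal NR-PCQA:
# a point-based encoder for local sub-clouds and an image-based encoder for
# projections, fused into one quality score. All sizes are assumptions.
import torch
import torch.nn as nn


class PointBranch(nn.Module):
    """PointNet-style encoder: per-point MLP followed by max pooling."""
    def __init__(self, dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(6, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, sub_clouds):                 # (B, M, N, 6): M sub-models, xyz+rgb
        feats = self.mlp(sub_clouds).amax(dim=2)   # (B, M, dim), max-pool over points
        return feats.mean(dim=1)                   # (B, dim), average over sub-models


class ImageBranch(nn.Module):
    """Small CNN encoder for the projected images."""
    def __init__(self, dim=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, imgs):                  # (B, V, 3, H, W): V projections
        b, v = imgs.shape[:2]
        f = self.cnn(imgs.flatten(0, 1))      # (B*V, dim)
        return f.view(b, v, -1).mean(dim=1)   # (B, dim), average over views


class MultiModalPCQASketch(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.points, self.images = PointBranch(dim), ImageBranch(dim)
        self.head = nn.Linear(2 * dim, 1)

    def forward(self, sub_clouds, imgs):
        fused = torch.cat([self.points(sub_clouds), self.images(imgs)], dim=-1)
        return self.head(fused).squeeze(-1)   # (B,) predicted quality


if __name__ == "__main__":
    model = MultiModalPCQASketch()
    print(model(torch.rand(2, 4, 512, 6), torch.rand(2, 6, 3, 64, 64)).shape)
```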