Deep probabilistic model for lossless scalable point cloud attribute compression
- URL: http://arxiv.org/abs/2303.06517v1
- Date: Sat, 11 Mar 2023 23:39:30 GMT
- Title: Deep probabilistic model for lossless scalable point cloud attribute compression
- Authors: Dat Thanh Nguyen, Kamal Gopikrishnan Nambiar and Andre Kaup
- Abstract summary: We build an end-to-end point cloud attribute coding method (MNeT) that progressively projects the attributes onto multiscale latent spaces.
We validate our method on a set of point clouds from MVUB and MPEG and show that our method outperforms recently proposed methods and is on par with the latest G-PCC version 14.
- Score: 2.2559617939136505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, several point cloud geometry compression methods that
utilize advanced deep learning techniques have been proposed, but there are
limited works on attribute compression, especially lossless compression. In
this work, we build an end-to-end multiscale point cloud attribute coding
method (MNeT) that progressively projects the attributes onto multiscale latent
spaces. The multiscale architecture provides an accurate context for the
attribute probability modeling and thus minimizes the coding bitrate with a
single network prediction. Moreover, our method enables scalable coding:
lower-quality versions can be easily extracted from the losslessly
compressed bitstream. We validate our method on a set of point clouds from
MVUB and MPEG and show that it outperforms recently proposed methods and is
on par with the latest G-PCC version 14. In addition, our coding time is
substantially faster than G-PCC's.
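The central idea — a learned probability model supplying context for entropy coding — can be illustrated with a toy calculation (hypothetical symbols and probabilities, not MNeT's actual network or data):

```python
import math

# Toy illustration: an ideal entropy coder spends about -log2(p) bits per
# symbol, where p is the probability the context model assigns to the
# symbol that actually occurs. A sharper context model (e.g. one informed
# by multiscale latents) therefore yields a smaller bitstream.

def ideal_bits(symbols, probs):
    """Ideal arithmetic-coding cost of `symbols` under per-symbol
    predicted distributions `probs` (one dict symbol -> prob each)."""
    return sum(-math.log2(p[s]) for s, p in zip(symbols, probs))

attributes = ["r", "g", "g", "r"]  # hypothetical quantized attribute symbols

# A weak (uniform) context versus a sharper, better-informed context:
uniform = [{"r": 0.5, "g": 0.5}] * 4
sharp = [{"r": 0.9, "g": 0.1}, {"r": 0.2, "g": 0.8},
         {"r": 0.2, "g": 0.8}, {"r": 0.9, "g": 0.1}]

print(ideal_bits(attributes, uniform))  # 4.0 bits
print(ideal_bits(attributes, sharp))    # under 1 bit: accurate context pays off
```

The same accounting explains why a single accurate network prediction can minimize the coding bitrate: the bitstream length approaches the cross-entropy between the model and the true attribute distribution.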
Related papers
- Point Cloud Compression with Bits-back Coding [32.9521748764196]
This paper uses a deep learning-based probabilistic model to estimate the Shannon entropy of the point cloud information.
Once the entropy of the point cloud dataset is estimated, we use the learned CVAE model to compress the geometric attributes of the point clouds.
The novelty of our method lies in using bits-back coding with the learned latent-variable model of the CVAE to compress the point cloud data.
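In idealized bits-back coding, the net codelength is -log2 p(x|z) - log2 p(z) + log2 q(z|x): the encoder spends bits on the data and the latent, but recovers the bits used to sample the latent from the approximate posterior. A toy computation with hypothetical values (not from the cited paper):

```python
def bitsback_net_bits(log2_px_given_z, log2_pz, log2_qz_given_x):
    # The encoder spends -log2 p(x|z) - log2 p(z) bits, but recovers
    # -log2 q(z|x) bits via the decoder's auxiliary stream, so the net
    # cost is their difference (the negative ELBO, measured in bits).
    return -log2_px_given_z - log2_pz + log2_qz_given_x

# Hypothetical log-probabilities for one point-cloud block:
net = bitsback_net_bits(log2_px_given_z=-10.0,  # -log2 p(x|z) = 10 bits
                        log2_pz=-6.0,           # -log2 p(z)   = 6 bits
                        log2_qz_given_x=-4.5)   # recovered: 4.5 bits
print(net)  # 11.5 bits net, versus 16 bits without bits-back
```

This is why a well-trained CVAE posterior q(z|x) directly translates into compression savings: the tighter the ELBO, the shorter the net bitstream.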
arXiv Detail & Related papers (2024-10-09T06:34:48Z) - SPAC: Sampling-based Progressive Attribute Compression for Dense Point Clouds [51.313922535437726]
We propose an end-to-end compression method for dense point clouds.
The proposed method combines a frequency sampling module, an adaptive scale feature extraction module with geometry assistance, and a global hyperprior entropy model.
arXiv Detail & Related papers (2024-09-16T13:59:43Z) - End-to-end learned Lossy Dynamic Point Cloud Attribute Compression [5.717288278431968]
This study introduces an end-to-end learned dynamic lossy attribute coding approach.
We employ a context model that leverages the previous latent space, in conjunction with an auto-regressive context model, to encode the latent tensor into a bitstream.
arXiv Detail & Related papers (2024-08-20T09:06:59Z) - Efficient and Generic Point Model for Lossless Point Cloud Attribute Compression [28.316347464011056]
PoLoPCAC is an efficient and generic PCAC method that achieves high compression efficiency and strong generalizability simultaneously.
Our method can be instantly deployed once trained on a Synthetic 2k-ShapeNet dataset.
Experiments show that our method can enjoy continuous bit-rate reduction over the latest G-PCCv23 on various datasets.
arXiv Detail & Related papers (2024-04-10T11:40:02Z) - You Can Mask More For Extremely Low-Bitrate Image Compression [80.7692466922499]
Learned image compression (LIC) methods have experienced significant progress during recent years.
LIC methods fail to explicitly explore the image structure and texture components crucial for image compression.
We present DA-Mask that samples visible patches based on the structure and texture of original images.
We propose a simple yet effective masked compression model (MCM), the first framework that unifies masked image modeling and LIC end-to-end for extremely low-bitrate compression.
arXiv Detail & Related papers (2023-06-27T15:36:22Z) - Geometric Prior Based Deep Human Point Cloud Geometry Compression [67.49785946369055]
We leverage the human geometric prior to remove geometric redundancy in point clouds.
We can envisage high-resolution human point clouds as a combination of geometric priors and structural deviations.
The proposed framework can operate in a plug-and-play fashion with existing learning-based point cloud compression methods.
arXiv Detail & Related papers (2023-05-02T10:35:20Z) - ECM-OPCC: Efficient Context Model for Octree-based Point Cloud Compression [6.509720419113212]
We propose a sufficient yet efficient context model and design an efficient deep learning model for point clouds.
Specifically, we first propose a window-constrained multi-group coding strategy to exploit the autoregressive context.
We also propose a dual transformer architecture to exploit the dependency of the current node on its ancestors and siblings.
arXiv Detail & Related papers (2022-11-20T09:20:32Z) - Unrolled Compressed Blind-Deconvolution [77.88847247301682]
Sparse multichannel blind deconvolution (S-MBD) arises frequently in many engineering applications such as radar/sonar/ultrasound imaging.
We propose a compression method that enables blind recovery from far fewer measurements than the full received signal in time.
arXiv Detail & Related papers (2022-09-28T15:16:58Z) - Neural JPEG: End-to-End Image Compression Leveraging a Standard JPEG Encoder-Decoder [73.48927855855219]
We propose a system that learns to improve the encoding performance by enhancing its internal neural representations on both the encoder and decoder ends.
Experiments demonstrate that our approach successfully improves the rate-distortion performance over JPEG across various quality metrics.
arXiv Detail & Related papers (2022-01-27T20:20:03Z) - Lossless Coding of Point Cloud Geometry using a Deep Generative Model [11.69103847045569]
The method adaptively partitions a point cloud into multiple voxel block sizes.
A deep auto-regressive generative model estimates the occupancy probability of each voxel.
We use the estimated probabilities to efficiently code each block with a context-based arithmetic coder.
arXiv Detail & Related papers (2021-07-01T12:20:22Z) - MuSCLE: Multi Sweep Compression of LiDAR using Deep Entropy Models [78.93424358827528]
We present a novel compression algorithm for reducing the storage streams of LiDAR sensor data.
Our method significantly reduces the joint geometry and intensity bitrate compared with prior state-of-the-art LiDAR compression methods.
arXiv Detail & Related papers (2020-11-15T17:41:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.