CARNet: Compression Artifact Reduction for Point Cloud Attribute
- URL: http://arxiv.org/abs/2209.08276v1
- Date: Sat, 17 Sep 2022 08:05:35 GMT
- Title: CARNet: Compression Artifact Reduction for Point Cloud Attribute
- Authors: Dandan Ding, Junzhe Zhang, Jianqiang Wang, Zhan Ma
- Abstract summary: A learning-based adaptive loop filter is developed for the Geometry-based Point Cloud Compression (G-PCC) standard to reduce compression artifacts.
The proposed method first generates multiple Most-Probable Sample Offsets (MPSOs) as potential compression distortion approximations, and then linearly weights them for artifact mitigation.
- Score: 37.78660069355263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A learning-based adaptive loop filter is developed for the Geometry-based
Point Cloud Compression (G-PCC) standard to reduce attribute compression
artifacts. The proposed method first generates multiple Most-Probable Sample
Offsets (MPSOs) as potential compression distortion approximations, and then
linearly weights them for artifact mitigation. As such, we drive the filtered
reconstruction as close to the uncompressed point cloud attribute (PCA) as possible. To this end, we
devise a Compression Artifact Reduction Network (CARNet) which consists of two
consecutive processing phases: MPSOs derivation and MPSOs combination. The
MPSOs derivation uses a two-stream network to model local neighborhood
variations from direct spatial embedding and frequency-dependent embedding,
where sparse convolutions are utilized to best aggregate information from
sparsely and irregularly distributed points. The MPSOs combination is guided by
the least square error metric to derive weighting coefficients on the fly to
further capture content dynamics of input PCAs. The CARNet is implemented as an
in-loop filtering tool of the G-PCC, where those linear weighting coefficients
are encapsulated into the bitstream with negligible bit rate overhead.
Experimental results demonstrate significant improvement over the latest G-PCC
both subjectively and objectively.
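The least-squares weighting described above has a simple closed form. Below is a minimal NumPy sketch of that combination step; the array shapes, function names, and the number of candidates K are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# recon : (N, C) decoded point cloud attributes (e.g. per-point YUV)
# mpsos : (K, N, C) candidate Most-Probable Sample Offsets from the network
# orig  : (N, C) uncompressed attributes, available only at the encoder
def solve_mpso_weights(recon, mpsos, orig):
    """Least-squares weights w minimizing ||orig - (recon + sum_k w_k * mpso_k)||^2."""
    K = mpsos.shape[0]
    A = mpsos.reshape(K, -1).T          # (N*C, K) stacked offset candidates
    b = (orig - recon).reshape(-1)      # (N*C,) residual to approximate
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w                            # a handful of coefficients for the bitstream

def apply_mpso_filter(recon, mpsos, w):
    """Decoder side: add the weighted sum of offsets to the reconstruction."""
    return recon + np.tensordot(w, mpsos, axes=1)
```

Because only the few coefficients in w need to be transmitted, this is consistent with the negligible bit rate overhead mentioned in the abstract.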
Related papers
- SPAC: Sampling-based Progressive Attribute Compression for Dense Point Clouds [51.313922535437726]
We propose an end-to-end compression method for dense point clouds.
The proposed method combines a frequency sampling module, an adaptive scale feature extraction module with geometry assistance, and a global hyperprior entropy model.
arXiv Detail & Related papers (2024-09-16T13:59:43Z)
- Efficient and Generic Point Model for Lossless Point Cloud Attribute Compression [28.316347464011056]
PoLoPCAC is an efficient and generic PCAC method that achieves high compression efficiency and strong generalizability simultaneously.
Our method can be instantly deployed once trained on a Synthetic 2k-ShapeNet dataset.
Experiments show that our method can enjoy continuous bit-rate reduction over the latest G-PCCv23 on various datasets.
arXiv Detail & Related papers (2024-04-10T11:40:02Z)
- Point Cloud Compression via Constrained Optimal Transport [10.795619052889952]
COT-PCC takes compressed features as an extra constraint of optimal transport.
It learns the distribution transformation between original and reconstructed points.
COT-PCC outperforms state-of-the-art methods in terms of both CD and PSNR metrics; a sketch of these metrics follows this entry.
arXiv Detail & Related papers (2024-03-13T04:36:24Z)
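Since CD and PSNR are the metrics cited for COT-PCC, here is a minimal NumPy sketch of both for point clouds; the symmetric Chamfer formulation and the peak value used for PSNR are common conventions assumed here, not details taken from the paper.

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer Distance between point sets P (N, 3) and Q (M, 3)."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # (N, M) squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def point_to_point_psnr(P, Q, peak):
    """D1-style PSNR: symmetric nearest-neighbor MSE measured against a peak value
    (e.g. the geometry resolution or bounding-box diagonal, chosen by convention)."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    mse = 0.5 * (d2.min(axis=1).mean() + d2.min(axis=0).mean())
    return 10.0 * np.log10(peak ** 2 / mse)
```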
- Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z)
- Asymptotic Soft Cluster Pruning for Deep Neural Networks [5.311178623385279]
Filter pruning introduces structural sparsity by removing selected filters.
We propose a novel filter pruning method called Asymptotic Soft Cluster Pruning.
Our method achieves results competitive with many state-of-the-art algorithms; a generic structured-pruning sketch follows this entry.
arXiv Detail & Related papers (2022-06-16T13:58:58Z)
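The two pruning entries above share the structural idea of dropping whole convolution filters together with the matching input channels of the following layer. The PyTorch snippet below is a generic magnitude-based illustration of that idea, not KDFS or Asymptotic Soft Cluster Pruning themselves; the L1 criterion and keep ratio are assumptions for the sketch.

```python
import torch
import torch.nn as nn

def prune_conv_pair(conv1, conv2, keep_ratio=0.5):
    """Keep the conv1 filters with the largest L1 norm and drop the corresponding
    input channels of conv2 (assumes conv1 feeds conv2 directly, no BatchNorm)."""
    scores = conv1.weight.detach().abs().sum(dim=(1, 2, 3))   # one score per filter
    n_keep = max(1, int(keep_ratio * conv1.out_channels))
    keep = torch.topk(scores, n_keep).indices.sort().values

    new1 = nn.Conv2d(conv1.in_channels, n_keep, conv1.kernel_size,
                     conv1.stride, conv1.padding, bias=conv1.bias is not None)
    new1.weight.data = conv1.weight.data[keep].clone()
    if conv1.bias is not None:
        new1.bias.data = conv1.bias.data[keep].clone()

    new2 = nn.Conv2d(n_keep, conv2.out_channels, conv2.kernel_size,
                     conv2.stride, conv2.padding, bias=conv2.bias is not None)
    new2.weight.data = conv2.weight.data[:, keep].clone()
    if conv2.bias is not None:
        new2.bias.data = conv2.bias.data.clone()
    return new1, new2
```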
- Sparse Tensor-based Multiscale Representation for Point Cloud Geometry Compression [18.24902526033056]
We develop a unified Point Cloud Geometry (PCG) compression method through a Sparse Tensor Processing (STP) based multiscale representation of voxelized PCG.
Applying sparse convolutions reduces the complexity significantly because the convolutions are performed only at Most-Probable Positively-Occupied Voxels (MP-POVs).
The proposed method has lightweight complexity owing to point-wise computation and a small storage footprint thanks to model sharing across all scales; a toy sparse-convolution sketch follows this entry.
arXiv Detail & Related papers (2021-11-20T17:02:45Z)
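The complexity argument above (and the sparse convolutions used by CARNet itself) boils down to evaluating the kernel only at occupied voxels. The dictionary-based toy version below illustrates this in plain NumPy; real systems rely on optimized sparse-tensor libraries, and the 3x3x3 kernel layout here is an assumption for illustration.

```python
import numpy as np
from itertools import product

def sparse_conv3d(occupied, kernel):
    """Toy sparse 3D convolution. `occupied` maps voxel coords (x, y, z) to feature
    vectors; `kernel` maps each 3x3x3 offset to a (C_in, C_out) matrix. Work is done
    only at occupied voxels, which is where the complexity savings come from."""
    out = {}
    offsets = list(product((-1, 0, 1), repeat=3))
    for coord in occupied:
        acc = None
        for off in offsets:
            nb = (coord[0] + off[0], coord[1] + off[1], coord[2] + off[2])
            if nb in occupied:                       # skip empty space entirely
                contrib = occupied[nb] @ kernel[off]
                acc = contrib if acc is None else acc + contrib
        out[coord] = acc
    return out

# Example: two occupied voxels with 4-dim features and a random 4 -> 8 channel kernel.
occupied = {(0, 0, 0): np.ones(4), (0, 0, 1): np.ones(4)}
kernel = {off: np.random.randn(4, 8) for off in product((-1, 0, 1), repeat=3)}
features_out = sparse_conv3d(occupied, kernel)
```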
- Dynamic Probabilistic Pruning: A general framework for hardware-constrained pruning at different granularities [80.06422693778141]
We propose a flexible new pruning mechanism that facilitates pruning at different granularities (weights, kernels, filters/feature maps).
We refer to this algorithm as Dynamic Probabilistic Pruning (DPP).
We show that DPP achieves competitive compression rates and classification accuracy when pruning common deep learning models trained on different benchmark datasets for image classification.
arXiv Detail & Related papers (2021-05-26T17:01:52Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- Unfolding Neural Networks for Compressive Multichannel Blind Deconvolution [71.29848468762789]
We propose a learned-structured unfolding neural network for the problem of compressive sparse multichannel blind-deconvolution.
In this problem, each channel's measurements are given as the convolution of a common source signal with a channel-specific sparse filter (see the sketch after this entry).
We demonstrate that our method is superior to classical structured compressive sparse multichannel blind-deconvolution methods in terms of accuracy and speed of sparse filter recovery.
arXiv Detail & Related papers (2020-10-22T02:34:33Z)
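The measurement model in that last entry (each channel observes the shared source convolved with its own sparse filter) is easy to simulate. The NumPy sketch below generates such data under assumed signal lengths and sparsity levels; the compressive measurement step of the paper is omitted.

```python
import numpy as np

def simulate_channels(source_len=256, filter_len=32, n_channels=8, sparsity=3, seed=0):
    """Generate y_i = s * h_i for a shared source s and per-channel sparse filters h_i."""
    rng = np.random.default_rng(seed)
    s = rng.standard_normal(source_len)                # common source signal
    filters, measurements = [], []
    for _ in range(n_channels):
        h = np.zeros(filter_len)
        support = rng.choice(filter_len, size=sparsity, replace=False)
        h[support] = rng.standard_normal(sparsity)     # sparse filter for this channel
        filters.append(h)
        measurements.append(np.convolve(s, h))         # what this channel observes
    return s, filters, measurements
```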
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.