MoEC: Mixture of Experts Implicit Neural Compression
- URL: http://arxiv.org/abs/2312.01361v1
- Date: Sun, 3 Dec 2023 12:02:23 GMT
- Title: MoEC: Mixture of Experts Implicit Neural Compression
- Authors: Jianchen Zhao, Cheng-Ching Tseng, Ming Lu, Ruichuan An, Xiaobao Wei,
He Sun, Shanghang Zhang
- Abstract summary: We propose MoEC, a novel implicit neural compression method based on the theory of mixture of experts.
Specifically, we use a gating network to automatically assign a specific INR to a 3D point in the scene.
Compared with block-wise and tree-structured partitions, our learnable partition can adaptively find the optimal partition in an end-to-end manner.
- Score: 25.455216041289432
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emerging Implicit Neural Representation (INR) is a promising data compression
technique that represents data using the parameters of a Deep Neural
Network (DNN). Existing methods manually partition a complex scene into local
regions and overfit an INR to each region. However, manually designing a
partition scheme for a complex scene is very challenging, and such manual
designs fail to jointly learn the partition and the INRs. To solve this problem, we propose MoEC, a
novel implicit neural compression method based on the theory of mixture of
experts. Specifically, we use a gating network to automatically assign a
specific INR to a 3D point in the scene. The gating network is trained jointly
with the INRs of different local regions. Compared with block-wise and
tree-structured partitions, our learnable partition can adaptively find the
optimal partition in an end-to-end manner. We conduct detailed experiments on
massive and diverse biomedical data to demonstrate the advantages of MoEC
against existing approaches. In most experimental settings, we achieve
state-of-the-art results. In particular, at extreme compression ratios,
such as 6000x, we maintain a PSNR of 48.16 dB.
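The abstract sketches the core mechanism: a gating network softly assigns each 3D coordinate to expert INRs, and the gate and experts are trained jointly so the spatial partition is learned end-to-end. Below is a minimal PyTorch sketch of that idea; the expert count, layer widths, and softmax gating are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEINR(nn.Module):
    """Mixture-of-experts INR: a gating network weights per-point expert outputs."""
    def __init__(self, num_experts: int = 4, hidden: int = 64):
        super().__init__()
        # Each expert is a small coordinate-based MLP: (x, y, z) -> intensity.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(num_experts)
        )
        # The gate maps a coordinate to a distribution over experts, so the
        # spatial partition is learned jointly with the experts themselves.
        self.gate = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, num_experts),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) normalized 3D points; returns (N, 1) intensities.
        weights = F.softmax(self.gate(coords), dim=-1)                     # (N, E)
        outputs = torch.stack([e(coords) for e in self.experts], dim=-1)  # (N, 1, E)
        return (outputs * weights.unsqueeze(1)).sum(dim=-1)               # (N, 1)

# "Compression" here means overfitting the network to one volume:
# the stored model weights stand in for the raw data.
model = MoEINR()
coords = torch.rand(1024, 3)   # sampled 3D coordinates
target = torch.rand(1024, 1)   # ground-truth intensities at those points
loss = F.mse_loss(model(coords), target)
loss.backward()
```

Because the softmax gate is differentiable, the partition boundaries can move during training instead of being fixed in advance by a block or tree scheme.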
Related papers
- Neural Experts: Mixture of Experts for Implicit Neural Representations [41.395193251292895]
Implicit neural representations (INRs) have proven effective in various tasks including image, shape, audio, and video reconstruction.
We propose a mixture of experts (MoE) implicit neural representation approach that enables learning local piece-wise continuous functions.
We show that incorporating a mixture of experts architecture into existing INR formulations provides a boost in speed, accuracy, and memory requirements.
arXiv Detail & Related papers (2024-10-29T01:11:25Z)
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- ECNR: Efficient Compressive Neural Representation of Time-Varying Volumetric Datasets [6.3492793442257085]
Compressive neural representation has emerged as a promising alternative to traditional compression methods for managing massive datasets.
This paper presents an Efficient Compressive Neural Representation (ECNR) solution for time-varying data compression.
We show the effectiveness of ECNR with multiple datasets and compare it with state-of-the-art compression methods.
arXiv Detail & Related papers (2023-10-02T06:06:32Z)
- Deep Graph Neural Networks via Posteriori-Sampling-based Node-Adaptive Residual Module [65.81781176362848]
Graph Neural Networks (GNNs) can learn from graph-structured data through neighborhood information aggregation.
As the number of layers increases, node representations become indistinguishable, which is known as over-smoothing.
We propose a Posterior-Sampling-based Node-distinguishing Residual module (PSNR).
arXiv Detail & Related papers (2023-05-09T12:03:42Z)
- Modality-Agnostic Variational Compression of Implicit Neural Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
arXiv Detail & Related papers (2023-01-23T15:22:42Z)
- TINC: Tree-structured Implicit Neural Compression [30.26398911800582]
Implicit neural representation (INR) can describe the target scenes with high fidelity using a small number of parameters.
Preliminary studies can only exploit either global or local correlation in the target data.
We propose Tree-structured Implicit Neural Compression (TINC) to compactly represent local regions.
arXiv Detail & Related papers (2022-11-12T15:39:07Z)
- Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based implicit networks which are tuned to span the desired function space (see the sketch after this list).
Our experiments show that NID can reconstruct 2D images or 3D scenes two orders of magnitude faster with up to 98% less input data.
arXiv Detail & Related papers (2022-07-08T05:07:19Z)
- Partition-Guided GANs [63.980473635585234]
We design a partitioner that breaks the space into smaller regions, each having a simpler distribution, and train a different generator for each partition.
This is done in an unsupervised manner without requiring any labels.
Experimental results on various standard benchmarks show that the proposed unsupervised model outperforms several recent methods.
arXiv Detail & Related papers (2021-04-02T00:06:53Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
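As referenced in the Neural Implicit Dictionary entry above, here is a rough sketch of that idea: a shared set of coordinate-based basis networks, with each individual signal encoded as a coefficient vector over the dictionary. The dictionary size, basis architecture, and 2D inputs are assumptions for illustration only, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class NeuralImplicitDictionary(nn.Module):
    """A shared dictionary of coordinate-based basis networks; an individual
    signal is represented by a coefficient vector over the dictionary."""
    def __init__(self, num_basis: int = 16, hidden: int = 32):
        super().__init__()
        self.basis = nn.ModuleList(
            nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_basis)
        )

    def forward(self, coords: torch.Tensor, coeffs: torch.Tensor) -> torch.Tensor:
        # coords: (N, 2) pixel coordinates; coeffs: (num_basis,) per-signal weights.
        outs = torch.cat([b(coords) for b in self.basis], dim=-1)  # (N, num_basis)
        return outs @ coeffs                                       # (N,) reconstructed values

# Fitting a new signal only requires optimizing its coefficient vector over
# the frozen dictionary, which is how basis reuse can cut training time and data.
nid = NeuralImplicitDictionary()
coeffs = torch.randn(16, requires_grad=True)
coords = torch.rand(256, 2)
pred = nid(coords, coeffs)      # (256,) predicted pixel values
```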
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.