Sobolev Training for Implicit Neural Representations with Approximated
Image Derivatives
- URL: http://arxiv.org/abs/2207.10395v1
- Date: Thu, 21 Jul 2022 10:12:41 GMT
- Title: Sobolev Training for Implicit Neural Representations with Approximated
Image Derivatives
- Authors: Wentao Yuan, Qingtian Zhu, Xiangyue Liu, Yikang Ding, Haotian Zhang,
Chi Zhang
- Abstract summary: Implicit Neural Representations (INRs) parameterized by neural networks have emerged as a powerful tool to represent different kinds of signals.
We propose a training paradigm for INRs whose target output is image pixels, to encode image derivatives in addition to image values in the neural network.
We show how the training paradigm can be leveraged to solve typical INR problems, i.e., image regression and inverse rendering.
- Score: 12.71676484494428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Implicit Neural Representations (INRs) parameterized by neural
networks have emerged as a powerful and promising tool to represent different
kinds of signals due to their continuous, differentiable properties, showing
superiority over classical discretized representations. However, the training
of neural networks for INRs only utilizes input-output pairs, and the
derivatives of the target output with respect to the input, which can be
accessed in some cases, are usually ignored. In this paper, we propose a
training paradigm for INRs whose target output is image pixels, to encode image
derivatives in addition to image values in the neural network. Specifically, we
use finite differences to approximate image derivatives. We show how the
training paradigm can be leveraged to solve typical INR problems, i.e., image
regression and inverse rendering, and demonstrate that this training paradigm can
improve the data efficiency and generalization capabilities of INRs. The code
of our method is available at
\url{https://github.com/megvii-research/Sobolev_INRs}.
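To make the idea concrete, below is a minimal sketch of the value-plus-derivative supervision described above for the image-regression case, written in PyTorch. The network architecture, the `lambda_grad` weight, and the use of `torch.gradient` for the central-difference targets are illustrative assumptions rather than the authors' exact implementation; the linked repository contains the official code.

```python
# Minimal sketch of Sobolev training for an image-fitting INR (PyTorch).
# Network size, lambda_grad, and the learning rate are illustrative assumptions.
import torch
import torch.nn as nn

H = W = 64                                    # toy image resolution
image = torch.rand(H, W)                      # placeholder target image (grayscale)

# Finite-difference approximation of the image derivatives (central differences),
# used as supervision targets in addition to the pixel values themselves.
dy, dx = torch.gradient(image)                # derivatives along rows and columns, 1-pixel spacing

# Coordinate grid in [-1, 1]^2; one pixel step corresponds to 2/(N-1) in coordinates,
# so the finite differences are rescaled to the normalized coordinate frame.
ys = torch.linspace(-1.0, 1.0, H)
xs = torch.linspace(-1.0, 1.0, W)
grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
coords = torch.stack([grid_x, grid_y], dim=-1).reshape(-1, 2)
values = image.reshape(-1, 1)
target_grad = torch.stack([dx * (W - 1) / 2.0, dy * (H - 1) / 2.0], dim=-1).reshape(-1, 2)

# A small coordinate MLP standing in for the INR; a smooth activation keeps the
# analytic input derivatives informative.
model = nn.Sequential(
    nn.Linear(2, 256), nn.Tanh(),
    nn.Linear(256, 256), nn.Tanh(),
    nn.Linear(256, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
lambda_grad = 0.1                             # assumed weight on the derivative term

for step in range(1000):
    optimizer.zero_grad()
    pts = coords.clone().requires_grad_(True)
    pred = model(pts)

    # Analytic derivatives of the network output w.r.t. the input coordinates
    # (via autograd), compared against the finite-difference image derivatives.
    pred_grad = torch.autograd.grad(
        pred, pts, grad_outputs=torch.ones_like(pred), create_graph=True
    )[0]

    loss_value = ((pred - values) ** 2).mean()
    loss_grad = ((pred_grad - target_grad) ** 2).mean()
    loss = loss_value + lambda_grad * loss_grad
    loss.backward()
    optimizer.step()
```

Note that the finite-difference targets are rescaled from pixel spacing to the normalized coordinate frame, and that smooth activations (e.g., the sine layers of SIREN) are commonly preferred over ReLU in this setting so that the analytic derivatives of the network carry a useful training signal.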
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Generalizable Neural Fields as Partially Observed Neural Processes [16.202109517569145]
We propose a new paradigm that views the large-scale training of neural representations as a part of a partially-observed neural process framework.
We demonstrate that this approach outperforms both state-of-the-art gradient-based meta-learning approaches and hypernetwork approaches.
arXiv Detail & Related papers (2023-09-13T01:22:16Z) - Modality-Agnostic Variational Compression of Implicit Neural
Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
arXiv Detail & Related papers (2023-01-23T15:22:42Z) - Traditional Classification Neural Networks are Good Generators: They are
Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes the gradients semantics-aware in order to synthesize plausible images.
We show that our method is also applicable to text-to-image generation by leveraging image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z) - Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations via processing on their discretized instance.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
arXiv Detail & Related papers (2022-10-17T06:29:07Z) - Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z) - Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue of representing general signals.
However, the current approach is difficult to scale to a large number of signals or a large dataset.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
arXiv Detail & Related papers (2021-10-27T18:02:53Z) - Neural Knitworks: Patched Neural Implicit Representation Networks [1.0470286407954037]
We propose Knitwork, an architecture for neural implicit representation learning of natural images that achieves image synthesis.
To the best of our knowledge, this is the first implementation of a coordinate-based patch tailored for synthesis tasks such as image inpainting, super-resolution, and denoising.
The results show that modeling natural images using patches, rather than pixels, produces results of higher fidelity.
arXiv Detail & Related papers (2021-09-29T13:10:46Z)