Polynomial Neural Fields for Subband Decomposition and Manipulation
- URL: http://arxiv.org/abs/2302.04862v1
- Date: Thu, 9 Feb 2023 18:59:04 GMT
- Title: Polynomial Neural Fields for Subband Decomposition and Manipulation
- Authors: Guandao Yang and Sagie Benaim and Varun Jampani and Kyle Genova and
Jonathan T. Barron and Thomas Funkhouser and Bharath Hariharan and Serge
Belongie
- Abstract summary: We propose a new class of neural fields called polynomial neural fields (PNFs).
The key advantage of a PNF is that it can represent a signal as a composition of manipulable and interpretable components without losing the merits of neural fields.
We empirically demonstrate that Fourier PNFs enable signal manipulation applications such as texture transfer and scale-space interpolation.
- Score: 78.2401411189246
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural fields have emerged as a new paradigm for representing signals, thanks
to their ability to do it compactly while being easy to optimize. In most
applications, however, neural fields are treated like black boxes, which
precludes many signal manipulation tasks. In this paper, we propose a new class
of neural fields called polynomial neural fields (PNFs). The key advantage of a
PNF is that it can represent a signal as a composition of a number of
manipulable and interpretable components without losing the merits of neural
fields representation. We develop a general theoretical framework to analyze
and design PNFs. We use this framework to design Fourier PNFs, which match
state-of-the-art performance in signal representation tasks that use neural
fields. In addition, we empirically demonstrate that Fourier PNFs enable signal
manipulation applications such as texture transfer and scale-space
interpolation. Code is available at https://github.com/stevenygd/PNF.
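Below is a minimal sketch of the flavor of a Fourier PNF, assuming a multiplicative-filter-style construction in which each subnetwork multiplies linear layers by sinusoidal filters, so that its output is a Fourier series with a bounded maximum frequency; summing subnetworks assigned to different bands gives a signal built from manipulable subband components. The band partitioning, initialization, and training of actual PNFs follow the paper and repository; every name and number below is illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_subband_net(n_layers, hidden, band_lo, band_hi):
    """Build one subnetwork of multiplicative sinusoidal filters.

    Each layer multiplies a linear transform of the running features by a fresh
    sinusoidal filter sin(w*x + phi). Products of sinusoids add frequencies, so
    bounding the per-layer filter frequencies bounds the subnetwork's maximum
    output frequency by roughly band_hi (the lower band edge is only loosely
    controlled in this toy version).
    """
    freqs = rng.uniform(band_lo / n_layers, band_hi / n_layers, size=(n_layers, hidden))
    phases = rng.uniform(0, 2 * np.pi, size=(n_layers, hidden))
    weights = [rng.normal(0, 1 / np.sqrt(hidden), size=(hidden, hidden))
               for _ in range(n_layers - 1)]
    w_out = rng.normal(0, 1 / np.sqrt(hidden), size=(hidden, 1))

    def forward(x):                                  # x: (N, 1) coordinates
        z = np.sin(x @ freqs[0:1] + phases[0])       # first Fourier filter
        for i in range(1, n_layers):
            z = (z @ weights[i - 1]) * np.sin(x @ freqs[i:i + 1] + phases[i])
        return z @ w_out                             # one band-limited component

    return forward

# Three subnetworks covering increasing frequency bands; the field is their sum.
bands = [(0.0, 4.0), (4.0, 16.0), (16.0, 64.0)]
subnets = [make_subband_net(n_layers=3, hidden=32, band_lo=lo, band_hi=hi)
           for lo, hi in bands]

x = np.linspace(0, 1, 256)[:, None]
components = [net(x) for net in subnets]   # manipulable, interpretable subbands
signal = sum(components)                   # composed full-band signal
```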
Related papers
- BANF: Band-limited Neural Fields for Levels of Detail Reconstruction [28.95113960996025]
We show that via a simple modification, one can obtain neural fields that are low-pass filtered, and in turn show how this can be exploited to obtain a frequency decomposition of the entire signal (a toy illustration of this kind of subband extraction appears after this list).
We demonstrate the validity of our technique by investigating level-of-detail reconstruction, and showing how coarser representations can be computed effectively.
arXiv Detail & Related papers (2024-04-19T17:39:50Z) - Deep Learning on Object-centric 3D Neural Fields [19.781070751341154]
We introduce nf2vec, a framework capable of generating a compact latent representation for an input NF in a single inference pass.
We demonstrate that nf2vec effectively embeds 3D objects represented by the input NFs and showcase how the resulting embeddings can be employed in deep learning pipelines.
arXiv Detail & Related papers (2023-12-20T18:56:45Z) - PolyLUT: Learning Piecewise Polynomials for Ultra-Low Latency FPGA
LUT-based Inference [3.1999570171901786]
We show that by using polynomial building blocks, we can achieve the same accuracy using fewer layers of soft logic than by using linear functions.
We demonstrate the effectiveness of this approach in three tasks: network intrusion detection, jet identification at the CERN Large Hadron Collider, and handwritten digit recognition using the MNIST dataset.
arXiv Detail & Related papers (2023-09-05T15:54:09Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order (a minimal numerical check of this symmetry appears after this list).
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which largely increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer and informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z) - QFF: Quantized Fourier Features for Neural Field Representations [28.82293263445964]
We show that using Quantized Fourier Features (QFF) can result in smaller model size, faster training, and better quality outputs for several applications.
QFF are easy to code, fast to compute, and serve as a simple drop-in addition to many neural field representations.
arXiv Detail & Related papers (2022-12-02T00:11:22Z) - Open- and Closed-Loop Neural Network Verification using Polynomial
Zonotopes [6.591194329459251]
We present a novel approach to efficiently compute tight non-convex enclosures of the image through neural networks with nonlinear activation functions.
In particular, we abstract the input-output relation of each neuron by a polynomial approximation, which is evaluated in a set-based manner using polynomial zonotopes (a small illustration of such a per-neuron polynomial abstraction appears after this list).
This results in superior performance compared to other methods.
arXiv Detail & Related papers (2022-07-06T14:39:19Z) - Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available (a toy version of this vector quantization step appears after this list).
arXiv Detail & Related papers (2022-06-15T17:58:34Z) - PREF: Phasorial Embedding Fields for Compact Neural Representations [54.44527545923917]
We present a phasorial embedding field (PREF) as a compact representation to facilitate neural signal modeling and reconstruction tasks.
Our experiments show that the PREF-based neural signal processing technique is on par with the state-of-the-art in 2D image completion, 3D SDF surface regression, and 5D radiance field reconstruction.
arXiv Detail & Related papers (2022-05-26T17:43:03Z) - The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of the higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
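As noted in the BANF entry above, once a signal can be reconstructed at several low-pass cutoffs, a frequency decomposition follows by differencing successive reconstructions, in the spirit of a Laplacian pyramid. The sketch below shows that generic idea on a plain 1D array with ideal FFT low-pass filters; it is not BANF's procedure for filtering a neural field.

```python
import numpy as np

def lowpass(signal, cutoff_bins):
    """Ideal low-pass filter: zero out all FFT bins at or above `cutoff_bins`."""
    spec = np.fft.rfft(signal)
    spec[cutoff_bins:] = 0.0
    return np.fft.irfft(spec, n=len(signal))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 512, endpoint=False)
signal = (np.sin(2 * np.pi * 3 * t)
          + 0.5 * np.sin(2 * np.pi * 25 * t)
          + 0.1 * rng.normal(size=t.shape))

cutoffs = [8, 64, 257]                    # increasing cutoffs; the last keeps all bins
levels = [lowpass(signal, c) for c in cutoffs]

# Subbands: the coarsest level plus differences between successive levels.
subbands = [levels[0]] + [levels[i] - levels[i - 1] for i in range(1, len(levels))]
reconstruction = np.sum(subbands, axis=0)
assert np.allclose(reconstruction, levels[-1])   # subbands sum back to the full signal
```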
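The permutation symmetry highlighted in the Permutation Equivariant Neural Functionals entry can be checked directly: permuting an MLP's hidden units (the rows of the first weight matrix together with the matching columns of the second) leaves the computed function unchanged. A minimal numerical check with arbitrary weights:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3

# A tiny two-layer MLP with random weights.
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)

def mlp(x, W1, b1, W2, b2):
    h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
    return W2 @ h + b2

# Apply one random permutation to the hidden rows of W1/b1 and the hidden columns of W2.
perm = rng.permutation(d_hidden)
W1_p, b1_p, W2_p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=d_in)
assert np.allclose(mlp(x, W1, b1, W2, b2), mlp(x, W1_p, b1_p, W2_p, b2))
```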
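For the polynomial-zonotope verification entry, the per-neuron abstraction replaces a nonlinear activation by a low-degree polynomial on the neuron's input range plus an error term. The sketch below only illustrates that local step, fitting a quadratic to tanh on an interval and measuring the remainder on a sample grid; the sound, set-based propagation through polynomial zonotopes that the paper performs is not shown.

```python
import numpy as np

def quadratic_abstraction(act, lo, hi, samples=1001):
    """Least-squares quadratic fit of `act` on [lo, hi] plus an error estimate.

    The neuron's nonlinearity can then be over-approximated by p(x) + [-err, err]
    for x in [lo, hi]. Here err is only measured on the sample grid; a sound
    verification tool would bound the remainder rigorously.
    """
    xs = np.linspace(lo, hi, samples)
    coeffs = np.polyfit(xs, act(xs), deg=2)          # c2*x^2 + c1*x + c0
    err = np.max(np.abs(np.polyval(coeffs, xs) - act(xs)))
    return coeffs, err

coeffs, err = quadratic_abstraction(np.tanh, lo=-1.0, hi=2.0)
print("quadratic coefficients:", coeffs)
print("max approximation error on the grid:", err)
```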
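For the Variable Bitrate Neural Fields entry, the core compression step is to store, for every feature-grid cell, an index into a small learned codebook rather than the feature vector itself, with a straight-through estimator carrying gradients during training. A toy numpy version of that quantization step (not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
grid = rng.normal(size=(1024, 16))      # a dense feature grid: 1024 cells x 16 features
codebook = rng.normal(size=(64, 16))    # 64 codewords -> each cell needs only a small index

# Vector quantization: snap every grid feature vector to its nearest codeword.
dists = ((grid[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)   # (1024, 64)
indices = dists.argmin(axis=1)                                     # what actually gets stored
quantized = codebook[indices]

# Straight-through trick used during training: the forward pass sees the quantized
# features, while gradients flow as if the raw grid were used (shown here only as
# the value that would be propagated, since numpy does not track gradients).
straight_through = grid + (quantized - grid)                       # equals `quantized` in value

compression = grid.nbytes / (indices.astype(np.uint8).nbytes + codebook.nbytes)
print(f"~{compression:.1f}x smaller than storing the raw grid")
```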