PREF: Phasorial Embedding Fields for Compact Neural Representations
- URL: http://arxiv.org/abs/2205.13524v1
- Date: Thu, 26 May 2022 17:43:03 GMT
- Title: PREF: Phasorial Embedding Fields for Compact Neural Representations
- Authors: Binbin Huang, Xinhao Yan, Anpei Chen, Shenghua Gao, Jingyi Yu
- Abstract summary: We present a phasorial embedding field PREF as a compact representation to facilitate neural signal modeling and reconstruction tasks.
Our experiments show that our PREF-based neural signal processing technique is on par with the state of the art in 2D image completion, 3D SDF surface regression, and 5D radiance field reconstruction.
- Score: 54.44527545923917
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a phasorial embedding field \emph{PREF} as a compact
representation to facilitate neural signal modeling and reconstruction tasks.
Pure multi-layer perceptron (MLP) based neural techniques are biased towards
low frequency signals and have relied on deep layers or Fourier encoding to
avoid losing details. PREF instead employs a compact and physically explainable
encoding field based on the phasor formulation of the Fourier embedding space.
We conduct a comprehensive theoretical analysis to demonstrate the advantages
of PREF over the latest spatial embedding techniques. We then develop a highly
efficient frequency learning framework using an approximated inverse Fourier
transform scheme for PREF along with a novel Parseval regularizer. Extensive
experiments show our compact PREF-based neural signal processing technique is
on par with the state-of-the-art in 2D image completion, 3D SDF surface
regression, and 5D radiance field reconstruction.
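The abstract describes keeping the embedding in the Fourier (phasor) domain, recovering spatial features through an approximated inverse Fourier transform, and regularizing with a Parseval term. Below is a minimal PyTorch sketch of that idea; the 2D layout, integer frequency grid, dense inverse-Fourier sum, and gradient-energy form of the regularizer are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class PhasorEmbedding2D(nn.Module):
    """Minimal phasor-style Fourier embedding for a 2D signal (sketch)."""

    def __init__(self, num_freqs: int = 16, feat_dim: int = 8):
        super().__init__()
        # Integer frequencies 0..K-1 along each axis (assumed layout).
        self.register_buffer("k", torch.arange(num_freqs, dtype=torch.float32))
        # Complex phasor coefficients stored as (real, imag) pairs.
        self.coeff = nn.Parameter(1e-3 * torch.randn(num_freqs, num_freqs, feat_dim, 2))

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        # xy: (N, 2) query coordinates, assumed normalized to [0, 1).
        phase = 2 * torch.pi * (
            xy[:, 0, None, None] * self.k[None, :, None]
            + xy[:, 1, None, None] * self.k[None, None, :]
        )                                                   # (N, K, K)
        basis = torch.exp(1j * phase)                       # complex exponentials
        coeff = torch.view_as_complex(self.coeff)           # (K, K, F) complex
        feats = torch.einsum("nij,ijf->nf", basis, coeff)   # dense inverse-Fourier sum
        return feats.real                                   # (N, F) real features

    def parseval_reg(self) -> torch.Tensor:
        # Frequency-weighted coefficient energy; by Parseval's theorem this equals
        # the spatial gradient energy of the embedded signal, so it discourages
        # high-frequency noise. This is one plausible Parseval-style term, not
        # necessarily the exact regularizer used in the paper.
        coeff = torch.view_as_complex(self.coeff)
        k2 = self.k[:, None] ** 2 + self.k[None, :] ** 2    # (K, K)
        return (k2[..., None] * coeff.abs() ** 2).mean()
```

In practice the returned features would feed a small MLP head, and the regularizer would be added to the task loss with a small weight; the paper's approximated inverse transform replaces the dense sum above for efficiency.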
Related papers
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z) - On Optimal Sampling for Learning SDF Using MLPs Equipped with Positional
Encoding [79.67071790034609]
We devise a tool to determine the appropriate sampling rate for learning an accurate neural implicit field without undesirable side effects.
It is observed that a PE-equipped MLP has an intrinsic frequency much higher than the highest frequency component in the PE layer.
We empirically show that, in the setting of SDF fitting, this recommended sampling rate is sufficient to secure accurate fitting results.
arXiv Detail & Related papers (2024-01-02T10:51:52Z) - FPO++: Efficient Encoding and Rendering of Dynamic Neural Radiance Fields by Analyzing and Enhancing Fourier PlenOctrees [3.5884936187733403]
Fourier PlenOctrees have been shown to be an efficient representation for real-time rendering of dynamic Neural Radiance Fields (NeRF).
In this paper, we perform an in-depth analysis of the artifacts that arise in this representation and leverage the resulting insights to propose an improved representation.
arXiv Detail & Related papers (2023-10-31T17:59:58Z) - RecFNO: a resolution-invariant flow and heat field reconstruction method
from sparse observations via Fourier neural operator [8.986743262828009]
We propose RecFNO, an end-to-end physical field reconstruction method with both excellent performance and mesh transferability.
The proposed method aims to learn the mapping from sparse observations to flow and heat fields in infinite-dimensional space.
The experiments conducted on fluid mechanics and thermology problems show that the proposed method outperforms existing POD-based and CNN-based methods in most cases.
arXiv Detail & Related papers (2023-02-20T07:20:22Z) - Polynomial Neural Fields for Subband Decomposition and Manipulation [78.2401411189246]
We propose a new class of neural fields called polynomial neural fields (PNFs).
The key advantage of a PNF is that it can represent a signal as a composition of manipulable and interpretable components without losing the merits of neural fields.
We empirically demonstrate that Fourier PNFs enable signal manipulation applications such as texture transfer and scale-space interpolation.
arXiv Detail & Related papers (2023-02-09T18:59:04Z) - Factor Fields: A Unified Framework for Neural Fields and Beyond [50.29013417187368]
We present Factor Fields, a novel framework for modeling and representing signals.
Our framework accommodates several recent signal representations including NeRF, Plenoxels, EG3D, Instant-NGP, and TensoRF.
Our representation achieves better image approximation quality on 2D image regression tasks, higher geometric quality when reconstructing 3D signed distance fields, and higher compactness for radiance field reconstruction tasks.
arXiv Detail & Related papers (2023-02-02T17:06:50Z) - Neural Fourier Filter Bank [18.52741992605852]
We present a novel method to provide efficient and highly detailed reconstructions.
Inspired by wavelets, we learn a neural field that decomposes the signal both spatially and frequency-wise.
arXiv Detail & Related papers (2022-12-04T03:45:08Z) - QFF: Quantized Fourier Features for Neural Field Representations [28.82293263445964]
We show that using Quantized Fourier Features (QFF) can result in smaller model size, faster training, and better quality outputs for several applications.
QFF are easy to code, fast to compute, and serve as a simple drop-in addition to many neural field representations.
arXiv Detail & Related papers (2022-12-02T00:11:22Z) - Fourier Features Let Networks Learn High Frequency Functions in Low
Dimensional Domains [69.62456877209304]
We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron to learn high-frequency functions; a minimal sketch of this mapping is given after this list.
These results shed light on recent advances in computer vision and graphics that achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-06-18T17:59:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.