PREF: Phasorial Embedding Fields for Compact Neural Representations
- URL: http://arxiv.org/abs/2205.13524v1
- Date: Thu, 26 May 2022 17:43:03 GMT
- Title: PREF: Phasorial Embedding Fields for Compact Neural Representations
- Authors: Binbin Huang, Xinhao Yan, Anpei Chen, Shenghua Gao, Jingyi Yu
- Abstract summary: We present a phasorial embedding field PREF as a compact representation to facilitate neural signal modeling and reconstruction tasks.
Our experiments show the PREF-based neural signal processing technique is on par with the state-of-the-art in 2D image completion, 3D SDF surface regression, and 5D radiance field reconstruction.
- Score: 54.44527545923917
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a phasorial embedding field PREF as a compact
representation to facilitate neural signal modeling and reconstruction tasks.
Pure multi-layer perceptron (MLP) based neural techniques are biased towards
low frequency signals and have relied on deep layers or Fourier encoding to
avoid losing details. PREF instead employs a compact and physically explainable
encoding field based on the phasor formulation of the Fourier embedding space.
We conduct a comprehensive theoretical analysis to demonstrate the advantages
of PREF over the latest spatial embedding techniques. We then develop a highly
efficient frequency learning framework using an approximated inverse Fourier
transform scheme for PREF along with a novel Parseval regularizer. Extensive
experiments show our compact PREF-based neural signal processing technique is
on par with the state-of-the-art in 2D image completion, 3D SDF surface
regression, and 5D radiance field reconstruction.
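To make the abstract's ingredients concrete, here is a minimal, illustrative sketch (not the authors' implementation): a grid of learnable complex Fourier coefficients (phasors) is evaluated at continuous coordinates as the individual terms of an inverse DFT, the resulting features feed a small MLP, and a Parseval-style penalty on the total spectral energy keeps the learned spectrum compact. The names (PhasorEmbedding, n_freqs, the 1e-4 weight) are hypothetical, and the explicit per-frequency evaluation stands in for PREF's approximated inverse Fourier transform over a structured phasor volume.

```python
import torch
import torch.nn as nn


class PhasorEmbedding(nn.Module):
    """Learnable complex coefficients c_k on a small 2D frequency grid.

    A coordinate x in [0,1)^2 is embedded as the real and imaginary parts of
    c_k * exp(i * 2*pi * <k, x>) for every frequency k, i.e. the individual
    terms of an inverse DFT, which are then combined by a small MLP.
    """

    def __init__(self, n_freqs: int = 16):
        super().__init__()
        # Integer frequency grid k = (kx, ky) with kx, ky in {0, ..., n_freqs - 1}.
        kx, ky = torch.meshgrid(torch.arange(n_freqs), torch.arange(n_freqs), indexing="ij")
        self.register_buffer("freqs", torch.stack([kx, ky], -1).reshape(-1, 2).float())
        # One learnable phasor (complex coefficient) per frequency.
        self.coeffs = nn.Parameter(0.01 * torch.randn(self.freqs.shape[0], dtype=torch.cfloat))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        phase = 2.0 * torch.pi * (x @ self.freqs.T)           # (B, K)
        phasor = self.coeffs * torch.exp(1j * phase)          # (B, K), complex
        return torch.cat([phasor.real, phasor.imag], dim=-1)  # (B, 2K)

    def parseval_penalty(self) -> torch.Tensor:
        # Parseval's theorem: the spectral energy sum_k |c_k|^2 matches the signal
        # energy (up to normalization); penalizing it keeps the spectrum compact.
        return (self.coeffs.abs() ** 2).sum()


embed = PhasorEmbedding(n_freqs=16)
mlp = nn.Sequential(nn.Linear(2 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, 3))

x = torch.rand(1024, 2)        # query coordinates, e.g. pixel locations in [0,1)^2
target = torch.rand(1024, 3)   # placeholder RGB targets for a 2D image-fitting task
pred = mlp(embed(x))
loss = nn.functional.mse_loss(pred, target) + 1e-4 * embed.parseval_penalty()
loss.backward()
```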
Related papers
- Robustifying Fourier Features Embeddings for Implicit Neural Representations [25.725097757343367]
Implicit Neural Representations (INRs) employ neural networks to represent continuous functions by mapping coordinates to the corresponding values of the target function.
INRs face a challenge known as spectral bias when dealing with scenes containing varying frequencies.
We propose the use of multi-layer perceptrons (MLPs) without additive.
arXiv Detail & Related papers (2025-02-08T07:43:37Z)
- Optimized Sampling for Non-Line-of-Sight Imaging Using Modified Fast Fourier Transforms [6.866110149269]
Non-line-of-Sight (NLOS) imaging systems collect light at a diffuse relay surface and input this measurement into computational algorithms that output a 3D reconstruction.
These algorithms utilize the Fast Fourier Transform (FFT) to accelerate the reconstruction process but require both input and output to be sampled spatially with uniform grids.
In this work, we demonstrate that existing NLOS imaging setups typically oversample the relay surface spatially, explaining why the measurement can be compressed without sacrificing reconstruction quality.
arXiv Detail & Related papers (2025-01-09T13:52:30Z)
- FOF-X: Towards Real-time Detailed Human Reconstruction from a Single Image [68.84221452621674]
We introduce FOF-X for real-time reconstruction of detailed human geometry from a single image.
FOF-X avoids the performance degradation caused by texture and lighting.
We enhance the inter-conversion algorithms between FOF and mesh representations with a Laplacian constraint and an automaton-based discontinuity matcher.
arXiv Detail & Related papers (2024-12-08T14:46:29Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- Polynomial Neural Fields for Subband Decomposition and Manipulation [78.2401411189246]
We propose a new class of neural fields called polynomial neural fields (PNFs).
The key advantage of a PNF is that it can represent a signal as a composition of manipulable and interpretable components without losing the merits of neural fields.
We empirically demonstrate that Fourier PNFs enable signal manipulation applications such as texture transfer and scale-space.
arXiv Detail & Related papers (2023-02-09T18:59:04Z)
- Factor Fields: A Unified Framework for Neural Fields and Beyond [50.29013417187368]
We present Factor Fields, a novel framework for modeling and representing signals.
Our framework accommodates several recent signal representations including NeRF, Plenoxels, EG3D, Instant-NGP, and TensoRF.
Our representation achieves better image approximation quality on 2D image regression tasks, higher geometric quality when reconstructing 3D signed distance fields, and higher compactness for radiance field reconstruction tasks.
arXiv Detail & Related papers (2023-02-02T17:06:50Z)
- Neural Fourier Filter Bank [18.52741992605852]
We present a novel method to provide efficient and highly detailed reconstructions.
Inspired by wavelets, we learn a neural field that decomposes the signal both spatially and frequency-wise.
arXiv Detail & Related papers (2022-12-04T03:45:08Z)
- QFF: Quantized Fourier Features for Neural Field Representations [28.82293263445964]
We show that using Quantized Fourier Features (QFF) can result in smaller model size, faster training, and better quality outputs for several applications.
QFF are easy to code, fast to compute, and serve as a simple drop-in addition to many neural field representations.
arXiv Detail & Related papers (2022-12-02T00:11:22Z)
- Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains [69.62456877209304]
We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron to learn high-frequency functions.
Results shed light on advances in computer vision and graphics that achieve state-of-the-art results.
arXiv Detail & Related papers (2020-06-18T17:59:11Z)
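The mapping described in the entry above is the random Fourier feature encoding gamma(v) = [cos(2*pi*Bv), sin(2*pi*Bv)] with B drawn from a Gaussian; below is a minimal sketch of it. The helper name fourier_features and the choices of m and sigma are illustrative, not taken from the paper's released code.

```python
import torch
import torch.nn as nn


def fourier_features(v: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Map low-dimensional coordinates v (N, d) to (N, 2m) random Fourier features."""
    proj = 2.0 * torch.pi * (v @ B.T)                  # (N, m)
    return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)


d, m, sigma = 2, 256, 10.0
B = torch.randn(m, d) * sigma                          # fixed random frequency matrix
mlp = nn.Sequential(nn.Linear(2 * m, 256), nn.ReLU(),
                    nn.Linear(256, 256), nn.ReLU(),
                    nn.Linear(256, 3))

coords = torch.rand(4096, d)                           # e.g. pixel coordinates in [0,1)^2
rgb_pred = mlp(fourier_features(coords, B))            # the MLP can now fit high frequencies
```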
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.