FINER: Flexible spectral-bias tuning in Implicit NEural Representation
by Variable-periodic Activation Functions
- URL: http://arxiv.org/abs/2312.02434v1
- Date: Tue, 5 Dec 2023 02:23:41 GMT
- Title: FINER: Flexible spectral-bias tuning in Implicit NEural Representation
by Variable-periodic Activation Functions
- Authors: Zhen Liu, Hao Zhu, Qi Zhang, Jingde Fu, Weibing Deng, Zhan Ma, Yanwen
Guo, Xun Cao
- Abstract summary: Implicit Neural Representation is causing a revolution in the field of signal processing.
Current INR techniques suffer from a restricted capability to tune their supported frequency set.
We introduce variable-periodic activation functions, on which we build the proposed FINER.
We demonstrate the capabilities of FINER in the contexts of 2D image fitting, 3D signed distance field representation, and 5D neural radiance fields optimization.
- Score: 40.80112550091512
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit Neural Representation (INR), which utilizes a neural network to map
coordinate inputs to corresponding attributes, is causing a revolution in the
field of signal processing. However, current INR techniques suffer from a
restricted capability to tune their supported frequency set, resulting in
imperfect performance when representing complex signals with multiple
frequencies. We have identified that this frequency-related problem can be
greatly alleviated by introducing variable-periodic activation functions, for
which we propose FINER. By initializing the bias of the neural network within
different ranges, sub-functions with various frequencies in the
variable-periodic function are selected for activation. Consequently, the
supported frequency set of FINER can be flexibly tuned, leading to improved
performance in signal representation. We demonstrate the capabilities of FINER
in the contexts of 2D image fitting, 3D signed distance field representation,
and 5D neural radiance fields optimization, and we show that it outperforms
existing INRs.
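To make the bias-initialization idea concrete, below is a minimal PyTorch sketch of a FINER-style layer built around the variable-periodic activation sin((1+|z|)z); the omega_0 scaling, the SIREN-style weight init, and the bias range k are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FinerLayer(nn.Module):
    """One hidden layer with the variable-periodic activation sin((1 + |z|) * z).

    The bias is initialized uniformly in [-k, k]; a larger k selects
    sub-functions with higher local frequencies, widening the supported
    frequency set of the network.
    """

    def __init__(self, in_features, out_features, omega_0=30.0, k=1.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        # Illustrative init: SIREN-style weights, wide uniform bias.
        bound = (6.0 / in_features) ** 0.5 / omega_0
        nn.init.uniform_(self.linear.weight, -bound, bound)
        nn.init.uniform_(self.linear.bias, -k, k)

    def forward(self, x):
        z = self.omega_0 * self.linear(x)
        return torch.sin((1.0 + torch.abs(z)) * z)  # variable-periodic sine

# Toy usage: fit a 2D image, coordinates in [-1, 1]^2 -> RGB.
model = nn.Sequential(
    FinerLayer(2, 256, k=10.0),   # wide bias range widens the frequency set
    FinerLayer(256, 256),
    nn.Linear(256, 3),
)
coords = torch.rand(1024, 2) * 2 - 1
rgb = model(coords)  # (1024, 3)
```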
Related papers
- Implicit Neural Representations with Fourier Kolmogorov-Arnold Networks [4.499833362998488]
Implicit neural representations (INRs) use neural networks to provide continuous and resolution-independent representations of complex signals.
The proposed FKAN utilizes learnable activation functions modeled as Fourier series in the first layer to effectively control and learn the task-specific frequency components.
Experimental results show that our proposed FKAN model outperforms three state-of-the-art baseline schemes.
arXiv Detail & Related papers (2024-09-14T05:53:33Z)
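As a hedged illustration of the FKAN idea above, the sketch below implements a first layer whose per-edge activations are learnable Fourier series; the number of frequencies, the coefficient initialization, and the layers that follow are assumptions made for the sake of a runnable example.

```python
import torch
import torch.nn as nn

class FourierKANLayer(nn.Module):
    """First-layer sketch: each input-output edge applies a learnable
    activation phi(x) = sum_k a_k sin(k*x) + b_k cos(k*x), KAN-style,
    and the outputs sum over input dimensions."""

    def __init__(self, in_features, out_features, num_frequencies=8):
        super().__init__()
        self.register_buffer("freqs", torch.arange(1, num_frequencies + 1).float())
        # Learnable Fourier coefficients per (output, input, frequency).
        self.a = nn.Parameter(0.1 * torch.randn(out_features, in_features, num_frequencies))
        self.b = nn.Parameter(0.1 * torch.randn(out_features, in_features, num_frequencies))

    def forward(self, x):                       # x: (batch, in_features)
        arg = x.unsqueeze(-1) * self.freqs      # (batch, in, K)
        sin, cos = torch.sin(arg), torch.cos(arg)
        # Sum over input dims and frequencies for each output unit.
        out = torch.einsum("bik,oik->bo", sin, self.a)
        out = out + torch.einsum("bik,oik->bo", cos, self.b)
        return out

inr = nn.Sequential(FourierKANLayer(2, 64), nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 3))
print(inr(torch.rand(8, 2) * 2 - 1).shape)  # torch.Size([8, 3])
```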
- FINER++: Building a Family of Variable-periodic Functions for Activating Implicit Neural Representation [39.116375158815515]
Implicit Neural Representation (INR) is causing a revolution in the field of signal processing.
Existing INR techniques suffer from a "frequency"-specified spectral bias and a capacity-convergence gap.
We propose the FINER++ framework, which extends existing periodic/non-periodic activation functions to variable-periodic ones.
arXiv Detail & Related papers (2024-07-28T09:24:57Z)
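The FINER++ summary above describes a generic recipe: extending existing activations to variable-periodic form. The sketch below shows one plausible reading, substituting (1+|x|)x for x inside the activation, as FINER does for the sine; applying the same substitution to a Gaussian is our assumption.

```python
import torch

def variable_periodic(act):
    """Wrap an activation f into a variable-periodic form f((1 + |x|) * x).

    This mirrors the FINER-style substitution for the sine; using it on
    other activations is an illustrative assumption."""
    def wrapped(x):
        return act((1.0 + torch.abs(x)) * x)
    return wrapped

vp_sine = variable_periodic(torch.sin)                        # FINER-style sine
vp_gauss = variable_periodic(lambda x: torch.exp(-(x ** 2)))  # variable-periodic Gaussian

x = torch.linspace(-4, 4, 9)
print(vp_sine(x))
print(vp_gauss(x))
```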
- Fourier-enhanced Implicit Neural Fusion Network for Multispectral and Hyperspectral Image Fusion [12.935592400092712]
Implicit neural representations (INR) have made significant strides in various vision-related domains.
However, INR is prone to losing high-frequency information and lacks global perceptual capability.
This paper introduces a Fourier-enhanced Implicit Neural Fusion Network (FeINFN) designed specifically for the multispectral and hyperspectral image fusion (MHIF) task.
arXiv Detail & Related papers (2024-04-23T16:14:20Z)
- Locality-Aware Generalizable Implicit Neural Representation [54.93702310461174]
Generalizable implicit neural representation (INR) enables a single continuous function to represent multiple data instances.
We propose a novel framework for generalizable INR that combines a transformer encoder with a locality-aware INR decoder.
Our framework significantly outperforms previous generalizable INRs and validates the usefulness of the locality-aware latents for downstream tasks.
arXiv Detail & Related papers (2023-10-09T11:26:58Z)
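The entry above pairs a transformer encoder with a locality-aware INR decoder. The sketch below shows one way such a pipeline might look, with each coordinate conditioned on its nearest latent token; the 1D token grid, the nearest-token lookup, and all sizes are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class LocalityAwareINR(nn.Module):
    """Sketch: a transformer encodes an instance into a 1D grid of local
    latent tokens; the decoder conditions each coordinate on its nearest
    token. Grid layout and sizes are illustrative assumptions."""

    def __init__(self, num_tokens=16, dim=64):
        super().__init__()
        self.num_tokens = num_tokens
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.decoder = nn.Sequential(nn.Linear(dim + 1, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, instance_tokens, coords):
        # instance_tokens: (B, num_tokens, dim); coords: (B, N, 1) in [0, 1].
        latents = self.encoder(instance_tokens)
        idx = (coords.squeeze(-1) * (self.num_tokens - 1)).round().long()  # nearest token
        local = torch.gather(latents, 1, idx.unsqueeze(-1).expand(-1, -1, latents.size(-1)))
        return self.decoder(torch.cat([local, coords], dim=-1))

model = LocalityAwareINR()
out = model(torch.randn(2, 16, 64), torch.rand(2, 100, 1))
print(out.shape)  # torch.Size([2, 100, 1])
```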
- NeuRBF: A Neural Fields Representation with Adaptive Radial Basis Functions [93.02515761070201]
We present a novel type of neural field that uses general radial bases for signal representation.
Our method builds upon general radial bases with flexible kernel position and shape, which have higher spatial adaptivity and can more closely fit target signals.
When applied to neural radiance field reconstruction, our method achieves state-of-the-art rendering quality, with small model size and comparable training speed.
arXiv Detail & Related papers (2023-09-27T06:32:05Z)
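A minimal sketch of a radial-basis field with learnable kernel positions and shapes, in the spirit of the NeuRBF summary above; the Gaussian basis, kernel count, and MLP head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveRBFField(nn.Module):
    """Sketch: radial kernels with learnable positions (centers) and shapes
    (per-kernel anisotropic scales) feed a small MLP."""

    def __init__(self, num_kernels=256, in_dim=2, out_dim=3, hidden=64):
        super().__init__()
        self.centers = nn.Parameter(torch.rand(num_kernels, in_dim) * 2 - 1)  # positions
        self.log_scales = nn.Parameter(torch.zeros(num_kernels, in_dim))      # shapes
        self.weights = nn.Parameter(torch.randn(num_kernels, hidden) * 0.05)
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

    def forward(self, x):                                 # x: (N, in_dim)
        diff = x.unsqueeze(1) - self.centers              # (N, K, in_dim)
        d2 = (diff * self.log_scales.exp()).pow(2).sum(-1)
        phi = torch.exp(-d2)                              # (N, K) radial responses
        return self.mlp(phi @ self.weights)

field = AdaptiveRBFField()
print(field(torch.rand(128, 2) * 2 - 1).shape)  # torch.Size([128, 3])
```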
- ResFields: Residual Neural Fields for Spatiotemporal Signals [61.44420761752655]
ResFields is a novel class of networks specifically designed to effectively represent complex temporal signals.
We conduct comprehensive analysis of the properties of ResFields and propose a matrix factorization technique to reduce the number of trainable parameters.
We demonstrate the practical utility of ResFields by showcasing its effectiveness in capturing dynamic 3D scenes from sparse RGBD cameras.
arXiv Detail & Related papers (2023-09-06T16:59:36Z)
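The ResFields summary above mentions residual weights and a matrix factorization that cuts trainable parameters. A hedged sketch of a linear layer whose weight at frame t is W + dW(t), with dW(t) factorized into per-frame coefficients times a shared basis, follows; the rank, timestep count, and initialization are assumptions.

```python
import torch
import torch.nn as nn

class ResFieldLinear(nn.Module):
    """Sketch: the effective weight at time t is W + dW(t), where dW(t)
    is factorized as coeffs[t] @ basis to keep the trainable-parameter
    count low."""

    def __init__(self, in_f, out_f, num_timesteps=100, rank=4):
        super().__init__()
        self.linear = nn.Linear(in_f, out_f)                              # time-invariant part
        self.coeffs = nn.Parameter(torch.zeros(num_timesteps, rank))      # per-frame codes
        self.basis = nn.Parameter(torch.randn(rank, out_f * in_f) * 0.01) # shared basis
        self.in_f, self.out_f = in_f, out_f

    def forward(self, x, t):                 # x: (N, in_f), t: integer frame index
        dW = (self.coeffs[t] @ self.basis).view(self.out_f, self.in_f)
        return x @ (self.linear.weight + dW).T + self.linear.bias

layer = ResFieldLinear(3, 64)
y = layer(torch.rand(10, 3), t=7)
print(y.shape)  # torch.Size([10, 64])
```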
- SPDER: Semiperiodic Damping-Enabled Object Representation [7.4297019016687535]
We present a neural network architecture designed to naturally learn a positional embedding.
The proposed architecture, SPDER, is a simple MLP that uses an activation function composed of a sinusoid multiplied by a sublinear function.
Our results indicate that SPDERs speed up training by 10x and converge to losses 1,500-50,000x lower than those of the state-of-the-art for image representation.
arXiv Detail & Related papers (2023-06-27T06:49:40Z)
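As a concrete instance of the "sinusoid multiplied by a sublinear function" activation described above, the sketch below uses sin(x) * sqrt(|x|); choosing sqrt(|x|) as the sublinear factor and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

def spder_activation(x):
    """Sinusoid damped by a sublinear factor; sin(x) * sqrt(|x|) is one
    concrete instance of the 'sinusoid times sublinear' form."""
    return torch.sin(x) * torch.sqrt(torch.abs(x))

class SPDERLayer(nn.Module):
    def __init__(self, in_f, out_f):
        super().__init__()
        self.linear = nn.Linear(in_f, out_f)

    def forward(self, x):
        return spder_activation(self.linear(x))

net = nn.Sequential(SPDERLayer(2, 128), SPDERLayer(128, 128), nn.Linear(128, 3))
print(net(torch.rand(16, 2) * 2 - 1).shape)  # torch.Size([16, 3])
```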
- Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which substantially increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer and informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z)
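A hedged sketch of the bottleneck-encoder idea above: a small set of learnable queries cross-attends to many context tokens, producing fewer, more informative tokens; the Perceiver-style cross-attention and all sizes are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class BottleneckEncoder(nn.Module):
    """Sketch: learnable queries cross-attend to the (possibly very many)
    context tokens, yielding fewer tokens and cutting downstream cost."""

    def __init__(self, dim=64, num_bottleneck=16, nhead=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_bottleneck, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, nhead, batch_first=True)

    def forward(self, context_tokens):              # (B, N_ctx, dim), N_ctx large
        q = self.queries.unsqueeze(0).expand(context_tokens.size(0), -1, -1)
        out, _ = self.attn(q, context_tokens, context_tokens)
        return out                                  # (B, num_bottleneck, dim)

enc = BottleneckEncoder()
print(enc(torch.randn(2, 1024, 64)).shape)  # torch.Size([2, 16, 64])
```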
- DINER: Disorder-Invariant Implicit Neural Representation [33.10256713209207]
Implicit neural representation (INR) characterizes the attributes of a signal as a function of corresponding coordinates.
We propose the disorder-invariant implicit neural representation (DINER) by augmenting a hash-table to a traditional INR backbone.
arXiv Detail & Related papers (2022-11-15T03:34:24Z)
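The DINER summary above augments a traditional INR backbone with a hash table. Below is a minimal sketch in which a full-resolution learnable table maps each discrete coordinate index to a trainable low-dimensional coordinate before a plain MLP; the table width and MLP sizes are assumptions.

```python
import torch
import torch.nn as nn

class DINERStyleINR(nn.Module):
    """Sketch: a full-resolution learnable table maps each discrete input
    coordinate (by index) to a new, trainable coordinate, which then feeds
    a plain MLP backbone."""

    def __init__(self, num_coords, mapped_dim=2, hidden=64, out_dim=3):
        super().__init__()
        # One trainable entry per discrete input coordinate of the signal.
        self.table = nn.Parameter(torch.rand(num_coords, mapped_dim) * 2 - 1)
        self.mlp = nn.Sequential(
            nn.Linear(mapped_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, idx):            # idx: (N,) integer coordinate indices
        return self.mlp(self.table[idx])

inr = DINERStyleINR(num_coords=64 * 64)     # e.g. a 64x64 image
rgb = inr(torch.arange(64 * 64))
print(rgb.shape)  # torch.Size([4096, 3])
```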