Synergistic Integration of Coordinate Network and Tensorial Feature for Improving Neural Radiance Fields from Sparse Inputs
- URL: http://arxiv.org/abs/2405.07857v3
- Date: Wed, 5 Jun 2024 09:32:36 GMT
- Title: Synergistic Integration of Coordinate Network and Tensorial Feature for Improving Neural Radiance Fields from Sparse Inputs
- Authors: Mingyu Kim, Jun-Seong Kim, Se-Young Yun, Jin-Hwa Kim
- Abstract summary: We propose a method that integrates a multi-plane representation with a coordinate-based network known for its strong bias toward low-frequency signals.
We demonstrate that our proposed method outperforms baseline models for both static and dynamic NeRFs with sparse inputs.
- Score: 26.901819636977912
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The multi-plane representation has been highlighted for its fast training and inference across static and dynamic neural radiance fields. This approach constructs relevant features via projection onto learnable grids and interpolation between adjacent vertices. However, it has limitations in capturing low-frequency details and tends to overuse parameters for low-frequency features due to its bias toward fine details, despite its multi-resolution concept. This phenomenon leads to instability and inefficiency when training poses are sparse. In this work, we propose a method that synergistically integrates the multi-plane representation with a coordinate-based MLP known for its strong bias toward low-frequency signals. The coordinate-based network is responsible for capturing low-frequency details, while the multi-plane representation focuses on capturing fine-grained details. We demonstrate that using residual connections between them seamlessly preserves their respective inherent properties. Additionally, the proposed progressive training scheme accelerates the disentanglement of these two features. We demonstrate empirically that our proposed method not only outperforms baseline models for both static and dynamic NeRFs with sparse inputs, but also achieves comparable results with fewer parameters.
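The abstract describes the architecture only at a high level; the following is a minimal PyTorch sketch of that idea, assuming a tri-plane layout, summed plane features, and a scalar progressive weight. All module names, sizes, and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: a coordinate MLP (biased toward low frequencies) combined
# with multi-plane (tri-plane) features via a residual connection; a scalar
# weight `alpha` progressively enables the fine-grained branch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriPlaneFeatures(nn.Module):
    """Learnable XY/XZ/YZ feature planes queried by bilinear interpolation."""
    def __init__(self, channels=16, resolution=128):
        super().__init__()
        self.planes = nn.Parameter(0.1 * torch.randn(3, channels, resolution, resolution))

    def forward(self, xyz):                      # xyz in [-1, 1], shape (N, 3)
        coords = torch.stack([xyz[:, [0, 1]],    # XY plane
                              xyz[:, [0, 2]],    # XZ plane
                              xyz[:, [1, 2]]])   # YZ plane, shape (3, N, 2)
        feats = F.grid_sample(self.planes, coords.unsqueeze(2),
                              align_corners=True)          # (3, C, N, 1)
        return feats.squeeze(-1).sum(0).t()                # (N, C); sum is illustrative

class SynergisticField(nn.Module):
    def __init__(self, channels=16, hidden=64):
        super().__init__()
        self.coord_mlp = nn.Sequential(                    # low-frequency branch
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, channels))
        self.planes = TriPlaneFeatures(channels)           # high-frequency branch
        self.head = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 4))    # RGB + density

    def forward(self, xyz, alpha=1.0):
        low = self.coord_mlp(xyz)
        # Residual connection: plane features refine the coordinate-MLP output.
        return self.head(low + alpha * self.planes(xyz))
```

Here `alpha` would be ramped from 0 to 1 over early iterations as a stand-in for the progressive training scheme the abstract mentions: the MLP settles on the low-frequency signal first, and the planes then add fine detail on top of it.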
Related papers
- FreSh: Frequency Shifting for Accelerated Neural Representation Learning [11.175745750843484]
Implicit Neural Representations (INRs) have recently gained attention as a powerful approach for continuously representing signals such as images, videos, and 3D shapes using multilayer perceptrons (MLPs).
MLPs are known to exhibit a low-frequency bias, limiting their ability to capture high-frequency details accurately.
We propose frequency shifting (or FreSh) to align the frequency spectrum of the initial output with that of the target signal.
arXiv Detail & Related papers (2024-10-07T14:05:57Z) - Attention Beats Linear for Fast Implicit Neural Representation Generation [13.203243059083533]
We propose Attention-based Localized INR (ANR) composed of a localized attention layer (LAL) and a global representation vector.
With instance-specific representation and instance-agnostic ANR parameters, the target signals are well reconstructed as a continuous function.
arXiv Detail & Related papers (2024-07-22T03:52:18Z) - Coordinate-Aware Modulation for Neural Fields [11.844561374381575]
We propose a novel way of exploiting both MLPs and grid representations in neural fields.
We suggest Coordinate-Aware Modulation (CAM), which modulates the intermediate features using scale and shift parameters extracted from the grid representations (a generic sketch of this scale-and-shift pattern follows this list).
arXiv Detail & Related papers (2023-11-25T10:42:51Z) - ResFields: Residual Neural Fields for Spatiotemporal Signals [61.44420761752655]
ResFields is a novel class of networks specifically designed to effectively represent complex temporal signals.
We conduct a comprehensive analysis of the properties of ResFields and propose a matrix-factorization technique to reduce the number of trainable parameters (a low-rank sketch of this idea follows this list).
We demonstrate the practical utility of ResFields by showcasing its effectiveness in capturing dynamic 3D scenes from sparse RGBD cameras.
arXiv Detail & Related papers (2023-09-06T16:59:36Z) - Tunable Convolutions with Parametric Multi-Loss Optimization [5.658123802733283]
The behavior of neural networks is irremediably determined by the specific loss and data used during training.
It is often desirable to tune the model at inference time based on external factors such as preferences of the user or dynamic characteristics of the data.
This is especially important to balance the perception-distortion trade-off of ill-posed image-to-image translation tasks.
arXiv Detail & Related papers (2023-04-03T11:36:10Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details (a generic hash-encoding sketch follows this list).
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Hierarchical Spherical CNNs with Lifting-based Adaptive Wavelets for Pooling and Unpooling [101.72318949104627]
We propose a novel framework of hierarchical convolutional neural networks (HS-CNNs) with a lifting structure to learn adaptive spherical wavelets for pooling and unpooling.
LiftHS-CNN ensures a more efficient hierarchical feature learning for both image- and pixel-level tasks.
arXiv Detail & Related papers (2022-05-31T07:23:42Z) - Deep Impulse Responses: Estimating and Parameterizing Filters with Deep Networks [76.830358429947]
Impulse response estimation in high noise and in-the-wild settings is a challenging problem.
We propose a novel framework for parameterizing and estimating impulse responses based on recent advances in neural representation learning.
arXiv Detail & Related papers (2022-02-07T18:57:23Z) - Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue for representing general signals.
The current approach is difficult to scale to a large number of signals or a large dataset.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
arXiv Detail & Related papers (2021-10-27T18:02:53Z) - Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
Distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We formulate the essential mathematical functions to describe the R-D behavior of NIC using deep networks and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z) - Learning Graph-Convolutional Representations for Point Cloud Denoising [31.557988478764997]
We propose a deep neural network that can deal with the permutation-invariance problem encountered by learning-based point cloud processing methods.
The network is fully-convolutional and can build complex hierarchies of features by dynamically constructing neighborhood graphs.
It is especially robust both at high noise levels and in the presence of structured noise such as that encountered in real LiDAR scans.
arXiv Detail & Related papers (2020-07-06T08:11:28Z)
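The Coordinate-Aware Modulation entry above mentions modulating intermediate MLP features with scale and shift values taken from grid representations. Below is a minimal sketch of that general (FiLM-style) pattern; the single 2D grid, its resolution, and the `ModulatedLayer` name are illustrative assumptions, not the paper's code.

```python
# Sketch of grid-based scale/shift (FiLM-style) modulation of MLP features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulatedLayer(nn.Module):
    def __init__(self, dim=64, resolution=64):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        # One 2D grid storing per-location (scale, shift) pairs for `dim` channels.
        self.grid = nn.Parameter(0.01 * torch.randn(1, 2 * dim, resolution, resolution))

    def forward(self, h, xy):                    # h: (N, dim); xy in [-1, 1]: (N, 2)
        mod = F.grid_sample(self.grid, xy.view(1, -1, 1, 2),
                            align_corners=True)  # (1, 2*dim, N, 1)
        mod = mod.squeeze(0).squeeze(-1).t()     # (N, 2*dim)
        scale, shift = mod.chunk(2, dim=-1)
        # Modulate the intermediate features with the interpolated scale and shift.
        return torch.relu((1.0 + scale) * self.linear(h) + shift)
```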
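The ResFields entry above mentions residual layers and a matrix factorization that reduces trainable parameters. The sketch below shows one plausible reading of that idea: a base linear layer plus a time-indexed, low-rank weight residual. The specific factorization here is an assumption for illustration, not the paper's exact scheme.

```python
# Sketch of a residual, time-conditioned linear layer: base weights plus a
# low-rank residual mixed by per-frame coefficients.
import torch
import torch.nn as nn

class ResidualLinear(nn.Module):
    def __init__(self, d_in, d_out, rank=8, timesteps=100):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.coeffs = nn.Parameter(torch.zeros(timesteps, rank))      # per-frame codes
        self.spans = nn.Parameter(0.01 * torch.randn(rank, d_out, d_in))

    def forward(self, x, t):                     # x: (N, d_in); t: frame index
        # Low-rank residual: `rank` spanning matrices mixed by the frame's
        # coefficients -- far fewer parameters than one full matrix per frame.
        delta = torch.einsum('r,roi->oi', self.coeffs[t], self.spans)
        return self.base(x) + x @ delta.t()
```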
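The NAF entry above mentions a hash-coding encoder for capturing high-frequency details. Below is a generic single-level spatial-hash encoding with trilinear interpolation in the style popularized by Instant-NGP; the table size, resolution, and prime constants follow that common recipe and are not taken from the NAF paper, which uses a multi-resolution variant.

```python
# Sketch of a single-level spatial hash encoding with trilinear interpolation.
import torch
import torch.nn as nn

class HashEncoding(nn.Module):
    def __init__(self, table_size=2**16, features=2, resolution=128):
        super().__init__()
        self.table = nn.Parameter(1e-4 * torch.randn(table_size, features))
        self.table_size, self.res = table_size, resolution
        self.register_buffer('primes', torch.tensor([1, 2654435761, 805459861]))

    def hash(self, idx):                          # idx: (N, 3) integer grid corners
        h = (idx[:, 0] * self.primes[0]) ^ (idx[:, 1] * self.primes[1]) \
            ^ (idx[:, 2] * self.primes[2])        # XOR of prime-scaled coordinates
        return h % self.table_size

    def forward(self, xyz):                       # xyz in [0, 1], shape (N, 3)
        x = xyz * (self.res - 1)
        lo = x.floor().long()                     # lower grid corner
        w = x - lo.float()                        # trilinear weights, (N, 3)
        out = 0.0
        for dx in (0, 1):                         # accumulate the 8 cell corners
            for dy in (0, 1):
                for dz in (0, 1):
                    corner = lo + torch.tensor([dx, dy, dz], device=lo.device)
                    weight = ((w[:, 0] if dx else 1 - w[:, 0]) *
                              (w[:, 1] if dy else 1 - w[:, 1]) *
                              (w[:, 2] if dz else 1 - w[:, 2]))
                    out = out + weight.unsqueeze(-1) * self.table[self.hash(corner)]
        return out                                # (N, features)
```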