Streamable Neural Fields
- URL: http://arxiv.org/abs/2207.09663v1
- Date: Wed, 20 Jul 2022 05:42:02 GMT
- Title: Streamable Neural Fields
- Authors: Junwoo Cho, Seungtae Nam, Daniel Rho, Jong Hwan Ko, Eunbyung Park
- Abstract summary: We propose streamable neural fields, a single model that consists of executable sub-networks of various widths.
The proposed architectural and training techniques enable a single network to be streamable over time and reconstruct different qualities and parts of signals.
Experimental results have shown the effectiveness of our method in various domains, such as 2D images, videos, and 3D signed distance functions.
- Score: 5.404549859703572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural fields have emerged as a new data representation paradigm and have
shown remarkable success in various signal representations. Since they preserve
signals in their network parameters, the need to send and receive the entire set of
model parameters for data transfer prevents this emerging technology from being used
in many practical scenarios. We propose streamable neural fields, a single
model that consists of executable sub-networks of various widths. The proposed
architectural and training techniques enable a single network to be streamable
over time and reconstruct different qualities and parts of signals. For
example, a smaller sub-network produces smooth and low-frequency signals, while
a larger sub-network can represent fine details. Experimental results have
shown the effectiveness of our method in various domains, such as 2D images,
videos, and 3D signed distance functions. Finally, we demonstrate that our
proposed method improves training stability by exploiting parameter sharing.
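To make the idea above concrete, here is a minimal, hypothetical sketch (not the authors' released code) of a coordinate MLP whose hidden layers can be evaluated at a reduced width, so that a narrow prefix of the parameters forms a smaller executable sub-network that could be streamed first. The class name, sine activation, widths, and training loop are all assumptions made for illustration.

```python
# Minimal sketch (not the authors' code): a coordinate MLP whose hidden layers
# can be evaluated at a reduced "width", so narrower sub-networks reuse the
# leading slice of the full network's parameters (parameter sharing).
import torch
import torch.nn as nn


class SliceableMLP(nn.Module):
    def __init__(self, in_dim=2, out_dim=3, hidden=256, depth=4):
        super().__init__()
        dims = [in_dim] + [hidden] * depth + [out_dim]
        self.layers = nn.ModuleList(nn.Linear(d_in, d_out)
                                    for d_in, d_out in zip(dims[:-1], dims[1:]))
        self.hidden = hidden

    def forward(self, x, width=None):
        """Evaluate only the first `width` hidden units of every hidden layer."""
        w = self.hidden if width is None else width
        for i, layer in enumerate(self.layers):
            W, b = layer.weight, layer.bias
            # Slice output units except for the final layer, and input units
            # except for the first layer, so all sub-networks share parameters.
            out_slice = W.shape[0] if i == len(self.layers) - 1 else w
            in_slice = W.shape[1] if i == 0 else w
            x = torch.nn.functional.linear(x, W[:out_slice, :in_slice], b[:out_slice])
            if i < len(self.layers) - 1:
                x = torch.sin(x)  # SIREN-style activation, an assumption here
        return x


# Hypothetical progressive training loop: supervise several sub-network widths so
# each prefix of the parameters is an executable model of lower quality.
model = SliceableMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
coords = torch.rand(1024, 2) * 2 - 1          # dummy 2D coordinates in [-1, 1]
target = torch.rand(1024, 3)                  # dummy RGB values
for step in range(100):
    loss = sum(((model(coords, width=w) - target) ** 2).mean()
               for w in (64, 128, 256))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this toy setup, streaming the first 64 hidden units of every layer already yields a runnable low-quality model, and later-arriving parameter slices only refine it.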
Related papers
- Neural Network Parameter Diffusion [50.85251415173792]
Diffusion models have achieved remarkable success in image and video generation.
In this work, we demonstrate that diffusion models can also generate high-performing neural network parameters.
arXiv Detail & Related papers (2024-02-20T16:59:03Z)
- Federated Multi-View Synthesizing for Metaverse [52.59476179535153]
The metaverse is expected to provide immersive entertainment, education, and business applications.
Virtual reality (VR) transmission over wireless networks is data- and computation-intensive.
We have developed a novel multi-view synthesizing framework that can efficiently provide synthesizing, storage, and communication resources for wireless content delivery in the metaverse.
arXiv Detail & Related papers (2023-12-18T13:51:56Z)
- Physical-Layer Semantic-Aware Network for Zero-Shot Wireless Sensing [74.12670841657038]
Device-free wireless sensing has recently attracted significant interest due to its potential to support a wide range of immersive human-machine interactive applications.
Data heterogeneity in wireless signals and data privacy regulation of distributed sensing have been considered as the major challenges that hinder the wide applications of wireless sensing in large area networking systems.
We propose a novel zero-shot wireless sensing solution that allows models constructed in one or a limited number of locations to be directly transferred to other locations without any labeled data.
arXiv Detail & Related papers (2023-12-08T13:50:30Z)
- ResFields: Residual Neural Fields for Spatiotemporal Signals [61.44420761752655]
ResFields is a novel class of networks specifically designed to effectively represent complex temporal signals.
We conduct a comprehensive analysis of the properties of ResFields and propose a matrix factorization technique to reduce the number of trainable parameters.
We demonstrate the practical utility of ResFields by showcasing its effectiveness in capturing dynamic 3D scenes from sparse RGBD cameras.
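As a rough illustration of the idea (a sketch under assumptions, not the official ResFields code), the snippet below adds a time-indexed, low-rank residual to a linear layer's weights, so per-frame specialization costs only a small coefficient vector plus a shared basis; the factorization form, rank, and names are assumptions.

```python
# Minimal sketch (an assumption, not the official ResFields code): a linear layer
# whose weights receive a time-dependent, low-rank residual,
#   W(t) = W + sum_k c_k(t) * u_k v_k^T,
# so temporal specialization adds only a few trainable parameters.
import torch
import torch.nn as nn


class ResidualFieldLinear(nn.Module):
    def __init__(self, in_dim, out_dim, num_frames, rank=8):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        # Per-frame coefficients and a shared low-rank basis for the residual.
        self.coeff = nn.Parameter(torch.zeros(num_frames, rank))
        self.u = nn.Parameter(torch.randn(rank, out_dim) * 0.01)
        self.v = nn.Parameter(torch.randn(rank, in_dim) * 0.01)

    def forward(self, x, frame_idx):
        c = self.coeff[frame_idx]                      # (rank,) coefficients for this frame
        delta_w = torch.einsum('r,ro,ri->oi', c, self.u, self.v)
        w = self.base.weight + delta_w                 # (out_dim, in_dim)
        return torch.nn.functional.linear(x, w, self.base.bias)


layer = ResidualFieldLinear(in_dim=3, out_dim=64, num_frames=100)
x = torch.rand(4096, 3)                                # e.g. xyz coordinates
y = layer(x, frame_idx=10)                             # weights specialized to frame 10
```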
arXiv Detail & Related papers (2023-09-06T16:59:36Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
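A minimal sketch of the data-preparation step implied here (an illustration, not the paper's pipeline): flatten each saved checkpoint of a fixed architecture into a parameter vector, stack the vectors into a dataset that a generative model could be trained on, and load a sampled vector back into a network. All names and the toy "checkpoints" below are hypothetical.

```python
# Minimal sketch (an illustration, not the paper's pipeline): turn saved model
# checkpoints into flat parameter vectors that a generative model can be trained on.
import torch
import torch.nn as nn


def make_model():
    # Small fixed architecture so every checkpoint flattens to the same length.
    return nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))


def flatten_checkpoint(state_dict):
    return torch.cat([p.reshape(-1) for p in state_dict.values()])


def unflatten_into(model, vector):
    offset = 0
    with torch.no_grad():
        for p in model.parameters():
            n = p.numel()
            p.copy_(vector[offset:offset + n].view_as(p))
            offset += n
    return model


# Hypothetical "checkpoint dataset": each row is one trained network's parameters.
checkpoints = [make_model().state_dict() for _ in range(8)]   # stand-ins for real runs
param_matrix = torch.stack([flatten_checkpoint(sd) for sd in checkpoints])
print(param_matrix.shape)  # (num_checkpoints, num_parameters)

# A generative model would be trained on `param_matrix`; a sampled vector is
# loaded back into a network with `unflatten_into`.
sampled = param_matrix.mean(dim=0)          # placeholder for a generated sample
generated_net = unflatten_into(make_model(), sampled)
```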
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue of representing general signals.
The current approach is difficult to scale to a large number of signals or to a large dataset.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
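For intuition only, here is a heavily simplified sketch of what a sparse implicit neural representation can look like, using magnitude pruning on a dense coordinate MLP; the meta-learning loop the paper uses to learn the sparse structure and initialization across many signals is omitted, and all details below are assumptions.

```python
# Minimal sketch (assumption-laden; the meta-learning loop from the paper is
# omitted): sparsify a dense coordinate MLP by magnitude pruning, keeping only
# a fraction of the weights as the "sparse implicit neural representation".
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

inr = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 3))

# Prune 90% of each linear layer's weights by magnitude; the surviving sparse
# mask (plus a meta-learned initialization, in the paper) is what gets adapted
# to each new signal.
for module in inr:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)

coords = torch.rand(256, 2)
rgb = inr(coords)                      # forward pass uses the masked weights
zeroed = sum((m.weight == 0).sum().item() for m in inr if isinstance(m, nn.Linear))
print("zeroed weights:", zeroed)
```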
arXiv Detail & Related papers (2021-10-27T18:02:53Z)
- ACORN: Adaptive Coordinate Networks for Neural Scene Representation [40.04760307540698]
Current neural representations fail to accurately represent images at resolutions greater than a megapixel or 3D scenes with more than a few hundred thousand polygons.
We introduce a new hybrid implicit-explicit network architecture and training strategy that adaptively allocates resources during training and inference.
We demonstrate the first experiments that fit gigapixel images to nearly 40 dB peak signal-to-noise ratio.
arXiv Detail & Related papers (2021-05-06T16:21:38Z)
- Deep Multimodal Transfer-Learned Regression in Data-Poor Domains [0.0]
We propose a Deep Multimodal Transfer-Learned Regressor (DMTL-R) for multimodal learning of image and feature data.
Our model is capable of fine-tuning a given set of pre-trained CNN weights on a small amount of training image data.
We present results using phase-field simulation microstructure images with an accompanying set of physical features, using pre-trained weights from various well-known CNN architectures.
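A minimal sketch consistent with this description (assumed architecture and names, not the paper's exact model): a pretrained CNN backbone embeds the image, a small MLP embeds the accompanying physical features, and a joint head performs the regression.

```python
# Minimal sketch (an assumption, not the paper's exact architecture): a pretrained
# CNN backbone extracts image features, a small MLP embeds the physical features,
# and a joint head regresses the target from the concatenated representation.
import torch
import torch.nn as nn
from torchvision import models


class MultimodalRegressor(nn.Module):
    def __init__(self, num_phys_features=8, out_dim=1):
        super().__init__()
        # Downloads ImageNet weights; these get fine-tuned on the small image set.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()                    # keep the 512-d pooled features
        self.cnn = backbone
        self.phys = nn.Sequential(nn.Linear(num_phys_features, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(512 + 32, 64), nn.ReLU(),
                                  nn.Linear(64, out_dim))

    def forward(self, images, phys_features):
        z = torch.cat([self.cnn(images), self.phys(phys_features)], dim=1)
        return self.head(z)


model = MultimodalRegressor()
images = torch.rand(4, 3, 224, 224)      # e.g. microstructure images
phys = torch.rand(4, 8)                  # accompanying physical features
pred = model(images, phys)               # (4, 1) regression output
```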
arXiv Detail & Related papers (2020-06-16T16:52:44Z)
- Multiresolution Convolutional Autoencoders [5.0169726108025445]
We propose a multi-resolution convolutional autoencoder architecture that integrates and leverages three successful mathematical architectures.
Basic learning techniques are applied to ensure information learned from previous training steps can be rapidly transferred to the larger network.
The performance gains are illustrated through a sequence of numerical experiments on synthetic examples and real-world spatial data.
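As an illustrative sketch of transferring information from a smaller training stage to a larger one (an assumption about the mechanism, not the paper's exact scheme): because a fully convolutional autoencoder's weights are resolution-agnostic, a model trained at coarse resolution can directly initialize the model trained at the next resolution.

```python
# Minimal sketch (an illustrative assumption, not the paper's exact scheme): a small
# convolutional autoencoder trained at coarse resolution seeds the weights of an
# identically shaped model that is then trained on higher-resolution inputs.
import torch
import torch.nn as nn


def conv_autoencoder():
    # Fully convolutional, so the same weights apply at any input resolution.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
    )


coarse = conv_autoencoder()
coarse_batch = torch.rand(8, 1, 32, 32)          # low-resolution training data
# ... train `coarse` here ...

fine = conv_autoencoder()
fine.load_state_dict(coarse.state_dict())        # transfer what the coarse model learned
fine_batch = torch.rand(8, 1, 128, 128)          # higher-resolution data
recon = fine(fine_batch)                         # shape (8, 1, 128, 128)
```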
arXiv Detail & Related papers (2020-04-10T08:31:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.