WIRE: Wavelet Implicit Neural Representations
- URL: http://arxiv.org/abs/2301.05187v1
- Date: Thu, 5 Jan 2023 20:24:56 GMT
- Title: WIRE: Wavelet Implicit Neural Representations
- Authors: Vishwanath Saragadam, Daniel LeJeune, Jasper Tan, Guha Balakrishnan,
Ashok Veeraraghavan, Richard G. Baraniuk
- Abstract summary: Implicit neural representations (INRs) have recently advanced numerous vision-related areas.
Current INRs designed to have high accuracy also suffer from poor robustness.
We develop a new, highly accurate and robust INR that does not exhibit this tradeoff.
- Score: 42.147899723673596
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit neural representations (INRs) have recently advanced numerous
vision-related areas. INR performance depends strongly on the choice of the
nonlinear activation function employed in its multilayer perceptron (MLP)
network. A wide range of nonlinearities have been explored, but, unfortunately,
current INRs designed to have high accuracy also suffer from poor robustness
(to signal noise, parameter variation, etc.). Inspired by harmonic analysis, we
develop a new, highly accurate and robust INR that does not exhibit this
tradeoff. Wavelet Implicit neural REpresentation (WIRE) uses a continuous
complex Gabor wavelet activation function that is well-known to be optimally
concentrated in space-frequency and to have excellent biases for representing
images. A wide range of experiments (image denoising, image inpainting,
super-resolution, computed tomography reconstruction, image overfitting, and
novel view synthesis with neural radiance fields) demonstrate that WIRE defines
the new state of the art in INR accuracy, training time, and robustness.
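The continuous complex Gabor wavelet activation described above can be sketched in a few lines of NumPy; note that the hyperparameter names omega0 (center frequency) and s0 (spread) below are illustrative assumptions, not necessarily the paper's exact notation or values:

```python
import numpy as np

def gabor_wavelet(x, omega0=10.0, s0=10.0):
    """Continuous complex Gabor wavelet activation:
    psi(x) = exp(j * omega0 * x) * exp(-(s0 * x)**2).
    The complex exponential sets the center frequency, while the
    Gaussian envelope concentrates the response near x = 0, giving
    the space-frequency concentration the abstract refers to."""
    return np.exp(1j * omega0 * x) * np.exp(-(s0 * x) ** 2)

# The response is complex-valued, has unit magnitude at the origin,
# and decays rapidly away from it.
x = np.linspace(-1.0, 1.0, 101)
y = gabor_wavelet(x)
```

In an INR, this function would replace the ReLU or sine nonlinearity applied after each linear layer of the MLP.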
Related papers
- Towards a Sampling Theory for Implicit Neural Representations [0.3222802562733786]
Implicit neural representations (INRs) have emerged as a powerful tool for solving inverse problems in computer and computational imaging.
We show how to recover images from a hidden-layer INR using a generalized form of weight decay regularization.
We empirically assess the probability of achieving exact recovery of images realized by low-width single-layer INRs, and illustrate the performance of INRs on super-resolution recovery of more realistic continuous-domain phantom images.
arXiv Detail & Related papers (2024-05-28T17:53:47Z)
- Spatiotemporal implicit neural representation for unsupervised dynamic MRI reconstruction [11.661657147506519]
Implicit neural representation (INR) has emerged as a powerful deep-learning-based tool for solving inverse problems.
In this work, we propose an INR-based method to improve dynamic MRI reconstruction from highly undersampled k-space data.
The proposed INR represents the dynamic MRI images as an implicit function and encodes them into neural networks.
arXiv Detail & Related papers (2022-12-31T05:43:21Z)
- A scan-specific unsupervised method for parallel MRI reconstruction via implicit neural representation [9.388253054229155]
Implicit neural representation (INR) has emerged as a new deep learning paradigm for learning the internal continuity of an object.
The proposed method outperforms existing methods by suppressing aliasing artifacts and noise.
The high-quality results and scan specificity give the proposed method the potential to further accelerate data acquisition in parallel MRI.
arXiv Detail & Related papers (2022-10-19T10:16:03Z)
- SiNeRF: Sinusoidal Neural Radiance Fields for Joint Pose Estimation and Scene Reconstruction [147.9379707578091]
NeRFmm is a Neural Radiance Fields (NeRF) variant that handles joint optimization of camera poses and scene reconstruction.
Although NeRFmm produces precise scene synthesis and pose estimates, it still struggles to outperform the fully annotated baseline on challenging scenes.
We propose Sinusoidal Neural Radiance Fields (SiNeRF), which leverage sinusoidal activations for radiance mapping and a novel Mixed Region Sampling (MRS) strategy for efficient ray batch selection.
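As a rough illustration of the sinusoidal activations SiNeRF builds on, a single SIREN-style layer computes sin(omega0 * (xW + b)); the value omega0 = 30 and the layer sizes below are illustrative assumptions, not SiNeRF's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sine_layer(x, w, b, omega0=30.0):
    """One sinusoidal layer: sin(omega0 * (x @ w + b)).
    The frequency scale omega0 lets the network represent
    high-frequency signal content that a ReLU MLP learns slowly."""
    return np.sin(omega0 * (x @ w + b))

# Forward pass on a small batch of 2D coordinates (e.g., pixel locations).
coords = rng.uniform(-1.0, 1.0, size=(4, 2))
w1 = rng.uniform(-0.5, 0.5, size=(2, 8))
b1 = np.zeros(8)
h = sine_layer(coords, w1, b1)  # shape (4, 8), values in [-1, 1]
```

Stacking several such layers, with a final linear layer mapping to radiance and density, gives the basic radiance-mapping structure.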
arXiv Detail & Related papers (2022-10-10T10:47:51Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the $\Pi$-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Deep Impulse Responses: Estimating and Parameterizing Filters with Deep Networks [76.830358429947]
Impulse response estimation in high noise and in-the-wild settings is a challenging problem.
We propose a novel framework for parameterizing and estimating impulse responses based on recent advances in neural representation learning.
arXiv Detail & Related papers (2022-02-07T18:57:23Z)
- Exploring Inter-frequency Guidance of Image for Lightweight Gaussian Denoising [1.52292571922932]
We propose a novel network architecture, denoted IGNet, that refines the frequency bands from low to high in a progressive manner.
With this design, more inter-frequency priors and information are utilized, so the model size can be reduced while still preserving competitive results.
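IGNet's architecture is not reproduced here, but the low-to-high frequency-band decomposition such progressive methods build on can be illustrated with a toy moving-average split; the kernel size is an arbitrary choice for the sketch:

```python
import numpy as np

def split_bands(signal, kernel_size=5):
    """Split a 1D signal into a low-frequency band (moving-average
    lowpass) and a high-frequency residual. Because the residual is
    defined as signal minus lowpass, summing the two bands
    reconstructs the signal exactly."""
    kernel = np.ones(kernel_size) / kernel_size
    low = np.convolve(signal, kernel, mode="same")
    high = signal - low
    return low, high

# A noisy sinusoid: the low band captures the smooth component,
# the high band the fine detail and noise.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 2 * np.pi, 64)) + 0.1 * rng.standard_normal(64)
low, high = split_bands(x)
```

A progressive denoiser would then process (and refine) the low band first, using it to guide the treatment of the higher band.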
arXiv Detail & Related papers (2021-12-22T10:35:53Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- Towards Lightweight Controllable Audio Synthesis with Conditional Implicit Neural Representations [10.484851004093919]
Implicit neural representations (INRs) are neural networks used to approximate functions of low-dimensional inputs.
In this work we shed light on the potential of Conditional Implicit Neural Representations (CINRs) as lightweight backbones in generative frameworks for audio synthesis.
arXiv Detail & Related papers (2021-11-14T13:36:18Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained on multi-channel data from the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its information and is not responsible for any consequences of its use.