NeRD: Neural field-based Demosaicking
- URL: http://arxiv.org/abs/2304.06566v1
- Date: Thu, 13 Apr 2023 14:25:05 GMT
- Title: NeRD: Neural field-based Demosaicking
- Authors: Tomas Kerepecky, Filip Sroubek, Adam Novozamsky, Jan Flusser
- Abstract summary: NeRD is a new demosaicking method for generating full-color images from Bayer patterns.
It leverages advancements in neural fields, performing demosaicking by representing an image as a coordinate-based neural network with sine activation functions.
- Score: 10.791425064370511
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce NeRD, a new demosaicking method for generating full-color images
from Bayer patterns. Our approach leverages advancements in neural fields to
perform demosaicking by representing an image as a coordinate-based neural
network with sine activation functions. The inputs to the network are spatial
coordinates and a low-resolution Bayer pattern, while the outputs are the
corresponding RGB values. An encoder network, which is a blend of ResNet and
U-net, enhances the implicit neural representation of the image to improve its
quality and ensure spatial consistency through prior learning. Our experimental
results demonstrate that NeRD outperforms traditional and state-of-the-art
CNN-based methods and significantly closes the gap to transformer-based
methods.
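As a rough sketch of the core mechanism, the PyTorch snippet below implements a SIREN-style coordinate network that maps spatial coordinates, concatenated with features derived from the Bayer input, to RGB values. The layer sizes, the omega_0 frequency, and the feature-sampling step are illustrative assumptions; the paper's ResNet/U-Net encoder is reduced here to a stand-in tensor.
```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation, as in SIREN."""
    def __init__(self, in_f, out_f, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_f, out_f)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class NeRDSketch(nn.Module):
    """Coordinate field: (x, y) + Bayer-derived features -> RGB.

    In the paper the conditioning comes from a ResNet/U-Net blend
    encoder; here the features are simply passed in by the caller,
    which is an assumption made for illustration.
    """
    def __init__(self, feat_dim=16, hidden=256, layers=4):
        super().__init__()
        dims = [2 + feat_dim] + [hidden] * layers
        self.body = nn.Sequential(
            *[SineLayer(dims[i], dims[i + 1]) for i in range(layers)]
        )
        self.head = nn.Linear(hidden, 3)  # RGB output

    def forward(self, coords, feats):
        # coords: (N, 2) in [-1, 1]; feats: (N, feat_dim) sampled
        # from the low-resolution Bayer pattern at those coordinates.
        return self.head(self.body(torch.cat([coords, feats], dim=-1)))

# Query the field at every pixel of a 64x64 output grid.
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 64),
                        torch.linspace(-1, 1, 64), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
feats = torch.randn(coords.shape[0], 16)  # stand-in for encoder output
rgb = NeRDSketch()(coords, feats)         # (4096, 3)
```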
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
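A toy version of the graph construction this entry describes, assuming the simplest possible featurization: neurons become nodes and each weight becomes a directed, attributed edge, so a single GNN can consume networks of different shapes. The paper's actual node and edge features are richer; bias terms and positional encodings are omitted here.
```python
import torch
import torch.nn as nn

def mlp_to_graph(mlp):
    """Turn an MLP into (edge_index, edge_attr): neurons are nodes,
    weights are directed edges carrying the weight value as a feature."""
    edges, attrs, offset = [], [], 0
    for layer in mlp:
        if not isinstance(layer, nn.Linear):
            continue
        out_f, in_f = layer.weight.shape
        for i in range(in_f):
            for j in range(out_f):
                edges.append((offset + i, offset + in_f + j))
                attrs.append(layer.weight[j, i].item())
        offset += in_f
    edge_index = torch.tensor(edges, dtype=torch.long).t()  # (2, E)
    edge_attr = torch.tensor(attrs).unsqueeze(-1)           # (E, 1)
    return edge_index, edge_attr

net = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2))
edge_index, edge_attr = mlp_to_graph(net)
print(edge_index.shape, edge_attr.shape)  # (2, 20), (20, 1)
```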
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
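The summary names a Distance-based Weighted Transformer but not its mechanics; the sketch below shows one plausible reading, in which self-attention logits are penalized by pairwise spatial distance through a learned strength parameter. This is an assumption, not the paper's exact block.
```python
import torch
import torch.nn as nn

class DistanceWeightedAttention(nn.Module):
    """Self-attention whose logits are penalized by pairwise spatial
    distance between token positions (one guess at 'distance-based
    weighting'; not the paper's exact formulation)."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.scale = dim ** -0.5
        self.alpha = nn.Parameter(torch.tensor(1.0))  # distance strength

    def forward(self, x, pos):
        # x: (N, dim) tokens; pos: (N, 2) spatial positions of tokens.
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        logits = (q @ k.t()) * self.scale
        dist = torch.cdist(pos, pos)         # (N, N) Euclidean distances
        logits = logits - self.alpha * dist  # nearer tokens weigh more
        return torch.softmax(logits, dim=-1) @ v

tokens = torch.randn(16, 32)
pos = torch.rand(16, 2)
out = DistanceWeightedAttention(32)(tokens, pos)  # (16, 32)
```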
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
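To make the maximum-likelihood claim concrete: with an invertible architecture, the change-of-variables formula yields an exact negative log-likelihood from the latent density plus the Jacobian log-determinant. The conditional affine coupling below is a generic normalizing-flow building block, not the authors' specific cINN.
```python
import torch
import torch.nn as nn

class CondAffineCoupling(nn.Module):
    """Conditional affine coupling: invertible by construction, with a
    triangular Jacobian whose log-determinant is just sum(log_scale)."""
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, x, c):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, c], -1)).chunk(2, -1)
        s = torch.tanh(s)                    # keep scales bounded
        z2 = x2 * torch.exp(s) + t
        return torch.cat([x1, z2], -1), s.sum(-1)  # z, log|det J|

# Maximum-likelihood loss under a standard-normal latent prior:
x, c = torch.randn(8, 4), torch.randn(8, 3)
layer = CondAffineCoupling(4, 3)
z, logdet = layer(x, c)
nll = (0.5 * z.pow(2).sum(-1) - logdet).mean()
nll.backward()
```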
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
- NeRN -- Learning Neural Representations for Neural Networks [3.7384109981836153]
We show that, when adapted correctly, neural representations can be used to represent the weights of a pre-trained convolutional neural network.
Inspired by coordinate inputs of previous neural representation methods, we assign a coordinate to each convolutional kernel in our network.
We present two applications using NeRN, demonstrating the capabilities of the learned representations.
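A minimal sketch of the coordinate idea above: every convolutional kernel gets a (layer, output channel, input channel) coordinate, and an MLP is trained to map coordinates to that kernel's flattened weights. Positional encodings and NeRN's distillation losses are omitted; the sizes are assumptions.
```python
import torch
import torch.nn as nn

class NeRNSketch(nn.Module):
    """MLP mapping a kernel coordinate (layer, out_ch, in_ch) to the
    flattened k x k weights of that kernel (toy version of NeRN)."""
    def __init__(self, k=3, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, k * k))

    def forward(self, coords):      # coords: (N, 3), normalized
        return self.net(coords)     # (N, k*k) predicted kernel weights

# Reconstruct all kernels of a small conv layer from coordinates.
conv = nn.Conv2d(4, 8, 3)
l, oc, ic = torch.meshgrid(torch.zeros(1),
                           torch.arange(8.0) / 8,
                           torch.arange(4.0) / 4, indexing="ij")
coords = torch.stack([l, oc, ic], -1).reshape(-1, 3)  # (32, 3)
pred = NeRNSketch()(coords)                           # (32, 9)
target = conv.weight.detach().reshape(32, 9)
loss = nn.functional.mse_loss(pred, target)           # fit the weights
```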
arXiv Detail & Related papers (2022-12-27T17:14:44Z)
- JSRNN: Joint Sampling and Reconstruction Neural Networks for High Quality Image Compressed Sensing [8.902545322578925]
The proposed framework comprises two sub-networks: a sampling sub-network and a reconstruction sub-network.
In the reconstruction sub-network, a cascade network combining a stacked denoising autoencoder (SDA) and a convolutional neural network (CNN) is designed to reconstruct signals.
This framework outperforms many other state-of-the-art methods, especially at low sampling rates.
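A compact sketch of this two-stage structure: a learned linear layer plays the sampling sub-network, and the reconstruction sub-network is approximated by an autoencoder-style MLP followed by a small CNN refinement. The block size, sampling rate, and layer choices are assumptions rather than the paper's design.
```python
import torch
import torch.nn as nn

class JSRNNSketch(nn.Module):
    """Joint sampling + reconstruction on 16x16 blocks at a 10% rate."""
    def __init__(self, block=16, rate=0.1):
        super().__init__()
        n, m = block * block, int(block * block * rate)
        self.block = block
        self.sample = nn.Linear(n, m, bias=False)  # learned measurements
        self.init_rec = nn.Sequential(             # SDA-like initial guess
            nn.Linear(m, n), nn.ReLU(), nn.Linear(n, n))
        self.refine = nn.Sequential(                # CNN refinement
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, blocks):                      # blocks: (N, 256)
        y = self.sample(blocks)                     # (N, 25) measurements
        x0 = self.init_rec(y).view(-1, 1, self.block, self.block)
        return x0 + self.refine(x0)                 # residual refinement

x = torch.rand(8, 256)                              # eight 16x16 blocks
recon = JSRNNSketch()(x)                            # (8, 1, 16, 16)
```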
arXiv Detail & Related papers (2022-11-11T02:20:30Z)
- Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
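The enabling observation is that derivatives of an INR with respect to its coordinate inputs are exact and available through autograd, so operators can act on the continuous signal without discretizing it. The sketch below computes a spatial gradient this way; it illustrates the principle, not INSP-Net's operator networks.
```python
import torch
import torch.nn as nn

# A toy INR: coordinates (x, y) -> scalar intensity.
inr = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

coords = torch.rand(128, 2, requires_grad=True)
out = inr(coords)

# Exact spatial gradient d(out)/d(coords) via autograd -- no
# discretization of the signal is ever needed.
grad, = torch.autograd.grad(out.sum(), coords, create_graph=True)
print(grad.shape)  # (128, 2): (d/dx, d/dy) at each queried point
```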
arXiv Detail & Related papers (2022-10-17T06:29:07Z)
- Sobolev Training for Implicit Neural Representations with Approximated Image Derivatives [12.71676484494428]
Implicit Neural Representations (INRs) parameterized by neural networks have emerged as a powerful tool to represent different kinds of signals.
We propose a training paradigm for INRs whose target output is image pixels, to encode image derivatives in addition to image values in the neural network.
We show how the training paradigm can be leveraged to solve typical INR problems, i.e., image regression and inverse rendering.
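A minimal sketch of such a Sobolev-style objective, assuming the image derivatives have been pre-approximated (e.g., by finite differences or Sobel filters): the loss matches both pixel values and the network's autograd derivatives at the sampled coordinates.
```python
import torch
import torch.nn as nn

inr = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

coords = torch.rand(256, 2, requires_grad=True)
target_val = torch.rand(256, 1)    # ground-truth pixel values
target_grad = torch.randn(256, 2)  # approximated image derivatives,
                                   # e.g. from finite differences

pred = inr(coords)
pred_grad, = torch.autograd.grad(pred.sum(), coords, create_graph=True)

# Sobolev-style loss: match values AND first derivatives.
loss = nn.functional.mse_loss(pred, target_val) \
     + 0.1 * nn.functional.mse_loss(pred_grad, target_grad)
loss.backward()
```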
arXiv Detail & Related papers (2022-07-21T10:12:41Z)
- Understanding the Influence of Receptive Field and Network Complexity in Neural-Network-Guided TEM Image Analysis [0.0]
We systematically examine how neural network architecture choices affect how neural networks segment nanoparticles in transmission electron microscopy (TEM) images.
We find that for low-resolution TEM images which rely on amplitude contrast to distinguish nanoparticles from background, the receptive field does not significantly influence segmentation performance.
On the other hand, for high-resolution TEM images which rely on a combination of amplitude and phase contrast changes to identify nanoparticles, receptive field is a key parameter for increased performance.
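Since the finding hinges on receptive field size, a small helper for computing it may be useful context; it applies the standard recurrence r += (k - 1) * jump, with the jump multiplied by each layer's stride.
```python
def receptive_field(layers):
    """Receptive field of stacked convs given (kernel, stride) pairs."""
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump  # each layer widens the field by (k-1)*jump
        jump *= s            # stride compounds the effective step size
    return r

# Three 3x3 convs with stride 1 see a 7x7 region; with stride-2
# downsampling the field grows much faster.
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # 7
print(receptive_field([(3, 2), (3, 2), (3, 2)]))  # 15
```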
arXiv Detail & Related papers (2022-04-08T18:45:15Z)
- Spatial Dependency Networks: Neural Layers for Improved Generative Image Modeling [79.15521784128102]
We introduce a novel neural network for building image generators (decoders) and apply it to variational autoencoders (VAEs).
In our spatial dependency networks (SDNs), feature maps at each level of a deep neural net are computed in a spatially coherent way.
We show that augmenting the decoder of a hierarchical VAE with spatial dependency layers considerably improves density estimation.
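The summary leaves the layer's mechanics open; the sketch below is one speculative reading, in which a recurrent sweep makes each spatial position depend on its predecessor within a row. It is not the authors' SDN layer, only an illustration of computing feature maps in a spatially coherent way.
```python
import torch
import torch.nn as nn

class SpatialDependencySketch(nn.Module):
    """One guess at 'spatially coherent' feature maps: sweep a GRU
    left-to-right across each row so every position is computed from
    its predecessor (not the authors' exact SDN layer)."""
    def __init__(self, channels):
        super().__init__()
        self.rnn = nn.GRU(channels, channels, batch_first=True)

    def forward(self, fmap):                 # fmap: (B, C, H, W)
        b, c, h, w = fmap.shape
        rows = fmap.permute(0, 2, 3, 1).reshape(b * h, w, c)
        out, _ = self.rnn(rows)              # causal along each row
        return out.reshape(b, h, w, c).permute(0, 3, 1, 2)

fmap = torch.randn(2, 8, 16, 16)
out = SpatialDependencySketch(8)(fmap)       # same shape, row-coherent
```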
arXiv Detail & Related papers (2021-03-16T07:01:08Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
- ODE-based Deep Network for MRI Reconstruction [1.569044447685249]
We propose an ODE-based deep network for MRI reconstruction to enable the rapid acquisition of MR images with improved image quality.
Our results with undersampled data demonstrate that our method delivers higher-quality images than reconstruction methods based on a standard U-Net or residual network.
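As a rough illustration of an ODE-based block: treat a small conv net as the derivative function dh/dt = f(h) and integrate it with fixed-step Euler, the simplest stand-in for the solvers used in neural-ODE reconstruction. The MRI-specific data-consistency steps such a pipeline would include are omitted.
```python
import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    """Euler-integrated neural ODE block: dh/dt = f(h)."""
    def __init__(self, channels, steps=4):
        super().__init__()
        self.steps = steps
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1))

    def forward(self, h):
        dt = 1.0 / self.steps
        for _ in range(self.steps):  # h <- h + dt * f(h)
            h = h + dt * self.f(h)
        return h

x = torch.randn(1, 16, 32, 32)       # e.g. image feature maps
y = ODEBlock(16)(x)                  # same shape, ODE-evolved
```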
arXiv Detail & Related papers (2019-12-27T20:13:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.