A Sampling Theory Perspective on Activations for Implicit Neural
Representations
- URL: http://arxiv.org/abs/2402.05427v1
- Date: Thu, 8 Feb 2024 05:52:45 GMT
- Title: A Sampling Theory Perspective on Activations for Implicit Neural
Representations
- Authors: Hemanth Saratchandran, Sameera Ramasinghe, Violetta Shevchenko,
Alexander Long, Simon Lucey
- Abstract summary: Implicit Neural Representations (INRs) have gained popularity for encoding signals as compact, differentiable entities.
We conduct a comprehensive analysis of these activations from a sampling theory perspective.
Our investigation reveals that sinc activations, previously unused in conjunction with INRs, are theoretically optimal for signal encoding.
- Score: 73.6637608397055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit Neural Representations (INRs) have gained popularity for encoding
signals as compact, differentiable entities. While INRs commonly use techniques
like Fourier positional encodings or non-traditional activation functions
(e.g., Gaussian, sinusoid, or wavelets) to capture high-frequency content, the
properties of these techniques have not been explored within a unified
theoretical framework.
Addressing this gap, we conduct a comprehensive analysis of these activations
from a sampling theory perspective. Our investigation reveals that sinc
activations, previously unused in conjunction with INRs, are theoretically
optimal for signal encoding. Additionally, we establish a connection between
dynamical systems and INRs, leveraging sampling theory to bridge these two
paradigms.
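Since sinc is the ideal interpolation kernel of Shannon's sampling theorem, a natural way to act on the paper's finding is a sinc-activated coordinate MLP. Below is a minimal PyTorch sketch, not the authors' code; the layer name and the frequency scale `omega_0` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SincLayer(nn.Module):
    """Linear layer followed by a sinc activation.

    torch.sinc(x) computes sin(pi * x) / (pi * x) with sinc(0) = 1, i.e. the
    ideal low-pass interpolation kernel from Shannon sampling theory.
    """

    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.omega_0 = omega_0  # frequency scale, analogous to SIREN's w0

    def forward(self, x):
        # Dividing by pi makes the effective activation sin(w*z) / (w*z).
        return torch.sinc(self.omega_0 * self.linear(x) / torch.pi)


# A coordinate-based INR: maps (x, y) in [-1, 1]^2 to an RGB value.
inr = nn.Sequential(SincLayer(2, 256), SincLayer(256, 256), nn.Linear(256, 3))
coords = torch.rand(1024, 2) * 2 - 1
print(inr(coords).shape)  # torch.Size([1024, 3])
```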
Related papers
- Exploring the Low-Pass Filtering Behavior in Image Super-Resolution [13.841859411005737]
In this paper, we attempt to interpret the behavior of deep neural networks in image super-resolution.
We propose a method named Hybrid Response Analysis (HyRA) to analyze the behavior of neural networks in ISR tasks.
Finally, to quantify the injected high-frequency information, we introduce a metric for image-to-image tasks called Frequency Spectrum Distribution Similarity (FSDS).
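The exact definition of FSDS is not given in this summary; as a rough, hypothetical stand-in for comparing frequency content in image-to-image tasks, one can contrast radially averaged power spectra (all names below are assumptions, not the authors' metric):

```python
import numpy as np


def radial_power_spectrum(img):
    """Radially averaged power spectrum of a 2D grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), power.ravel())
    return sums / np.maximum(counts, 1)  # mean power per radius bin


def spectrum_similarity(img_a, img_b):
    """Cosine similarity of two spectrum distributions (illustrative only)."""
    a, b = radial_power_spectrum(img_a), radial_power_spectrum(img_b)
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```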
arXiv Detail & Related papers (2024-05-13T16:50:42Z)
- Fourier-enhanced Implicit Neural Fusion Network for Multispectral and Hyperspectral Image Fusion [12.935592400092712]
Implicit neural representations (INR) have made significant strides in various vision-related domains.
However, INR is prone to losing high-frequency information and lacks global perceptual capabilities.
This paper introduces a Fourier-enhanced Implicit Neural Fusion Network (FeINFN) specifically designed for the MHIF task.
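FeINFN's architecture is not detailed in this summary; the standard remedy for the high-frequency loss mentioned above is a random Fourier feature encoding of the input coordinates, sketched generically below (function names and defaults are assumptions):

```python
import numpy as np


def fourier_features(coords, num_freqs=64, scale=10.0, seed=None):
    """Random Fourier feature encoding of coordinates.

    Lifts low-dimensional coordinates into a space of sines and cosines so
    that a downstream MLP can fit high-frequency content more easily.
    """
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, scale, size=(coords.shape[-1], num_freqs))
    proj = 2.0 * np.pi * coords @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)


xy = np.random.rand(100, 2)        # 100 2-D coordinates
print(fourier_features(xy).shape)  # (100, 128)
```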
arXiv Detail & Related papers (2024-04-23T16:14:20Z)
- Theoretical Bound-Guided Hierarchical VAE for Neural Image Codecs [11.729071258457138]
Recent studies reveal a significant theoretical link between variational autoencoders (VAEs) and rate-distortion theory.
VAEs estimate the theoretical upper bound of the information rate-distortion function of images.
To narrow this gap, we propose a theoretical bound-guided hierarchical VAE (BG-VAE) for neural image codecs.
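The rate-distortion reading of a VAE is concrete: the KL term plays the role of the rate and the reconstruction error that of the distortion. A generic sketch of this decomposition (not BG-VAE itself; `lam` is an assumed trade-off weight):

```python
import torch
import torch.nn.functional as F


def rate_distortion_loss(x, x_hat, mu, logvar, lam=1.0):
    """Rate-distortion view of the VAE objective (generic sketch).

    rate: KL(q(z|x) || N(0, I)) in nats, an upper bound on the code length.
    distortion: summed squared reconstruction error.
    """
    rate = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    distortion = F.mse_loss(x_hat, x, reduction="sum")
    return rate + lam * distortion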
arXiv Detail & Related papers (2024-03-27T13:11:34Z)
- Towards Training Without Depth Limits: Batch Normalization Without Gradient Explosion [83.90492831583997]
We show that a batch-normalized network can keep the optimal signal propagation properties, but avoid exploding gradients in depth.
We construct a Multi-Layer Perceptron (MLP) with linear activations and batch normalization that provably has bounded gradients at any depth.
We also design an activation shaping scheme that empirically achieves the same properties for certain non-linear activations.
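The bounded-gradient claim can be probed empirically in a few lines; this quick experiment is an assumption-laden sketch, not the authors' setup:

```python
import torch
import torch.nn as nn


def grad_norm_at_depth(depth, width=128):
    """Input-gradient norm of a deep MLP with linear activations + BatchNorm."""
    layers = []
    for _ in range(depth):
        layers += [nn.Linear(width, width), nn.BatchNorm1d(width)]
    net = nn.Sequential(*layers)
    x = torch.randn(64, width, requires_grad=True)
    net(x).pow(2).sum().backward()
    return x.grad.norm().item()


for d in (4, 16, 64):
    print(d, grad_norm_at_depth(d))  # norms should stay bounded in depth
```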
arXiv Detail & Related papers (2023-10-03T12:35:02Z)
- Implicit Neural Representations and the Algebra of Complex Wavelets [36.311212480600794]
Implicit neural representations (INRs) have arisen as useful methods for representing signals on Euclidean domains.
By parameterizing an image as a multilayer perceptron (MLP) on Euclidean space, INRs effectively represent signals in a way that couples the spatial and spectral features of the signal, a coupling that is not obvious in the usual discrete representation.
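As a reminder of what "parameterizing a signal as an MLP" means in practice, here is a minimal INR fitting loop for a 1-D signal (a generic sketch; the tanh architecture is an arbitrary choice):

```python
import torch
import torch.nn as nn

# Fit a tiny INR f_theta: [-1, 1] -> R to samples of a bandlimited signal.
t = torch.linspace(-1, 1, 256).unsqueeze(1)
signal = torch.sin(8 * torch.pi * t)  # ground-truth samples

inr = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for step in range(2000):
    loss = (inr(t) - signal).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A plain tanh MLP like this fits the high-frequency sine slowly, which is precisely the spectral behavior that wavelet and sinusoid activations are designed to improve.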
arXiv Detail & Related papers (2023-10-01T02:01:28Z)
- Modality-Agnostic Variational Compression of Implicit Neural Representations [96.35492043867104]
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
Bridging the gap between latent coding and sparsity, we obtain compact latent representations non-linearly mapped to a soft gating mechanism.
After obtaining a dataset of such latent representations, we directly optimise the rate/distortion trade-off in a modality-agnostic space using neural compression.
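The summary leaves the gating mechanism abstract; one plausible, purely illustrative reading is a latent code mapped through a sigmoid to gate (and thereby sparsify) a modulation vector. All names below are hypothetical:

```python
import torch
import torch.nn as nn


class SoftGatedLatent(nn.Module):
    """Latent code non-linearly mapped to a soft gate (illustrative only).

    Gate entries near zero can be pruned, which is one way a soft gate can
    bridge latent coding and sparsity for compression.
    """

    def __init__(self, latent_dim, num_units):
        super().__init__()
        self.to_gate = nn.Sequential(nn.Linear(latent_dim, num_units),
                                     nn.Sigmoid())

    def forward(self, z, modulations):
        return self.to_gate(z) * modulations


gate = SoftGatedLatent(latent_dim=32, num_units=256)
z, m = torch.randn(8, 32), torch.randn(8, 256)
print(gate(z, m).shape)  # torch.Size([8, 256])
```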
arXiv Detail & Related papers (2023-01-23T15:22:42Z)
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
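In SIREN-style networks the first-layer frequency scale is the natural bandwidth knob; the sketch below is in that style and only approximates the paper's simplified parametrization (the initialization bound is the usual SIREN heuristic, an assumption here):

```python
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    """sin(omega * (Wx + b)); omega acts as the bandwidth knob.

    A larger omega widens the low-pass band the network's kernel can pass,
    matching the kernel-bandwidth reading of sinusoidal networks.
    """

    def __init__(self, in_features, out_features, omega=30.0):
        super().__init__()
        self.omega = omega
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():  # keep pre-activations well-scaled
            bound = (6 / in_features) ** 0.5 / omega
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega * self.linear(x))
```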
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
- A Structured Dictionary Perspective on Implicit Neural Representations [47.35227614605095]
We show that most INR families are analogous to structured signal dictionaries whose atoms are integer harmonics of the set of initial mapping frequencies.
We explore the inductive bias of INRs exploiting recent results about the empirical neural tangent kernel (NTK)
Our results permit the design and tuning of novel INR architectures, but can also be of interest for the wider deep learning theory community.
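The integer-harmonics claim is easy to check numerically: composing sinusoids produces energy only at integer multiples of the inner frequency (a Jacobi-Anger expansion in disguise). A quick FFT probe:

```python
import numpy as np

# A two-layer sinusoidal "INR" evaluated on a 1-D grid: composing sines
# creates spectral peaks only at integer multiples of the input frequency w0.
n, w0 = 4096, 5.0
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
out = np.sin(2.0 * np.sin(w0 * t))  # inner frequency w0, outer gain 2

spectrum = np.abs(np.fft.rfft(out)) / n
peaks = np.nonzero(spectrum > 1e-4)[0]
print(peaks)  # odd multiples of w0: [ 5 15 25 35]
```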
arXiv Detail & Related papers (2021-12-03T14:00:52Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
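A heavily simplified caricature of the control idea: a controller nudges the output toward the target, and the resulting control signal drives local weight updates. Control is injected only at the output and feedback uses the transposed weights, neither of which matches DFC's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8)) * 0.3
W2 = rng.normal(size=(4, 16)) * 0.3
x, target = rng.normal(size=8), rng.normal(size=4)

u = np.zeros(4)                  # feedback control signal
for _ in range(50):              # controller drives the output to the target
    h = np.tanh(W1 @ x)
    y = W2 @ h + u               # control signal injected at the output
    u += 0.2 * (target - y)      # simple integral controller

# Local, controller-derived updates (a caricature of DFC's learning rule).
W2 += 0.1 * np.outer(u, h)
W1 += 0.1 * np.outer((W2.T @ u) * (1 - h**2), x)
print(np.linalg.norm(target - W2 @ np.tanh(W1 @ x)))  # residual error shrinks
```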
arXiv Detail & Related papers (2021-06-15T05:30:17Z)