Multi-Head ReLU Implicit Neural Representation Networks
- URL: http://arxiv.org/abs/2110.03448v1
- Date: Thu, 7 Oct 2021 13:27:35 GMT
- Title: Multi-Head ReLU Implicit Neural Representation Networks
- Authors: Arya Aftab, Alireza Morsali
- Abstract summary: A novel multi-head multi-layer perceptron (MLP) structure is presented for implicit neural representation (INR).
We show that the proposed model does not suffer from the spectral bias of conventional ReLU networks and has superior generalization capabilities.
- Score: 3.04585143845864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a novel multi-head multi-layer perceptron (MLP) structure is
presented for implicit neural representation (INR). Since conventional
rectified linear unit (ReLU) networks are shown to exhibit spectral bias
towards learning low-frequency features of the signal, we aim at mitigating
this defect by taking advantage of the local structure of the signals. To be
more specific, an MLP is used to capture the global features of the underlying
generator function of the desired signal. Then, several heads are utilized to
reconstruct disjoint local features of the signal, and to reduce the
computational complexity, sparse layers are deployed for attaching heads to the
body. Through various experiments, we show that the proposed model does not
suffer from the spectral bias of conventional ReLU networks and has superior
generalization capabilities. Finally, simulation results confirm that the
proposed multi-head structure outperforms existing INR methods with
considerably less computational cost.
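As a rough, hedged illustration of the architecture described in the abstract, the PyTorch sketch below pairs a shared ReLU MLP body (global features) with several small heads, each responsible for a disjoint patch of the signal. The layer widths, number of heads, and the dense head connections are assumptions for illustration; the paper attaches heads to the body through sparse layers.

```python
import torch
import torch.nn as nn

class MultiHeadINR(nn.Module):
    """Sketch: a shared MLP body captures global features of the signal's
    generator function; per-patch heads reconstruct disjoint local regions."""

    def __init__(self, in_dim=2, hidden=256, num_heads=16, head_hidden=32, out_dim=1):
        super().__init__()
        # Shared body (global features).
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One small head per disjoint patch; the paper uses sparse connections
        # between body and heads, dense layers are used here only for brevity.
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden, head_hidden), nn.ReLU(),
                          nn.Linear(head_hidden, out_dim))
            for _ in range(num_heads)
        ])
        self.out_dim = out_dim

    def forward(self, coords, head_idx):
        # coords: (N, in_dim) input coordinates; head_idx: (N,) patch index per coordinate.
        feats = self.body(coords)
        out = coords.new_zeros(coords.shape[0], self.out_dim)
        for h, head in enumerate(self.heads):
            mask = head_idx == h
            if mask.any():
                out[mask] = head(feats[mask])
        return out
```

A training loop would route each coordinate to the head whose patch it falls in, for example by tiling an image into a regular grid of num_heads cells.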
Related papers
- Single-Layer Learnable Activation for Implicit Neural Representation (SL$^{2}$A-INR) [6.572456394600755]
Implicit Neural Representation (INR), which leverages a neural network to transform coordinate inputs into corresponding attributes, has driven significant advances in vision-related domains.
We propose SL$^{2}$A-INR with a single-layer learnable activation function, improving on the effectiveness of traditional ReLU-based MLPs.
Our method achieves superior performance across diverse tasks, including image representation, 3D shape reconstruction, single-image super-resolution, CT reconstruction, and novel view synthesis.
arXiv Detail & Related papers (2024-09-17T02:02:15Z)
- Implicit Neural Representations with Fourier Kolmogorov-Arnold Networks [4.499833362998488]
Implicit neural representations (INRs) use neural networks to provide continuous and resolution-independent representations of complex signals.
The proposed FKAN utilizes learnable activation functions modeled as Fourier series in the first layer to effectively control and learn the task-specific frequency components.
Experimental results show that our proposed FKAN model outperforms three state-of-the-art baseline schemes.
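As a minimal sketch of the idea of a learnable Fourier-series activation in the first layer, assuming an element-wise parameterization and a hypothetical number of harmonics (not the exact FKAN design):

```python
import torch
import torch.nn as nn

class FourierSeriesActivation(nn.Module):
    """Learnable element-wise activation phi(x) = sum_k a_k*sin(k*x) + b_k*cos(k*x);
    the coefficients a_k, b_k are trained with the rest of the network."""

    def __init__(self, num_harmonics=8):
        super().__init__()
        self.register_buffer("k", torch.arange(1, num_harmonics + 1).float())
        self.a = nn.Parameter(0.1 * torch.randn(num_harmonics))
        self.b = nn.Parameter(0.1 * torch.randn(num_harmonics))

    def forward(self, x):
        xk = x.unsqueeze(-1) * self.k                      # (..., features, K)
        return (self.a * torch.sin(xk) + self.b * torch.cos(xk)).sum(dim=-1)

# First layer with the learnable Fourier activation, followed by a plain MLP.
model = nn.Sequential(nn.Linear(2, 64), FourierSeriesActivation(),
                      nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
```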
arXiv Detail & Related papers (2024-09-14T05:53:33Z)
- Implicit Neural Representations and the Algebra of Complex Wavelets [36.311212480600794]
Implicit neural representations (INRs) have arisen as useful methods for representing signals on Euclidean domains.
By parameterizing an image as a multilayer perceptron (MLP) on Euclidean space, INRs effectively couple spatial and spectral features of the signal in a way that is not obvious in the usual discrete representation.
arXiv Detail & Related papers (2023-10-01T02:01:28Z)
- Generalized Activation via Multivariate Projection [46.837481855573145]
Activation functions are essential to introduce nonlinearity into neural networks.
We consider ReLU as a projection from R onto the nonnegative half-line R+.
We extend ReLU by substituting it with a generalized projection operator onto a convex cone, such as the Second-Order Cone (SOC) projection.
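To make the projection view concrete: element-wise ReLU is the Euclidean projection onto the nonnegative orthant, and the generalization replaces it with the projection onto a convex cone such as the second-order cone {(x, t) : ||x|| <= t}. A minimal sketch follows; how features are grouped into (x, t) blocks inside a network is an assumption left out here.

```python
import torch

def relu_as_projection(z):
    # ReLU(z) == Euclidean projection of z onto the nonnegative orthant.
    return torch.clamp(z, min=0.0)

def soc_projection(x, t):
    """Euclidean projection of (x, t) onto the second-order cone {(x, t): ||x|| <= t}.
    x: (..., d), t: (...,); returns the projected pair (x_proj, t_proj)."""
    norm = x.norm(dim=-1)
    inside = norm <= t        # already inside the cone: unchanged
    below = norm <= -t        # inside the polar cone: project to the origin
    scale = torch.where(inside, torch.ones_like(norm),
                        (norm + t) / (2.0 * norm.clamp(min=1e-12)))
    scale = torch.where(below, torch.zeros_like(scale), scale)
    x_proj = x * scale.unsqueeze(-1)
    t_proj = torch.where(inside, t,
                         torch.where(below, torch.zeros_like(t), (norm + t) / 2.0))
    return x_proj, t_proj
```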
arXiv Detail & Related papers (2023-09-29T12:44:27Z)
- Regularize implicit neural representation by itself [48.194276790352006]
This paper proposes a regularizer called Implicit Neural Representation Regularizer (INRR) to improve the generalization ability of the Implicit Neural Representation (INR).
The proposed INRR is based on learned Dirichlet Energy (DE) that measures similarities between rows/columns of the matrix.
The paper also reveals a series of properties derived from INRR, including its behavior under momentum methods, its convergence trajectory, and multi-scale similarity.
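A minimal sketch of a Dirichlet-energy style regularizer on the rows of the represented matrix, assuming a given nonnegative row-similarity matrix W (in INRR the similarity is itself learned, which is omitted here):

```python
import torch

def dirichlet_energy(X, W):
    """tr(X^T L X) with graph Laplacian L = D - W, which equals
    0.5 * sum_ij W_ij * ||X_i - X_j||^2 for row similarities W.
    X: (n, m) matrix produced by the INR, W: (n, n) nonnegative similarities."""
    L = torch.diag(W.sum(dim=1)) - W
    return torch.trace(X.t() @ L @ X)

# Usage sketch: total_loss = fitting_loss + lam * dirichlet_energy(X_pred, W_rows)
# (optionally plus the same term on X_pred.t() for column similarities).
```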
arXiv Detail & Related papers (2023-03-27T04:11:08Z)
- DINER: Disorder-Invariant Implicit Neural Representation [33.10256713209207]
Implicit neural representation (INR) characterizes the attributes of a signal as a function of corresponding coordinates.
We propose the disorder-invariant implicit neural representation (DINER) by augmenting a hash-table to a traditional INR backbone.
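A rough sketch of the hash-table idea under simplifying assumptions: a full-resolution learnable table maps each discrete input coordinate to a low-dimensional latent coordinate, which a conventional INR backbone then decodes. The table layout and backbone below are illustrative, not DINER's exact configuration.

```python
import torch
import torch.nn as nn

class HashTableINR(nn.Module):
    """Sketch: learnable per-coordinate entries feed a conventional MLP backbone,
    making the representation invariant to how the input coordinates are ordered."""

    def __init__(self, num_coords, mapped_dim=2, hidden=64, out_dim=3):
        super().__init__()
        self.table = nn.Parameter(torch.rand(num_coords, mapped_dim))  # one entry per coordinate
        self.backbone = nn.Sequential(
            nn.Linear(mapped_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, idx):
        # idx: (N,) integer indices of the queried discrete coordinates (e.g. pixels).
        return self.backbone(self.table[idx])
```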
arXiv Detail & Related papers (2022-11-15T03:34:24Z)
- Learning an Invertible Output Mapping Can Mitigate Simplicity Bias in Neural Networks [66.76034024335833]
We investigate why networks remain brittle even though diverse/complex features are learned by the backbone, and find that the brittleness is due to the linear classification head relying primarily on the simplest features.
We propose Feature Reconstruction Regularizer (FRR) to ensure that the learned features can be reconstructed back from the logits.
We demonstrate up to 15% gains in OOD accuracy on the recently introduced semi-synthetic datasets with extreme distribution shifts.
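A minimal sketch of the regularizer's shape, assuming a linear decoder from logits back to feature space and an MSE reconstruction penalty (the exact form and weighting in the paper may differ):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def feature_reconstruction_loss(features, logits, decoder, lam=0.1):
    """Penalize how poorly the backbone features can be reconstructed from the logits,
    encouraging the head to retain information about complex features."""
    reconstructed = decoder(logits)               # map logits back to feature space
    return lam * F.mse_loss(reconstructed, features)

# Usage sketch (dimensions are assumptions):
# decoder = nn.Linear(num_classes, feature_dim)
# loss = F.cross_entropy(logits, labels) + feature_reconstruction_loss(features, logits, decoder)
```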
arXiv Detail & Related papers (2022-10-04T04:01:15Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction [72.05649682685197]
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their performance drops for larger, more complex scenes and for sparse input views.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in the area of monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction.
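As a hedged illustration of how such cues are commonly folded into the objective, the RGB reconstruction term is augmented with depth and normal consistency terms against the monocular predictions. The specific distance measures and weights below are assumptions, not the paper's exact formulation.

```python
import torch

def cue_augmented_loss(rgb_pred, rgb_gt, depth_pred, depth_mono,
                       normal_pred, normal_mono, w_depth=0.1, w_normal=0.05):
    """Sketch: RGB reconstruction loss plus monocular depth and normal consistency."""
    l_rgb = (rgb_pred - rgb_gt).abs().mean()                           # photometric term
    l_depth = (depth_pred - depth_mono).abs().mean()                   # depth consistency
    l_normal = (1.0 - (normal_pred * normal_mono).sum(dim=-1)).mean()  # angular consistency
    return l_rgb + w_depth * l_depth + w_normal * l_normal
```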
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)
- Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides new insight into conventional SISR algorithms and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z)