Regularize implicit neural representation by itself
- URL: http://arxiv.org/abs/2303.15484v1
- Date: Mon, 27 Mar 2023 04:11:08 GMT
- Title: Regularize implicit neural representation by itself
- Authors: Zhemin Li, Hongxia Wang, Deyu Meng
- Abstract summary: This paper proposes a regularizer called Implicit Neural Representation Regularizer (INRR) to improve the generalization ability of the Implicit Neural Representation (INR).
The proposed INRR is based on a learned Dirichlet Energy (DE) that measures similarities between rows/columns of the matrix.
The paper also reveals a series of properties derived from INRR, including momentum-like behavior in the convergence trajectory and multi-scale similarity.
- Score: 48.194276790352006
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a regularizer called Implicit Neural Representation
Regularizer (INRR) to improve the generalization ability of the Implicit Neural
Representation (INR). The INR is a fully connected network that can represent
signals with details not restricted by grid resolution. However, its
generalization ability could be improved, especially with non-uniformly sampled
data. The proposed INRR is based on learned Dirichlet Energy (DE) that measures
similarities between rows/columns of the matrix. The smoothness of the
Laplacian matrix is further integrated by parameterizing DE with a tiny INR.
INRR improves the generalization of INR in signal representation by perfectly
integrating the signal's self-similarity with the smoothness of the Laplacian
matrix. Through well-designed numerical experiments, the paper also reveals a
series of properties derived from INRR, including momentum-like behavior in the
convergence trajectory and multi-scale similarity. Moreover, the proposed
method could improve the performance of other signal representation methods.
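The abstract's core mechanism, a Dirichlet Energy computed over a graph whose structure is itself produced by a tiny coordinate network, can be sketched roughly as below. This is an illustrative reconstruction, not the authors' released code: the network sizes, the softmax similarity, and the names `TinyINR` and `learned_dirichlet_energy` are assumptions.

```python
import torch
import torch.nn as nn

class TinyINR(nn.Module):
    """Small coordinate network: maps a row index in [0, 1] to an embedding."""
    def __init__(self, hidden=16, out_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords):              # coords: (n, 1)
        return self.net(coords)             # (n, out_dim)

def learned_dirichlet_energy(X, tiny_inr):
    """Dirichlet energy tr(X^T L X) with L derived from a learned adjacency."""
    n = X.shape[0]
    coords = torch.linspace(0.0, 1.0, n).unsqueeze(1)   # row coordinates
    z = tiny_inr(coords)                                 # row embeddings
    A = torch.softmax(z @ z.t(), dim=1)                  # learned row similarity
    A = 0.5 * (A + A.t())                                # symmetrize
    L = torch.diag(A.sum(dim=1)) - A                     # graph Laplacian
    return torch.trace(X.t() @ L @ X) / n

# Illustrative usage: add the penalty to the ordinary INR fitting loss,
# e.g. loss = mse(inr_output, observed_pixels) + lam * learned_dirichlet_energy(inr_image, tiny_inr)
```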
Related papers
- ISR: Invertible Symbolic Regression [7.499800486499609]
Invertible Symbolic Regression is a machine learning technique that generates analytical relationships between inputs and outputs of a given dataset via invertible maps.
We transform the affine coupling blocks of INNs into a symbolic framework, resulting in an end-to-end differentiable symbolic invertible architecture.
We show that ISR can serve as a (symbolic) normalizing flow for density estimation tasks.
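For context on the entry above, a standard affine coupling block (the INN ingredient that ISR reportedly makes symbolic) looks roughly like the sketch below; the hidden sizes and Tanh networks are placeholders, and ISR would replace the learned scale/shift maps with symbolic expressions.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Generic affine coupling block; a symbolic variant would swap the
    scale/shift networks for closed-form expressions."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.scale = nn.Sequential(nn.Linear(self.d, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim - self.d))
        self.shift = nn.Sequential(nn.Linear(self.d, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim - self.d))

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        s, t = self.scale(x1), self.shift(x1)
        y2 = x2 * torch.exp(s) + t          # invertible given x1
        log_det = s.sum(dim=1)              # log|det Jacobian| for flow training
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self.scale(y1), self.shift(y1)
        x2 = (y2 - t) * torch.exp(-s)
        return torch.cat([y1, x2], dim=1)
```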
arXiv Detail & Related papers (2024-05-10T23:20:46Z)
- Disorder-invariant Implicit Neural Representation [32.510321385245774]
Implicit neural representation (INR) characterizes the attributes of a signal as a function of corresponding coordinates.
We propose the disorder-invariant implicit neural representation (DINER) by augmenting a hash-table to a traditional INR backbone.
arXiv Detail & Related papers (2023-04-03T09:28:48Z)
- DINER: Disorder-Invariant Implicit Neural Representation [33.10256713209207]
Implicit neural representation (INR) characterizes the attributes of a signal as a function of corresponding coordinates.
We propose the disorder-invariant implicit neural representation (DINER) by augmenting a hash-table to a traditional INR backbone.
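One plausible reading of the hash-table augmentation described in the two DINER entries above is a learnable lookup from coordinate index to input features for the MLP backbone. The sketch below uses a full embedding table as a stand-in for the actual hash encoding, so treat the details as assumptions.

```python
import torch
import torch.nn as nn

class HashTableINR(nn.Module):
    """Lookup-table-augmented INR: each grid index retrieves a learnable feature
    (a full table here, standing in for a hash table) fed to the MLP backbone."""
    def __init__(self, num_coords, feat_dim=2, hidden=64, out_dim=1):
        super().__init__()
        self.table = nn.Embedding(num_coords, feat_dim)   # learnable mapped coordinates
        self.backbone = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, idx):            # idx: (n,) integer pixel indices
        return self.backbone(self.table(idx))
```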
arXiv Detail & Related papers (2022-11-15T03:34:24Z)
- Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multi-media data via multi-layer perceptrons.
Existing works manipulate such continuous representations via processing on their discretized instance.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
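Independently of the INSP-Net architecture itself, differential operators on an INR can be evaluated directly with automatic differentiation, which is the generic building block the entry above relies on; the toy MLP and function name below are illustrative only.

```python
import torch
import torch.nn as nn

def spatial_gradient(inr, coords):
    """Gradient of an INR's scalar output w.r.t. its input coordinates,
    computed with autograd (a generic building block, not INSP-Net itself)."""
    coords = coords.clone().requires_grad_(True)
    y = inr(coords).sum()                        # each output depends only on its own point
    (grad,) = torch.autograd.grad(y, coords, create_graph=True)
    return grad                                  # same shape as coords

# Example with a toy MLP standing in for an INR.
inr = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
pts = torch.rand(128, 2)
g = spatial_gradient(inr, pts)                   # (128, 2) per-point gradients
```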
arXiv Detail & Related papers (2022-10-17T06:29:07Z)
- Transformers as Meta-Learners for Implicit Neural Representations [10.673855995948736]
Implicit Neural Representations (INRs) have emerged and shown their benefits over discrete representations in recent years.
We propose a formulation that uses Transformers as hypernetworks for INRs, which can directly build the whole set of INR weights.
We demonstrate the effectiveness of our method for building INRs in different tasks and domains, including 2D image regression and view synthesis for 3D objects.
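A rough sketch of the hypernetwork idea in the entry above: a Transformer encoder reads observation tokens and emits the flattened weight vector of a small target INR. The token design, query mechanism, and target sizes here are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TransformerHyperINR(nn.Module):
    """Transformer hypernetwork that outputs the full weight vector of a small MLP INR."""
    def __init__(self, token_dim=64, n_tokens=16, inr_sizes=((2, 32), (32, 32), (32, 1))):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=token_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.queries = nn.Parameter(torch.randn(n_tokens, token_dim))   # learned weight queries
        n_params = sum(i * o + o for i, o in inr_sizes)                 # weights + biases
        self.to_weights = nn.Linear(n_tokens * token_dim, n_params)
        self.inr_sizes = inr_sizes

    def forward(self, obs_tokens):              # obs_tokens: (B, T, token_dim)
        B = obs_tokens.shape[0]
        q = self.queries.expand(B, -1, -1)
        h = self.encoder(torch.cat([q, obs_tokens], dim=1))[:, : self.queries.shape[0]]
        theta = self.to_weights(h.reshape(B, -1))   # flat INR weights
        return theta                                # split/reshape per layer downstream
```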
arXiv Detail & Related papers (2022-08-04T17:54:38Z)
- Multi-Head ReLU Implicit Neural Representation Networks [3.04585143845864]
A novel multi-head multi-layer perceptron (MLP) structure is presented for implicit neural representation (INR).
We show that the proposed model does not suffer from the special bias of conventional ReLU networks and has superior capabilities.
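One minimal interpretation of the multi-head structure mentioned above is a shared trunk feeding several output heads; how head outputs are assigned to regions of the signal is left unspecified here and is an assumption.

```python
import torch
import torch.nn as nn

class MultiHeadINR(nn.Module):
    """Shared trunk with several small output heads; one possible reading of a
    multi-head MLP INR, with the region assignment left to the training setup."""
    def __init__(self, in_dim=2, hidden=64, n_heads=4, out_dim=1):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, out_dim) for _ in range(n_heads)])

    def forward(self, coords):                   # coords: (n, in_dim)
        h = self.trunk(coords)
        return torch.stack([head(h) for head in self.heads], dim=1)  # (n, n_heads, out_dim)
```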
arXiv Detail & Related papers (2021-10-07T13:27:35Z)
- Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantee.
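To make the fusion idea in the entry above concrete, an illustrative objective combines per-graph Dirichlet smoothness with a penalty tying the Laplacians together; this is a generic form, not the paper's estimator or its theoretical setup.

```python
import torch

def joint_topology_objective(laplacians, signals, gamma=0.1):
    """Illustrative joint graph-learning objective: per-graph smoothness
    tr(X_k^T L_k X_k) plus a fusion penalty on pairwise Laplacian differences."""
    smooth = sum(torch.trace(X.t() @ L @ X) for L, X in zip(laplacians, signals))
    fuse = sum(torch.norm(laplacians[i] - laplacians[j], p="fro") ** 2
               for i in range(len(laplacians)) for j in range(i + 1, len(laplacians)))
    return smooth + gamma * fuse
```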
arXiv Detail & Related papers (2021-03-05T04:42:32Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Controllable Orthogonalization in Training DNNs [96.1365404059924]
Orthogonality is widely used for training deep neural networks (DNNs) due to its ability to maintain all singular values of the Jacobian close to 1.
This paper proposes a computationally efficient and numerically stable orthogonalization method using Newton's iteration (ONI).
We show that our method improves the performance of image classification networks by effectively controlling the orthogonality to provide an optimal tradeoff between optimization benefits and representational capacity reduction.
We also show that ONI stabilizes the training of generative adversarial networks (GANs) by maintaining the Lipschitz continuity of a network, similar to spectral normalization.
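The Newton's-iteration orthogonalization referenced above is typically realized with a Newton-Schulz recursion for the inverse square root of W W^T; the sketch below shows that generic recursion, with the scaling and iteration count chosen for illustration rather than taken from the paper.

```python
import torch

def newton_orthogonalize(W, iters=10, eps=1e-6):
    """Approximate orthogonalization of W via Newton-Schulz iteration toward
    (W W^T)^{-1/2} W; a generic sketch in the spirit of ONI."""
    out_dim = W.shape[0]
    V = W / (W.norm() + eps)                 # scale so the iteration converges
    S = V @ V.t()
    B = torch.eye(out_dim, device=W.device, dtype=W.dtype)
    for _ in range(iters):
        B = 0.5 * (3.0 * B - B @ B @ B @ S)  # Newton-Schulz step toward S^{-1/2}
    return B @ V                             # rows approximately orthonormal

W = torch.randn(16, 64)
Q = newton_orthogonalize(W)
print((Q @ Q.t() - torch.eye(16)).abs().max())   # deviation shrinks as iters grows
```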
arXiv Detail & Related papers (2020-04-02T10:14:27Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate the cost of explicitly modelling every such interaction, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
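A minimal sketch of representing interaction parameters as a tensor in CP form, in the spirit of the entry above: the order-3 weight tensor is never materialized, only its three factor matrices, and the prediction is computed by contracting those factors with the features; the sizes and the final contraction are assumptions.

```python
import torch
import torch.nn as nn

class CPInteractionModel(nn.Module):
    """Order-3 feature-interaction model whose weight tensor is kept in CP form
    (three factor matrices); a generic sketch of the idea, not the paper's model."""
    def __init__(self, in_dim, rank=8):
        super().__init__()
        self.factors = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(in_dim, rank)) for _ in range(3)]
        )

    def forward(self, x):                       # x: (B, in_dim)
        # Projecting x onto each factor and multiplying over modes is equivalent to
        # contracting the full CP tensor sum_r a_r (x) b_r (x) c_r with x three times.
        proj = [x @ A for A in self.factors]    # each (B, rank)
        return (proj[0] * proj[1] * proj[2]).sum(dim=1)   # (B,)
```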
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.