Nonparametric Teaching of Implicit Neural Representations
- URL: http://arxiv.org/abs/2405.10531v1
- Date: Fri, 17 May 2024 04:20:39 GMT
- Title: Nonparametric Teaching of Implicit Neural Representations
- Authors: Chen Zhang, Steven Tin Sui Luo, Jason Chun Lok Li, Yik-Chung Wu, Ngai Wong
- Abstract summary: We show for the first time that teaching an overparameterized multilayer perceptron is consistent with teaching a nonparametric learner.
This new discovery permits a convenient drop-in of nonparametric teaching algorithms to broadly enhance INR training efficiency, demonstrating 30%+ training time savings across various input modalities.
- Score: 21.313485818701434
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the learning of implicit neural representation (INR) using an overparameterized multilayer perceptron (MLP) via a novel nonparametric teaching perspective. The latter offers an efficient example selection framework for teaching nonparametrically defined (viz. non-closed-form) target functions, such as image functions defined by 2D grids of pixels. To address the costly training of INRs, we propose a paradigm called Implicit Neural Teaching (INT) that treats INR learning as a nonparametric teaching problem, where the given signal being fitted serves as the target function. The teacher then selects signal fragments for iterative training of the MLP to achieve fast convergence. By establishing a connection between MLP evolution through parameter-based gradient descent and that of function evolution through functional gradient descent in nonparametric teaching, we show for the first time that teaching an overparameterized MLP is consistent with teaching a nonparametric learner. This new discovery readily permits a convenient drop-in of nonparametric teaching algorithms to broadly enhance INR training efficiency, demonstrating 30%+ training time savings across various input modalities.
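The INT loop described in the abstract has a simple structure: the teacher scores how well the current MLP fits each location of the signal, hands the worst-fit fragment to the learner, and the learner takes a gradient step on that fragment only. The sketch below illustrates this structure for a 2D image; the plain ReLU MLP, the greedy pointwise-error selection rule, and all hyperparameters are illustrative assumptions rather than the authors' exact algorithm.
```python
# Hedged sketch of an INT-style loop: the teacher picks the pixels the MLP
# currently fits worst, and the learner updates only on that fragment.
# Architecture, selection rule, and hyperparameters are assumptions.
import torch
import torch.nn as nn

def make_mlp(hidden=256, depth=4):
    layers, d_in = [], 2                        # (x, y) coordinate input
    for _ in range(depth):
        layers += [nn.Linear(d_in, hidden), nn.ReLU()]
        d_in = hidden
    layers += [nn.Linear(d_in, 1)]              # grayscale intensity output
    return nn.Sequential(*layers)

def int_style_training(image, steps=1000, fragment_size=1024, lr=1e-3):
    """image: (H, W) float tensor in [0, 1], treated as the target function."""
    H, W = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # all pixel coordinates
    targets = image.reshape(-1, 1)
    mlp = make_mlp()
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    k = min(fragment_size, coords.shape[0])
    for _ in range(steps):
        with torch.no_grad():                   # teacher: score every pixel
            err = (mlp(coords) - targets).abs().squeeze(-1)
        idx = err.topk(k).indices               # worst-fit fragment
        opt.zero_grad()
        loss = ((mlp(coords[idx]) - targets[idx]) ** 2).mean()
        loss.backward()                         # learner: update on the fragment only
        opt.step()
    return mlp
```
The point of the sketch is only the select-then-update pattern; the reported 30%+ training-time savings come from the paper's own selection strategy and schedule, not from these particular choices.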
Related papers
- Meta-INR: Efficient Encoding of Volumetric Data via Meta-Learning Implicit Neural Representation [4.782024723712711]
Implicit neural representation (INR) has emerged as a promising solution for encoding volumetric data.
We propose Meta-INR, a pretraining strategy adapted from meta-learning algorithms to learn initial INR parameters from partial observation of a dataset.
We demonstrate that Meta-INR can effectively extract high-quality generalizable features that help encode unseen similar volume data across diverse datasets.
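The entry above describes learning initial INR parameters via meta-learning so that new, similar volumes can be encoded from a warm start. Meta-INR's exact procedure is not reproduced here; the sketch below shows one generic way to meta-learn such an initialization (a Reptile-style outer update over sampled signals), and every detail in it, including the inner and outer step sizes, is an assumption for illustration.
```python
# Hedged, generic sketch of meta-learning an INR initialization: adapt a shared
# init to one sampled signal for a few steps, then nudge the init toward the
# adapted weights (Reptile-style). Not the authors' exact Meta-INR algorithm.
import copy
import random
import torch

def meta_init_inr(signals, make_mlp, inner_steps=16, inner_lr=1e-3,
                  meta_lr=0.1, meta_iters=200):
    """signals: list of (coords, values) tensor pairs, one partial observation each.
    make_mlp: any constructor returning a coordinate MLP (e.g. the one sketched above)."""
    meta_mlp = make_mlp()
    for _ in range(meta_iters):
        coords, values = random.choice(signals)
        fast = copy.deepcopy(meta_mlp)                    # clone the shared init
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                      # adapt to this signal
            opt.zero_grad()
            loss = ((fast(coords) - values) ** 2).mean()
            loss.backward()
            opt.step()
        with torch.no_grad():                             # Reptile-style outer update
            for p_meta, p_fast in zip(meta_mlp.parameters(), fast.parameters()):
                p_meta += meta_lr * (p_fast - p_meta)
    return meta_mlp                                       # warm start for unseen signals
```
A new volume would then be fitted by fine-tuning a copy of the returned initialization for a handful of steps.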
arXiv Detail & Related papers (2025-02-12T21:54:22Z) - ALoRE: Efficient Visual Adaptation via Aggregating Low Rank Experts [71.91042186338163]
ALoRE is a novel parameter-efficient transfer learning (PETL) method that reuses the hypercomplex parameterized space constructed by the Kronecker product to aggregate low-rank experts.
Thanks to the artful design, ALoRE maintains negligible extra parameters and can be effortlessly merged into the frozen backbone.
arXiv Detail & Related papers (2024-12-11T12:31:30Z) - Attention Beats Linear for Fast Implicit Neural Representation Generation [13.203243059083533]
We propose Attention-based Localized INR (ANR) composed of a localized attention layer (LAL) and a global representation vector.
With instance-specific representations and instance-agnostic ANR parameters, target signals are well reconstructed as continuous functions.
arXiv Detail & Related papers (2024-07-22T03:52:18Z) - Improved Implicit Neural Representation with Fourier Reparameterized Training [21.93903328906775]
Implicit Neural Representation (INR), as a powerful representation paradigm, has recently achieved success in various computer vision tasks.
Existing methods have investigated advanced techniques, such as positional encoding and periodic activation functions, to improve the accuracy of INRs.
arXiv Detail & Related papers (2024-01-15T00:40:41Z) - Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which largely increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer but more informative context tokens, relieving the high computational cost (see the sketch after this entry).
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
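The bottleneck encoder mentioned in this entry is, in spirit, a compression step: many context tokens are summarized into a small set of latent tokens before any expensive processing. The sketch below shows one common way to build such a bottleneck with cross-attention; the token counts, widths, and single-layer design are illustrative assumptions, not the VNP architecture.
```python
# Hedged sketch of a cross-attention bottleneck: a few learned latent tokens
# attend to a large set of context tokens, so later layers see far fewer tokens.
import torch
import torch.nn as nn

class BottleneckEncoder(nn.Module):
    def __init__(self, dim=128, num_latents=32, num_heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, context_tokens):
        # context_tokens: (batch, num_context, dim), e.g. embedded (coord, value) pairs
        b = context_tokens.shape[0]
        queries = self.latents.unsqueeze(0).expand(b, -1, -1)
        compressed, _ = self.cross_attn(queries, context_tokens, context_tokens)
        return self.proj(compressed)            # (batch, num_latents, dim)

# Example: compress 4096 context tokens into 32 latent tokens.
enc = BottleneckEncoder()
tokens = torch.randn(2, 4096, 128)
print(enc(tokens).shape)                        # torch.Size([2, 32, 128])
```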
arXiv Detail & Related papers (2023-01-21T04:08:46Z) - Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - Meta-Learning Sparse Implicit Neural Representations [69.15490627853629]
Implicit neural representations are a promising new avenue for representing general signals.
Current approaches are difficult to scale to a large number of signals or to large datasets.
We show that meta-learned sparse neural representations achieve a much smaller loss than dense meta-learned models.
arXiv Detail & Related papers (2021-10-27T18:02:53Z) - Gradient-Free Neural Network Training via Synaptic-Level Reinforcement Learning [0.0]
It is widely believed that there is a consistent, synaptic-level learning mechanism in specific brain regions that actualizes learning.
Here we propose an algorithm based on reinforcement learning to generate and apply a simple synaptic-level learning policy.
The robustness and lack of reliance on gradients open the door to new techniques for training difficult-to-differentiate neural networks.
arXiv Detail & Related papers (2021-05-29T22:26:18Z) - Adaptive Gradient Method with Resilience and Momentum [120.83046824742455]
We propose an Adaptive Gradient Method with Resilience and Momentum (AdaRem).
AdaRem adjusts each parameter's learning rate according to whether the direction of that parameter's past changes is aligned with the direction of the current gradient (see the sketch after this entry).
Our method outperforms previous adaptive learning rate-based algorithms in terms of training speed and test error.
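The rule summarized above is per-parameter: remember the direction of past updates, then scale each parameter's step up when that history agrees with the current descent direction and down when it disagrees. The toy step below illustrates that alignment test only; the stored state, the EMA coefficient, and the scaling factor are assumptions, not AdaRem's published update.
```python
# Toy sketch of an alignment-scaled SGD step in the spirit of the summary above:
# boost a parameter's step when its remembered update direction agrees with the
# current descent direction, shrink it otherwise. Not the exact AdaRem rule.
import torch

@torch.no_grad()
def alignment_scaled_sgd_step(params, lr=1e-2, beta=0.9, boost=0.5):
    """Call after loss.backward(), in place of optimizer.step()."""
    for p in params:
        if p.grad is None:
            continue
        if not hasattr(p, "_past_update"):
            p._past_update = torch.zeros_like(p)          # per-parameter memory
        # +1 where remembered updates point along -grad, -1 where they oppose it
        align = torch.sign(p._past_update * (-p.grad))
        step = -lr * (1.0 + boost * align) * p.grad       # per-parameter scaling
        p.add_(step)
        p._past_update.mul_(beta).add_(step, alpha=1 - beta)
```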
arXiv Detail & Related papers (2020-10-21T14:49:00Z) - A Novel Neural Network Training Framework with Data Assimilation [2.948167339160823]
A gradient-free training framework based on data assimilation is proposed to avoid the calculation of gradients.
The results show that the proposed training framework performs better than the gradient descent method.
arXiv Detail & Related papers (2020-10-06T11:12:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.