Inductive Gradient Adjustment For Spectral Bias In Implicit Neural Representations
- URL: http://arxiv.org/abs/2410.13271v1
- Date: Thu, 17 Oct 2024 06:51:10 GMT
- Title: Inductive Gradient Adjustment For Spectral Bias In Implicit Neural Representations
- Authors: Kexuan Shi, Hai Chen, Leheng Zhang, Shuhang Gu
- Abstract summary: Implicit Neural Representations (INRs) have achieved success in various computer vision tasks.
Due to the spectral bias of the vanilla multi-layer perceptrons (MLPs), existing methods focus on designing MLPs with sophisticated architectures or repurposing training techniques for highly accurate INRs.
We propose a practical inductive gradient adjustment method, which can purposefully improve the spectral bias via inductive generalization of an eNTK-based gradient transformation matrix.
- Score: 17.832898905413877
- License:
- Abstract: Implicit Neural Representations (INRs), as a versatile representation paradigm, have achieved success in various computer vision tasks. Due to the spectral bias of the vanilla multi-layer perceptrons (MLPs), existing methods focus on designing MLPs with sophisticated architectures or repurposing training techniques for highly accurate INRs. In this paper, we delve into the linear dynamics model of MLPs and theoretically identify the empirical Neural Tangent Kernel (eNTK) matrix as a reliable link between spectral bias and training dynamics. Based on the eNTK matrix, we propose a practical inductive gradient adjustment method, which can purposefully improve the spectral bias via inductive generalization of an eNTK-based gradient transformation matrix. We evaluate our method on different INR tasks with various INR architectures and compare it to existing training techniques. The superior representation performance clearly validates the advantage of our proposed method. Armed with our gradient adjustment method, better INRs with enhanced texture details and sharper edges can be learned from data through tailored improvements on spectral bias.
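To make the eNTK connection above concrete, here is a minimal, illustrative sketch of computing an empirical NTK for a small coordinate MLP and using it to precondition a gradient step. This is not the authors' released code: the damping constant `eps` and the plain Gauss-Newton-style transformation are our assumptions, and the paper's actual inductive generalization of the transformation matrix differs.

```python
import torch
import torch.nn as nn

# Minimal sketch: empirical NTK of a small coordinate MLP, used to
# precondition a gradient step. Illustrative only; `eps` and the
# Gauss-Newton-style transform are assumptions, not the paper's
# exact inductive adjustment scheme.

def flat_grad(output, params):
    """Gradient of a scalar output w.r.t. all parameters, flattened."""
    grads = torch.autograd.grad(output, params)
    return torch.cat([g.reshape(-1) for g in grads])

def empirical_ntk(model, xs):
    """eNTK matrix K[i, j] = <grad_theta f(x_i), grad_theta f(x_j)>."""
    params = [p for p in model.parameters() if p.requires_grad]
    rows = [flat_grad(model(x.unsqueeze(0)).squeeze(), params) for x in xs]
    J = torch.stack(rows)            # Jacobian, shape (n, n_params)
    return J @ J.T, J                # K has shape (n, n)

model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
xs = torch.linspace(-1, 1, 32).unsqueeze(1)   # 1-D coordinates
ys = torch.sin(8 * torch.pi * xs.squeeze())   # high-frequency target

K, J = empirical_ntk(model, xs)
with torch.no_grad():
    preds = model(xs).squeeze(1)
residual = preds - ys

# NTK-preconditioned (Gauss-Newton-style) update direction: components
# aligned with small NTK eigenvalues -- typically high frequencies --
# are no longer suppressed, counteracting spectral bias.
eps = 1e-3                                    # assumed damping constant
adjusted = J.T @ torch.linalg.solve(K + eps * torch.eye(len(xs)), residual)
```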
Related papers
- Approaching Deep Learning through the Spectral Dynamics of Weights [41.948042468042374]
We study the spectral dynamics of weights -- the behavior of singular values and vectors during optimization -- to clarify and unify several phenomena in deep learning.
We identify a consistent bias in optimization across various experiments, from small-scale "grokking" to large-scale tasks like image classification with ConvNets, image generation with UNets, speech recognition with LSTMs, and language modeling with Transformers.
arXiv Detail & Related papers (2024-08-21T17:48:01Z)
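As a companion to the entry above, here is a minimal sketch (our assumption of a typical setup, not the paper's code) that records the singular values of every linear layer's weight matrix over SGD steps, which is the raw signal behind the "spectral dynamics of weights".

```python
import torch
import torch.nn as nn

# Minimal sketch (not the paper's code): track singular values of each
# weight matrix during optimization to observe their spectral dynamics.

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
history = []  # per-step list of singular-value tensors, one per layer

x, y = torch.randn(256, 10), torch.randn(256, 1)
for step in range(100):
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        history.append([torch.linalg.svdvals(m.weight)
                        for m in model if isinstance(m, nn.Linear)])
```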
- Spectral Adapter: Fine-Tuning in Spectral Space [45.72323731094864]
We study the enhancement of current parameter-efficient fine-tuning (PEFT) methods by incorporating the spectral information of pretrained weight matrices into the fine-tuning procedure.
We show through extensive experiments that the proposed fine-tuning model enables better parameter efficiency and tuning performance and also benefits multi-adapter fusion.
arXiv Detail & Related papers (2024-05-22T19:36:55Z)
- Improved Implicit Neural Representation with Fourier Reparameterized Training [21.93903328906775]
Implicit Neural Representation (INR), as a powerful representation paradigm, has achieved success in various computer vision tasks recently.
Existing methods have investigated advanced techniques, such as positional encoding and periodic activation functions, to improve the accuracy of INRs.
arXiv Detail & Related papers (2024-01-15T00:40:41Z)
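Positional encoding, mentioned in the INR entry above, is commonly implemented as a random Fourier-feature mapping. Below is a minimal sketch in that style; the frequency scale `sigma` and feature count `num_feats` are assumed hyperparameters, not values from the paper.

```python
import torch

# Minimal sketch of a random Fourier-feature positional encoding, a
# standard remedy for spectral bias in INRs. `sigma` and `num_feats`
# are assumed hyperparameters, not values from the papers above.

def fourier_features(coords, B):
    """Map coordinates x -> [sin(2*pi*B x), cos(2*pi*B x)]."""
    proj = 2 * torch.pi * coords @ B.T        # shape (n, num_feats)
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

sigma, num_feats, in_dim = 10.0, 128, 2       # assumed hyperparameters
B = sigma * torch.randn(num_feats, in_dim)    # fixed random frequencies

coords = torch.rand(1024, in_dim)             # e.g. 2-D pixel coordinates
encoded = fourier_features(coords, B)         # feed this to the MLP
```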
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
PINNs can fail to train when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs in order to improve the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Supernova Light Curves Approximation based on Neural Network Models [53.180678723280145]
Photometric data-driven classification of supernovae has become a challenge with the advent of real-time processing of big data in astronomy.
Recent studies have demonstrated the superior quality of solutions based on various machine learning models.
We study the application of multilayer perceptrons (MLPs), Bayesian neural networks (BNNs), and normalizing flows (NFs) to approximate observations for a single light curve.
arXiv Detail & Related papers (2022-06-27T13:46:51Z)
- Accelerated MRI With Deep Linear Convolutional Transform Learning [7.927206441149002]
Recent studies show that deep learning-based MRI reconstruction outperforms conventional methods in multiple applications.
In this work, we combine ideas from compressed sensing (CS), transform learning (TL), and deep learning (DL) reconstructions to learn deep linear convolutional transforms.
Our results show that the proposed technique can reconstruct MR images to a level comparable to DL methods, while supporting uniform undersampling patterns.
arXiv Detail & Related papers (2022-04-17T04:47:32Z)
- Tensor Normal Training for Deep Learning Models [10.175972095073282]
We propose and analyze a new approximate natural gradient method, Tensor Normal Training (TNT).
In our experiments, TNT exhibited superior optimization performance to first-order methods.
arXiv Detail & Related papers (2021-06-05T15:57:22Z)
- Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting, and of both compression and improved training stability in the generative adversarial training setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z)
- Reintroducing Straight-Through Estimators as Principled Methods for Stochastic Binary Networks [85.94999581306827]
Training neural networks with binary weights and activations is a challenging problem due to the lack of gradients and difficulty of optimization over discrete weights.
Many successful experimental results have been achieved with empirical straight-through (ST) approaches.
At the same time, ST methods can be formally derived as estimators in the stochastic binary network (SBN) model with Bernoulli weights.
arXiv Detail & Related papers (2020-06-11T23:58:18Z)
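For reference, the empirical straight-through (ST) approach discussed above is often implemented as a custom autograd function that binarizes on the forward pass and passes gradients through unchanged on the backward pass. The sketch below is this textbook deterministic variant, not the paper's principled SBN derivation.

```python
import torch

# Minimal sketch of the classic straight-through estimator: binarize in
# the forward pass, pass the gradient through unchanged in the backward
# pass. Textbook version, not the paper's principled SBN derivation.

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Map to {-1, +1} (torch.where avoids sign(0) == 0).
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output            # identity gradient ("straight through")

x = torch.randn(8, requires_grad=True)
y = BinarizeSTE.apply(x).sum()
y.backward()                          # x.grad is all ones
```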
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
- Kernel and Rich Regimes in Overparametrized Models [69.40899443842443]
We show that gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms.
We also demonstrate the transition from the kernel regime to these rich implicit biases empirically for more complex matrix factorization models and multilayer non-linear networks.
arXiv Detail & Related papers (2020-02-20T15:43:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.