Improved Implicit Neural Representation with Fourier Reparameterized Training
- URL: http://arxiv.org/abs/2401.07402v4
- Date: Thu, 4 Jul 2024 11:24:03 GMT
- Title: Improved Implicit Neural Representation with Fourier Reparameterized Training
- Authors: Kexuan Shi, Xingyu Zhou, Shuhang Gu
- Abstract summary: Implicit Neural Representation (INR) is a powerful representation paradigm that has recently achieved success in various computer vision tasks.
Existing methods have investigated advanced techniques, such as positional encoding and periodic activation functions, to improve the accuracy of INR.
- Score: 21.93903328906775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit Neural Representation (INR) is a powerful representation paradigm that has recently achieved success in various computer vision tasks. Due to the low-frequency bias of the vanilla multi-layer perceptron (MLP), existing methods have investigated advanced techniques, such as positional encoding and periodic activation functions, to improve the accuracy of INR. In this paper, we connect the network training bias with the reparameterization technique and theoretically prove that weight reparameterization can provide a chance to alleviate the spectral bias of MLPs. Based on our theoretical analysis, we propose a Fourier reparameterization method which learns a coefficient matrix over fixed Fourier bases to compose the weights of the MLP. We evaluate the proposed Fourier reparameterization method on different INR tasks with various MLP architectures, including the vanilla MLP, MLP with positional encoding, and MLP with advanced activation functions. The superior approximation results across these architectures clearly validate the advantage of the proposed method. Armed with our Fourier reparameterization method, better INRs with more texture and fewer artifacts can be learned from the training data.
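The composition is easy to sketch: rather than learning a weight matrix W directly, the layer learns a coefficient matrix Λ over a fixed bank of Fourier bases B and forms W = ΛB. Below is a minimal PyTorch sketch of this reparameterization; the basis construction (frequencies, phases, and `n_bases`) is our own illustrative choice, not necessarily the paper's exact recipe.

```python
import math
import torch
import torch.nn as nn

class FourierReparamLinear(nn.Module):
    """Linear layer whose weight is composed as W = coeff @ B, where B is a
    fixed bank of Fourier bases and coeff is learned. The basis construction
    here is illustrative, not the paper's exact recipe."""

    def __init__(self, in_features: int, out_features: int, n_bases: int = 64):
        super().__init__()
        # Fixed Fourier bases: each row samples a cosine of one frequency/phase
        # over the input dimension.
        t = torch.linspace(0.0, 1.0, in_features)            # sample grid
        freqs = torch.arange(1, n_bases + 1).float()         # 1..n_bases cycles
        phases = torch.rand(n_bases, 1) * 2 * math.pi        # random phases
        B = torch.cos(2 * math.pi * freqs[:, None] * t[None, :] + phases)
        self.register_buffer("B", B)                         # (n_bases, in_features), frozen
        # Learnable coefficient matrix and bias.
        self.coeff = nn.Parameter(torch.randn(out_features, n_bases) / math.sqrt(n_bases))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        W = self.coeff @ self.B                              # compose the weight on the fly
        return torch.nn.functional.linear(x, W, self.bias)

# Drop-in usage inside a vanilla INR MLP, e.g. fitting an RGB image:
mlp = nn.Sequential(
    FourierReparamLinear(2, 256), nn.ReLU(),
    FourierReparamLinear(256, 256), nn.ReLU(),
    FourierReparamLinear(256, 3),
)
xy = torch.rand(1024, 2)        # pixel coordinates
rgb = mlp(xy)                   # predicted colors, shape (1024, 3)
```

Because B is fixed, ΛB can be computed once after training and folded into an ordinary linear layer, so the reparameterization adds no inference cost.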
Related papers
- Inductive Gradient Adjustment For Spectral Bias In Implicit Neural Representations [17.832898905413877]
Implicit Neural Representations (INRs) have achieved success in various computer vision tasks.
Due to the spectral bias of vanilla multi-layer perceptrons (MLPs), existing methods focus on designing sophisticated architectures or repurposing training techniques to obtain highly accurate INRs.
We propose a practical inductive gradient adjustment method, which can purposefully mitigate the spectral bias via inductive generalization of an eNTK-based gradient transformation matrix.
arXiv Detail & Related papers (2024-10-17T06:51:10Z)
- Leveraging FourierKAN Classification Head for Pre-Trained Transformer-based Text Classification [0.51795041186793]
We introduce FR-KAN, a variant of the promising alternative Kolmogorov-Arnold Networks (KANs), as classification heads for transformer-based encoders (a minimal sketch of the idea follows this entry).
Our studies reveal an average increase of 10% in accuracy and 11% in F1-score when FR-KAN heads replace traditional MLP heads on transformer-based pre-trained models.
arXiv Detail & Related papers (2024-08-16T15:28:02Z)
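For intuition, a Fourier KAN head replaces the dense classification layer with a learned univariate Fourier series per input-output pair. The following is a minimal sketch of that idea under our own assumptions (the `gridsize`, initialization scale, and layer sizes are illustrative; this is not the FR-KAN authors' code):

```python
import torch
import torch.nn as nn

class FourierKANLayer(nn.Module):
    """Maps inputs through learned univariate Fourier series:
    y_o = b_o + sum_i sum_k a[o,i,k]*cos(k*x_i) + c[o,i,k]*sin(k*x_i).
    A minimal sketch of the Fourier-series KAN idea, not the authors' code."""

    def __init__(self, in_dim: int, out_dim: int, gridsize: int = 8):
        super().__init__()
        self.register_buffer("k", torch.arange(1, gridsize + 1).float())
        scale = 1.0 / (in_dim * gridsize) ** 0.5
        self.cos_coeff = nn.Parameter(torch.randn(out_dim, in_dim, gridsize) * scale)
        self.sin_coeff = nn.Parameter(torch.randn(out_dim, in_dim, gridsize) * scale)
        self.bias = nn.Parameter(torch.zeros(out_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_dim) -> angles: (batch, in_dim, gridsize)
        angles = x[..., None] * self.k
        cos_t, sin_t = torch.cos(angles), torch.sin(angles)
        y = torch.einsum("big,oig->bo", cos_t, self.cos_coeff)
        y = y + torch.einsum("big,oig->bo", sin_t, self.sin_coeff)
        return y + self.bias

# E.g., as a head on top of a 768-d transformer [CLS] embedding:
head = FourierKANLayer(768, 2)          # binary classification
logits = head(torch.randn(4, 768))      # (4, 2)
```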
- Nonparametric Teaching of Implicit Neural Representations [21.313485818701434]
We show for the first time that training an overparameterized multilayer perceptron is consistent with teaching a nonparametric learner.
This new discovery permits a convenient drop-in of nonparametric teaching algorithms to broadly enhance INR training efficiency, demonstrating 30%+ training time savings across various input modalities.
arXiv Detail & Related papers (2024-05-17T04:20:39Z)
- On Optimal Sampling for Learning SDF Using MLPs Equipped with Positional Encoding [79.67071790034609]
We devise a tool to determine the appropriate sampling rate for learning an accurate neural implicit field without undesirable side effects.
It is observed that a PE-equipped MLP has an intrinsic frequency much higher than the highest frequency component in the PE layer.
We empirically show that, in the setting of SDF fitting, this recommended sampling rate is sufficient to secure accurate fitting results.
arXiv Detail & Related papers (2024-01-02T10:51:52Z)
- Parameter and Computation Efficient Transfer Learning for Vision-Language Pre-trained Models [79.34513906324727]
In this paper, we aim at parameter and computation efficient transfer learning (PCETL) for vision-language pre-trained models.
We propose a novel dynamic architecture skipping (DAS) approach towards effective PCETL.
arXiv Detail & Related papers (2023-09-04T09:34:33Z)
- NTK-approximating MLP Fusion for Efficient Language Model Fine-tuning [40.994306592119266]
Fine-tuning a pre-trained language model (PLM) emerges as the predominant strategy in many natural language processing applications.
Some general approaches (e.g., quantization and distillation) have been widely studied to reduce the compute and memory cost of PLM fine-tuning.
We propose to construct a lightweight PLM through NTK-approximating MLP fusion.
arXiv Detail & Related papers (2023-07-18T03:12:51Z)
- Sigma-Delta and Distributed Noise-Shaping Quantization Methods for Random Fourier Features [73.25551965751603]
We prove that our quantized RFFs allow a high-accuracy approximation of the underlying kernels (a plain-RFF sketch follows this entry).
We show that the quantized RFFs can be further compressed, yielding an excellent trade-off between memory use and accuracy.
We empirically show, by testing our methods on several machine learning tasks, that our method compares favorably to other state-of-the-art quantization methods in this context.
arXiv Detail & Related papers (2021-06-04T17:24:47Z)
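For context, plain (unquantized) random Fourier features approximate a shift-invariant kernel as an inner product of cosine features; the paper then quantizes those features. The sketch below uses a deliberately naive uniform rounding as a stand-in quantizer; the paper's actual Sigma-Delta and noise-shaping schemes are more sophisticated than this placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, gamma = 16, 2048, 0.5                # input dim, feature dim, kernel width

# Random Fourier features for the Gaussian kernel k(x,y)=exp(-gamma*||x-y||^2):
# frequencies are drawn as W ~ N(0, 2*gamma*I).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
b = rng.uniform(0, 2 * np.pi, size=D)

def rff(x):
    return np.sqrt(2.0 / D) * np.cos(x @ W + b)

def quantize(z, bits=4):
    # Naive uniform rounding: a stand-in, NOT the paper's Sigma-Delta scheme.
    levels = 2 ** bits - 1
    lo, hi = -np.sqrt(2.0 / D), np.sqrt(2.0 / D)
    return np.round((z - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

x = rng.normal(size=d)
y = x + 0.1 * rng.normal(size=d)           # a nearby point, so the kernel is non-negligible
exact = np.exp(-gamma * np.sum((x - y) ** 2))
approx = rff(x) @ rff(y)
approx_q = quantize(rff(x)) @ quantize(rff(y))
print(exact, approx, approx_q)             # the three values should be close
```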
- Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting, and of both compression and improved training stability in the generative adversarial training setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z)
- Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains [69.62456877209304]
We show that passing input points through a simple Fourier feature mapping enables a multilayer perceptron to learn high-frequency functions (a minimal sketch of the mapping follows this entry).
These results shed light on recent advances in computer vision and graphics that achieve state-of-the-art results.
arXiv Detail & Related papers (2020-06-18T17:59:11Z)
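The mapping itself is one line: γ(v) = [cos(2πBv), sin(2πBv)] with B sampled from a Gaussian whose standard deviation controls the bandwidth. A minimal NumPy sketch (the scale of 10 and the feature count are illustrative and normally tuned per task):

```python
import numpy as np

rng = np.random.default_rng(0)
scale, m = 10.0, 256                       # bandwidth scale and number of frequencies
B = rng.normal(scale=scale, size=(m, 2))   # random frequencies for 2-D coordinates

def fourier_features(v):
    """Map coordinates v of shape (n, 2) to (n, 2m) Fourier features for the MLP."""
    proj = 2 * np.pi * v @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

coords = rng.uniform(size=(1024, 2))       # e.g. normalized pixel coordinates
feats = fourier_features(coords)           # (1024, 512); train the MLP on these
```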
- Learning to Learn Kernels with Variational Random Features [118.09565227041844]
We introduce kernels with random Fourier features in the meta-learning framework (MetaVRF) to leverage their strong few-shot learning ability.
We formulate the optimization of MetaVRF as a variational inference problem.
We show that MetaVRF delivers much better, or at least competitive, performance compared to existing meta-learning alternatives.
arXiv Detail & Related papers (2020-06-11T18:05:29Z)