Weights initialization of neural networks for function approximation
- URL: http://arxiv.org/abs/2510.08780v1
- Date: Thu, 09 Oct 2025 19:56:26 GMT
- Title: Weights initialization of neural networks for function approximation
- Authors: Xinwen Hu, Yunqing Huang, Nianyu Yi, Peimeng Yin
- Abstract summary: Neural network-based function approximation plays a pivotal role in the advancement of scientific computing and machine learning. We propose a reusable initialization framework based on basis function pretraining. In this approach, basis neural networks are first trained to approximate families of polynomials on a reference domain. Their learned parameters are then used to initialize networks for more complex target functions.
- Score: 0.9099663022952497
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural network-based function approximation plays a pivotal role in the advancement of scientific computing and machine learning. Yet, training such models faces several challenges: (i) each target function often requires training a new model from scratch; (ii) performance is highly sensitive to architectural and hyperparameter choices; and (iii) models frequently generalize poorly beyond the training domain. To overcome these challenges, we propose a reusable initialization framework based on basis function pretraining. In this approach, basis neural networks are first trained to approximate families of polynomials on a reference domain. Their learned parameters are then used to initialize networks for more complex target functions. To enhance adaptability across arbitrary domains, we further introduce a domain mapping mechanism that transforms inputs into the reference domain, thereby preserving structural correspondence with the pretrained models. Extensive numerical experiments in one- and two-dimensional settings demonstrate substantial improvements in training efficiency, generalization, and model transferability, highlighting the promise of initialization-based strategies for scalable and modular neural function approximation. The full code is made publicly available on Gitee.
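For concreteness, the following is a minimal PyTorch sketch of the two ingredients above: pretraining a basis network on a polynomial over a reference domain, then reusing its weights (through an affine domain mapping) to warm-start training on a new target. All class and function names, architectures, and hyperparameters here are illustrative assumptions, not taken from the authors' released code.

```python
import torch
import torch.nn as nn

class BasisNet(nn.Module):
    """Small MLP used both for basis pretraining and for target functions."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):
        return self.net(x)

def to_reference(x, a, b):
    """Affine map from an arbitrary interval [a, b] to the reference [-1, 1]."""
    return 2.0 * (x - a) / (b - a) - 1.0

def pretrain_on_basis(degree=3, steps=2000):
    """Fit the monomial x**degree on the reference domain [-1, 1]."""
    model = BasisNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.linspace(-1, 1, 256).unsqueeze(1)
    y = x ** degree
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return model

# Pretrain once on the reference domain, then reuse as initialization.
basis = pretrain_on_basis(degree=3)
target = BasisNet()
target.load_state_dict(basis.state_dict())  # warm start instead of random init

# Fine-tune on a harder target defined on [2, 5]; inputs are first mapped
# to the reference domain so they match the pretrained weight structure.
a, b = 2.0, 5.0
x_new = torch.linspace(a, b, 256).unsqueeze(1)
y_new = torch.sin(3.0 * x_new)
opt = torch.optim.Adam(target.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(target(to_reference(x_new, a, b)), y_new)
    loss.backward()
    opt.step()
```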
Related papers
- A Theory of How Pretraining Shapes Inductive Bias in Fine-Tuning [51.505728136705564]
We develop an analytical theory of the pretraining-fine-tuning pipeline in diagonal linear networks. We find that different initialization choices place the network into four distinct fine-tuning regimes. A smaller scale in earlier layers enables the network to both reuse and refine its features, leading to superior generalization.
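For reference, a diagonal linear network is conventionally written as below (the standard form from the literature; the paper's exact parameterization and layer count may differ), with the initialization scale $\alpha$ controlling the transition between lazy, kernel-like training and rich feature learning:

```latex
f_{\theta}(x) \;=\; \sum_{i=1}^{d} u_i \, v_i \, x_i ,
\qquad u_i(0) = v_i(0) = \alpha .
```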
arXiv Detail & Related papers (2026-02-23T17:19:33Z) - Improving Set Function Approximation with Quasi-Arithmetic Neural Networks [23.73257235603082]
We propose quasi-arithmetic neural networks (QUANNs) for learning set functions. We provide a theoretical analysis showing that QUANNs are universal approximators for a broad class of common set-function decompositions.
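The classical quasi-arithmetic mean behind the name is $M_f(x_1,\dots,x_n) = f^{-1}\big(\tfrac{1}{n}\sum_i f(x_i)\big)$. Below is a minimal sketch of a permutation-invariant pooling layer built on one such mean, here a learnable power mean with $f(t) = t^p$; the actual QUANN construction may differ.

```python
import torch
import torch.nn as nn

class QuasiArithmeticPool(nn.Module):
    """Permutation-invariant pooling via a quasi-arithmetic mean
    M_f(x) = f^{-1}(mean_i f(x_i)) with f(t) = t**p and a learnable
    exponent p. p = 1 recovers the arithmetic mean, and large p
    approaches max, so the layer interpolates between aggregators."""
    def __init__(self):
        super().__init__()
        self.log_p = nn.Parameter(torch.zeros(1))  # p = exp(log_p) > 0

    def forward(self, x):          # x: (batch, set_size, features), x > 0
        p = self.log_p.exp()
        return x.pow(p).mean(dim=1).pow(1.0 / p)

pool = QuasiArithmeticPool()
x = torch.rand(8, 10, 16) + 0.1   # positive features for the power mean
print(pool(x).shape)              # torch.Size([8, 16])
```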
arXiv Detail & Related papers (2026-02-04T18:36:31Z) - Function regression using the forward forward training and inferring paradigm [0.0]
The Forward-Forward learning algorithm is a novel approach for training neural networks without backpropagation. This paper introduces a new methodology for approximating functions (function regression) using the Forward-Forward algorithm.
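For background, here is a minimal sketch of one locally trained Forward-Forward layer in PyTorch, using the standard goodness-based formulation; how the paper adapts positive/negative data to regression targets is its contribution and is not reproduced here.

```python
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    """One Forward-Forward layer: trained locally, with no gradients
    flowing through the rest of the network."""
    def __init__(self, d_in, d_out, threshold=2.0, lr=1e-3):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x):
        # Normalize so only the direction of the input carries information.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return torch.relu(self.linear(x))

    def train_step(self, x_pos, x_neg):
        # Goodness = mean squared activation; push positive samples above
        # the threshold and negative samples below it.
        g_pos = self.forward(x_pos).pow(2).mean(dim=1)
        g_neg = self.forward(x_neg).pow(2).mean(dim=1)
        loss = (torch.nn.functional.softplus(self.threshold - g_pos).mean()
                + torch.nn.functional.softplus(g_neg - self.threshold).mean())
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Pass detached activations on to the next layer.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```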
arXiv Detail & Related papers (2025-10-08T08:41:14Z) - Neural Network Reprogrammability: A Unified Theme on Model Reprogramming, Prompt Tuning, and Prompt Instruction [55.914891182214475]
We introduce neural network reprogrammability as a unifying framework for model adaptation. We present a taxonomy that categorizes such information-manipulation approaches across four key dimensions. We also analyze remaining technical challenges and ethical considerations.
arXiv Detail & Related papers (2025-06-05T05:42:27Z) - Compelling ReLU Networks to Exhibit Exponentially Many Linear Regions at Initialization and During Training [1.7205106391379021]
In a neural network with ReLU activations, the number of piecewise linear regions in the output can grow exponentially with depth. We introduce a novel parameterization of network depth, $d$, that leads to a multidimensional training approach.
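The exponential growth itself is easy to see in one dimension: composing the two-piece ReLU "tent" map doubles the number of linear pieces at each layer. A short self-contained check (illustrative, not from the paper):

```python
import numpy as np

# The tent map t(x) = 2*relu(x) - 4*relu(x - 0.5) has 2 linear pieces on
# [0, 1]; composing it k times yields a sawtooth with 2**k pieces, i.e.
# exponentially many linear regions in the depth.

def tent(x):
    return 2 * np.maximum(x, 0) - 4 * np.maximum(x - 0.5, 0)

x = np.linspace(0, 1, 1025)   # dyadic grid aligned with the breakpoints
y = x
for depth in range(1, 6):
    y = tent(y)
    slopes = np.round(np.diff(y) / np.diff(x), 3)
    regions = 1 + np.count_nonzero(np.diff(slopes))
    print(f"depth={depth}: {regions} linear regions")   # 2, 4, 8, 16, 32
```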
arXiv Detail & Related papers (2023-11-29T19:09:48Z) - The limitation of neural nets for approximation and optimization [0.0]
We are interested in assessing the use of neural networks as surrogate models to approximate and minimize objective functions in optimization problems.
Our study begins by determining the best activation function for approximating the objective functions of popular nonlinear optimization test problems.
arXiv Detail & Related papers (2023-11-21T00:21:15Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural-network-like predictors. LFP decomposes a reward to individual neurons based on their respective contributions. Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
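As a rough schematic of the reward-decomposition idea, in the spirit of layer-wise relevance decomposition but not the paper's exact rule, consider a two-layer network where a scalar reward is split over hidden neurons by contribution:

```python
import numpy as np

# Toy illustration only: a scalar reward at the output is decomposed over
# neurons in proportion to their contribution, and each weight is nudged
# by its neuron's share of the reward.
rng = np.random.default_rng(0)
W1, w2 = 0.5 * rng.normal(size=(4, 3)), 0.5 * rng.normal(size=4)

x = rng.normal(size=3)
h = np.maximum(W1 @ x, 0.0)          # hidden ReLU activations
y = w2 @ h                           # scalar output
reward = 1.0                         # external scalar feedback for y

contrib = w2 * h                     # each neuron's contribution to y
relevance = reward * contrib / (np.abs(contrib).sum() + 1e-8)

# Reward-modulated local update: strengthen weights feeding neurons with
# positive relevance, weaken those with negative relevance.
lr = 0.1
W1 += lr * np.outer(relevance, x)
w2 += lr * relevance * h
```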
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - Composable Function-preserving Expansions for Transformer Architectures [2.579908688646812]
Training state-of-the-art neural networks requires a high cost in terms of compute and time.
We propose six composable transformations to incrementally increase the size of transformer-based neural networks.
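The notion of a function-preserving expansion can be illustrated with the classic width-doubling trick for an MLP (in the style of Net2WiderNet; the paper's six transformer-specific transformations are more elaborate, but they preserve the same invariant: the expanded network computes exactly the same function as the original).

```python
import torch
import torch.nn as nn

def widen(l1: nn.Linear, l2: nn.Linear):
    """Double the hidden width between two linear layers without changing
    the composed function: duplicate each hidden unit and halve the
    outgoing weights so the duplicated contributions sum to the original."""
    new1 = nn.Linear(l1.in_features, 2 * l1.out_features)
    new2 = nn.Linear(2 * l2.in_features, l2.out_features)
    with torch.no_grad():
        new1.weight.copy_(l1.weight.repeat(2, 1))
        new1.bias.copy_(l1.bias.repeat(2))
        new2.weight.copy_(0.5 * l2.weight.repeat(1, 2))
        new2.bias.copy_(l2.bias)
    return new1, new2

l1, l2 = nn.Linear(8, 16), nn.Linear(16, 4)
w1, w2 = widen(l1, l2)
x = torch.randn(5, 8)
small = l2(torch.relu(l1(x)))
big = w2(torch.relu(w1(x)))     # duplication commutes with elementwise ReLU
print(torch.allclose(small, big, atol=1e-6))  # True
```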
arXiv Detail & Related papers (2023-08-11T12:27:22Z) - Learning to Augment via Implicit Differentiation for Domain Generalization [107.9666735637355]
Domain generalization (DG) aims to overcome the domain-shift problem by leveraging multiple source domains to learn a domain-generalizable model.
In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn.
AugLearn shows effectiveness on three standard DG benchmarks, PACS, Office-Home and Digits-DG.
arXiv Detail & Related papers (2022-10-25T18:51:51Z) - Acceleration techniques for optimization over trained neural network ensembles [1.0323063834827415]
We study optimization problems where the objective function is modeled through feedforward neural networks with rectified linear unit activation.
We present a mixed-integer linear program based on existing popular big-$M$ formulations for optimizing over a single neural network.
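For context, the standard big-$M$ (bound-based) encoding of a single ReLU unit $y = \max(0, w^{\top}x + b)$, on which such formulations are built, introduces a binary variable $z$ and assumes known pre-activation bounds $L \le w^{\top}x + b \le U$ (obtained, e.g., by interval propagation):

```latex
\begin{aligned}
& y \ge w^{\top}x + b, \qquad y \ge 0, \\
& y \le w^{\top}x + b - L\,(1 - z), \qquad y \le U z, \qquad z \in \{0, 1\}.
\end{aligned}
```

Setting $z = 1$ forces $y = w^{\top}x + b$ (the active branch), while $z = 0$ forces $y = 0$ (the inactive branch).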
arXiv Detail & Related papers (2021-12-13T20:50:54Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
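For intuition, the classical mean-field representation of a two-layer network, which this line of work generalizes to deep and residual architectures, replaces the finite sum over neurons with an integral against a probability measure $\mu$:

```latex
f(x) \;=\; \int a \,\sigma\!\big(\langle w, x \rangle\big)\, d\mu(a, w).
```

Training then corresponds to an evolution of the measure $\mu$ itself rather than of individual weights.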
arXiv Detail & Related papers (2020-07-03T01:37:16Z)