Orthogonal greedy algorithm for linear operator learning with shallow neural network
- URL: http://arxiv.org/abs/2501.02791v1
- Date: Mon, 06 Jan 2025 06:18:38 GMT
- Title: Orthogonal greedy algorithm for linear operator learning with shallow neural network
- Authors: Ye Lin, Jiwei Jia, Young Ju Lee, Ran Zhang
- Abstract summary: We extend the application of greedy algorithms to the tasks of linear operator learning.
A novel greedy algorithm is developed for kernel estimation in a new semi-inner product, which can be used to approximate the Green's function of linear PDEs from data.
We introduce the OGA for point-wise kernel estimation to further improve the approximation rate.
- Score: 7.9228092825552
- License:
- Abstract: Greedy algorithms, particularly the orthogonal greedy algorithm (OGA), have proven effective in training shallow neural networks for fitting functions and solving partial differential equations (PDEs). In this paper, we extend the application of OGA to the tasks of linear operator learning, which is equivalent to learning the kernel function through integral transforms. Firstly, a novel greedy algorithm is developed for kernel estimation rate in a new semi-inner product, which can be utilized to approximate the Green's function of linear PDEs from data. Secondly, we introduce the OGA for point-wise kernel estimation to further improve the approximation rate, achieving orders of accuracy improvement across various tasks and baseline models. In addition, we provide a theoretical analysis on the kernel estimation problem and the optimal approximation rates for both algorithms, establishing their efficacy and potential for future applications in PDEs and operator learning tasks.
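As a point of reference for how such a greedy loop is organized, below is a minimal NumPy sketch of OGA for plain function fitting with a shallow ReLU network. The random candidate pool and the empirical L2 inner product are simplifying assumptions; the paper's operator-learning variant instead selects neurons in the semi-inner product induced by the integral transform.
```python
# Minimal sketch of the orthogonal greedy algorithm (OGA) for fitting a 1D
# function with a shallow ReLU network. Assumptions: a random candidate pool
# approximates the argmax over the dictionary, and selection uses the
# empirical L2 inner product rather than the paper's semi-inner product.
import numpy as np

def oga_fit(x, y, n_neurons=20, n_candidates=2000, seed=0):
    rng = np.random.default_rng(seed)
    residual, W, B, coef = y.copy(), [], [], None
    for _ in range(n_neurons):
        # Score candidate neurons g(x) = relu(w*x + b) against the residual.
        w = rng.normal(size=n_candidates)
        b = rng.uniform(-1.0, 1.0, size=n_candidates)
        G = np.maximum(w[None, :] * x[:, None] + b[None, :], 0.0)
        scores = np.abs(G.T @ residual) / (np.linalg.norm(G, axis=0) + 1e-12)
        k = int(np.argmax(scores))
        W.append(w[k]); B.append(b[k])
        # Orthogonal step: re-project y onto the span of ALL selected neurons.
        A = np.maximum(np.outer(x, W) + np.array(B), 0.0)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coef
    return np.array(W), np.array(B), coef

x = np.linspace(-1.0, 1.0, 400)
y = np.sin(np.pi * x)
W, B, coef = oga_fit(x, y)
print("L2 error:", np.linalg.norm(y - np.maximum(np.outer(x, W) + B, 0.0) @ coef))
```
The least-squares re-projection onto all previously selected neurons is what makes the algorithm "orthogonal", and it is the step behind the improved approximation rates over the pure greedy algorithm.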
Related papers
- Towards Gaussian Process for operator learning: an uncertainty aware resolution independent operator learning algorithm for computational mechanics [8.528817025440746]
This paper introduces a novel Gaussian Process (GP) based neural operator for solving parametric differential equations.
We propose a "neural operator-embedded kernel", wherein the GP kernel is formulated in the latent space learned using a neural operator.
Our results highlight the efficacy of this framework in solving complex PDEs while maintaining robustness in uncertainty estimation (see the sketch below).
arXiv Detail & Related papers (2024-09-17T08:12:38Z)
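A hedged sketch of the latent-space kernel idea from the entry above: GP regression where an RBF kernel is evaluated on encodings phi(u) of the discretized input functions. The fixed random encoder below stands in for the trained neural operator, which is an assumption; the paper learns this latent space.
```python
# GP regression with a kernel evaluated in a learned latent space. The
# random linear map P is a placeholder for the neural-operator encoder.
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(64, 8)) / 8.0                   # placeholder encoder R^64 -> R^8
phi = lambda U: np.tanh(U @ P)                       # latent features of input functions

def latent_rbf(U, V, ell=1.0):
    A, B = phi(U), phi(V)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

U_train = rng.normal(size=(30, 64))                  # discretized input functions
y_train = np.sin(U_train.sum(axis=1) / 8.0)          # synthetic scalar observables
U_test = rng.normal(size=(5, 64))

K = latent_rbf(U_train, U_train) + 1e-6 * np.eye(30)
k_star = latent_rbf(U_test, U_train)
mean = k_star @ np.linalg.solve(K, y_train)          # GP predictive mean
var = 1.0 - np.einsum('ij,ij->i', k_star, np.linalg.solve(K, k_star.T).T)
print(mean, var)                                     # prediction with uncertainty
```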
- Convergence of Implicit Gradient Descent for Training Two-Layer Physics-Informed Neural Networks [3.680127959836384]
Implicit gradient descent (IGD) outperforms the common gradient descent (GD) in handling certain multi-scale problems.
We show that IGD converges to a globally optimal solution at a linear convergence rate (see the sketch below).
arXiv Detail & Related papers (2024-07-03T06:10:41Z)
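To make the implicit update concrete, the step theta_{k+1} = theta_k - eta * grad L(theta_{k+1}) has a closed form for a quadratic loss, which shows why IGD tolerates step sizes that break explicit GD on stiff problems. The quadratic below is an illustrative stand-in for the two-layer PINN loss analyzed in the paper; a general loss would need a nonlinear solve per step.
```python
# Implicit gradient descent on L(theta) = 0.5 * theta^T A theta:
#   theta_{k+1} = theta_k - eta * A @ theta_{k+1}  =>  (I + eta*A) theta_{k+1} = theta_k.
import numpy as np

A = np.diag([100.0, 1.0])        # curvatures on two very different scales
theta = np.array([1.0, 1.0])
eta = 0.5                        # explicit GD diverges here (needs eta < 2/100)
for _ in range(20):
    theta = np.linalg.solve(np.eye(2) + eta * A, theta)   # implicit (backward-Euler) step
print(theta)                     # contracts toward the minimizer 0 on both scales
```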
- A Deep-Genetic Algorithm (Deep-GA) Approach for High-Dimensional Nonlinear Parabolic Partial Differential Equations [0.0]
We propose a new method, called the deep-genetic algorithm (Deep-GA), to accelerate the performance of the so-called deep-BSDE method.
Recognizing the sensitivity of the solver to the initial guess selection, we embed a genetic algorithm (GA) into the solver to optimize the selection.
We show that our method provides comparable accuracy with significantly improved computational efficiency (see the sketch below).
arXiv Detail & Related papers (2023-11-20T06:35:23Z)
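The GA wrapper is generic enough to sketch directly: evolve candidate initial guesses and keep the fittest. The quadratic `solver_loss` below is a placeholder assumption; in the paper, fitness would come from running the deep-BSDE solver from each candidate guess.
```python
# Genetic search over scalar initial guesses y0 for an inner solver.
import numpy as np

rng = np.random.default_rng(0)
solver_loss = lambda y0: (y0 - 3.7) ** 2      # placeholder for the deep-BSDE loss

def genetic_search(pop_size=20, n_gen=30, mut_scale=0.5):
    pop = rng.uniform(0.0, 10.0, size=pop_size)
    for _ in range(n_gen):
        fitness = np.array([solver_loss(y) for y in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]           # selection
        pa = rng.choice(parents, size=pop_size - parents.size)
        pb = rng.choice(parents, size=pop_size - parents.size)
        children = 0.5 * (pa + pb)                                    # crossover
        children += rng.normal(scale=mut_scale, size=children.size)   # mutation
        pop = np.concatenate([parents, children])
    return pop[np.argmin([solver_loss(y) for y in pop])]

print(genetic_search())   # near 3.7; would warm-start the deep-BSDE solver
```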
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on kernel learning and Bayesian spike-and-slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation (see the sketch below).
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
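A minimal sketch of the kernel-regression ingredient: fit u with RBF kernel ridge regression and read off u_x analytically by differentiating the kernel, so equation discovery can use derivative estimates from sparse, noisy data. The spike-and-slab prior and the EP-EM inference are the paper's contributions and are not shown here.
```python
# Kernel ridge regression with analytic derivatives of the RBF kernel.
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 25))           # sparse sample locations
y = np.sin(x) + 0.05 * rng.normal(size=x.size)       # noisy observations of u

ell, lam = 0.6, 1e-4
k   = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
kda = lambda a, b: -(a[:, None] - b[None, :]) / ell**2 * k(a, b)   # d/da of k

alpha = np.linalg.solve(k(x, x) + lam * np.eye(x.size), y)
xs = np.linspace(0, 2 * np.pi, 200)
u_hat  = k(xs, x) @ alpha      # estimate of u
ux_hat = kda(xs, x) @ alpha    # estimate of u_x without finite differences
print(np.max(np.abs(ux_hat - np.cos(xs))))   # derivative error (largest at the boundary)
```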
- Unfolded proximal neural networks for robust image Gaussian denoising [7.018591019975253]
We propose a unified framework to build PNNs for the Gaussian denoising task, based on both the dual-FB and the primal-dual Chambolle-Pock algorithms.
We also show that accelerated versions of these algorithms enable skip connections in the associated NN layers (see the sketch below).
arXiv Detail & Related papers (2023-08-06T15:32:16Z)
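A hedged sketch of the unrolling idea for the analysis-prior denoising problem min_x 0.5||x - y||^2 + lam*||Wx||_1: each "layer" is one projected-gradient step on the dual, and the denoised signal is recovered as x = y - W^T u. In a trained PNN the operator W and the step sizes would be learned per layer; the fixed finite-difference W below is an illustrative assumption.
```python
# Unrolled dual forward-backward iterations for 1D total-variation denoising.
import numpy as np

n, lam, tau = 100, 0.8, 0.25                     # tau <= 1/||W||^2 for this W
W = (np.eye(n, k=1) - np.eye(n))[:-1]            # discrete gradient operator
rng = np.random.default_rng(0)
clean = np.repeat([0.0, 2.0, 1.0, 3.0], 25)      # piecewise-constant signal
y = clean + 0.3 * rng.normal(size=n)

u = np.zeros(n - 1)
for _ in range(50):                              # 50 unrolled "layers"
    u = np.clip(u + tau * (W @ (y - W.T @ u)), -lam, lam)   # dual FB step
x = y - W.T @ u                                  # primal recovery
print("denoised MSE:", np.mean((x - clean) ** 2), "noisy MSE:", np.mean((y - clean) ** 2))
```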
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility (see the sketch below).
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
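A minimal sketch of a linearized-Laplace surrogate restricted to the network's last layer, where it reduces to Bayesian linear regression over features phi(x). The random-feature backbone and the UCB acquisition below are assumptions standing in for a trained network and the paper's experimental setup.
```python
# Last-layer Laplace surrogate driving UCB Bayesian optimization in 1D.
import numpy as np

rng = np.random.default_rng(0)
Wf, bf = rng.normal(size=(1, 50)), rng.uniform(-2, 2, 50)
phi = lambda x: np.tanh(x[:, None] * Wf + bf)        # frozen "backbone" features
f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x        # black-box objective
X = np.array([-0.9, 1.1]); Y = f(X)                  # initial design
sig2, alpha = 1e-3, 1.0

for _ in range(10):
    Phi = phi(X)
    Sigma = np.linalg.inv(Phi.T @ Phi / sig2 + np.eye(50) / alpha)  # Laplace covariance
    mu_w = Sigma @ Phi.T @ Y / sig2                                 # MAP last-layer weights
    xs = np.linspace(-2, 2, 400); Ps = phi(xs)
    mean = Ps @ mu_w
    std = np.sqrt(np.einsum('ij,jk,ik->i', Ps, Sigma, Ps) + sig2)   # predictive std
    x_next = xs[np.argmax(mean + 2.0 * std)]                        # UCB acquisition
    X, Y = np.append(X, x_next), np.append(Y, f(x_next))

print("best x:", X[np.argmax(Y)], "best f:", Y.max())
```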
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions is a core subroutine in various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy (see the sketch below).
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
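For context, below is a minimal sketch of the entropy-regularized OT objective the paper targets, solved with classical Sinkhorn scaling. Note that this is the standard alternating-scaling baseline, not the paper's extragradient method.
```python
# Entropy-regularized optimal transport between two 1D point clouds (Sinkhorn).
import numpy as np

rng = np.random.default_rng(0)
n, eps = 50, 0.05
x, y = np.sort(rng.uniform(size=n)), np.sort(rng.uniform(size=n))
C = (x[:, None] - y[None, :]) ** 2                   # squared-distance cost matrix
mu = nu = np.full(n, 1.0 / n)                        # uniform marginals

K = np.exp(-C / eps)                                 # Gibbs kernel
u = np.ones(n)
for _ in range(500):                                 # alternating marginal scaling
    v = nu / (K.T @ u)
    u = mu / (K @ v)
P = u[:, None] * K * v[None, :]                      # feasible transport plan
print("regularized OT cost:", float(np.sum(P * C)))
```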
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN) based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions achieving comparable error bounds, both in theory and in practice (see the sketch below).
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
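A naive Monte-Carlo feature map makes the target object concrete: with the output weights held fixed, the first-layer NTK of a one-hidden-layer ReLU network is k(x, x') = <x, x'> * E_w[1{w.x > 0} 1{w.x' > 0}], which the features below estimate unbiasedly and which has a closed form through the order-0 arc-cosine kernel. The paper's contribution is a much smaller feature construction; this block only illustrates what is being approximated.
```python
# Monte-Carlo random features for the first-layer ReLU NTK, checked
# against the arc-cosine closed form <x,x'> * (pi - theta) / (2*pi).
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 4000
W = rng.normal(size=(m, d))

def ntk_features(X):
    gates = (X @ W.T > 0).astype(float)              # ReLU activation pattern, (n, m)
    # phi(x) stacks x * 1{w_i . x > 0} over neurons i, scaled by 1/sqrt(m).
    return (gates[:, :, None] * X[:, None, :]).reshape(len(X), m * d) / np.sqrt(m)

X = rng.normal(size=(3, d))
Phi = ntk_features(X)
approx = Phi @ Phi.T

nrm = np.linalg.norm(X, axis=1)
theta = np.arccos(np.clip((X @ X.T) / np.outer(nrm, nrm), -1.0, 1.0))
exact = (X @ X.T) * (np.pi - theta) / (2 * np.pi)
print(np.max(np.abs(approx - exact)))                # small Monte-Carlo error
```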
- Channel Assignment in Uplink Wireless Communication using Machine Learning Approach [54.012791474906514]
This letter investigates a channel assignment problem in uplink wireless communication systems.
Our goal is to maximize the sum rate of all users subject to integer channel assignment constraints.
Due to the high computational complexity, machine learning approaches are employed to obtain computationally efficient solutions (see the sketch below).
arXiv Detail & Related papers (2020-01-12T15:54:20Z)
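The underlying combinatorial problem is simple to state in code: assign each user one channel so as to maximize the sum rate sum_i log2(1 + SNR_i). The brute force below, with random per-pair SNRs and a one-user-per-channel rule as illustrative assumptions, is the kind of exact-but-slow oracle the learned model is meant to replace at scale.
```python
# Exhaustive search for the sum-rate-optimal one-to-one channel assignment.
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
n = 5                                                # users == channels (assumption)
snr = rng.exponential(scale=10.0, size=(n, n))       # per (user, channel) SNR

best_rate, best_assign = -np.inf, None
for assign in permutations(range(n)):                # integer assignment constraint
    rate = sum(np.log2(1.0 + snr[i, c]) for i, c in enumerate(assign))
    if rate > best_rate:
        best_rate, best_assign = rate, assign
print("optimal assignment:", best_assign, "sum rate:", best_rate)
```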
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.