Fourier-domain Variational Formulation and Its Well-posedness for Supervised Learning
- URL: http://arxiv.org/abs/2012.03238v1
- Date: Sun, 6 Dec 2020 11:19:50 GMT
- Title: Fourier-domain Variational Formulation and Its Well-posedness for Supervised Learning
- Authors: Tao Luo and Zheng Ma and Zhiwei Wang and Zhi-Qin John Xu and Yaoyu Zhang
- Abstract summary: A supervised learning problem is to find a function in a hypothesis function space given values on isolated data points.
Inspired by the frequency principle in neural networks, we propose a Fourier-domain variational formulation for the supervised learning problem.
- Score: 7.456846081669551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A supervised learning problem is to find a function in a hypothesis function space given values on isolated data points. Inspired by the frequency principle in neural networks, we propose a Fourier-domain variational formulation for the supervised learning problem. This formulation circumvents the difficulty of imposing the constraints of given values on isolated data points in continuum modelling. Under a necessary and sufficient condition within our unified framework, we establish the well-posedness of the Fourier-domain variational problem by identifying a critical exponent that depends on the data dimension. In practice, a neural network can be a convenient way to implement our formulation, and such an implementation automatically satisfies the well-posedness condition.
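To make the abstract's formulation concrete, the sketch below writes out a generic Fourier-domain variational problem of the kind described; the weight w(xi), its power-law form, and the stated threshold are illustrative assumptions rather than the paper's exact choices.

```latex
% Schematic Fourier-domain variational problem for data (x_i, y_i), i = 1, ..., n:
% minimize a frequency-weighted norm of h subject to exact interpolation.
\min_{h}\ \int_{\mathbb{R}^d} w(\xi)\,\bigl|\hat{h}(\xi)\bigr|^{2}\,\mathrm{d}\xi
\qquad \text{subject to}\qquad h(x_i) = y_i, \quad i = 1, \dots, n.
% Example weight (an assumption): w(\xi) = (1 + |\xi|)^{2\alpha}, which penalizes
% high frequencies in the spirit of the frequency principle. For this choice the
% point constraints are meaningful exactly when \alpha > d/2 (the Sobolev embedding
% H^{\alpha} \subset C), which illustrates the kind of dimension-dependent critical
% exponent the abstract refers to.
```

In a neural-network implementation, one natural (here assumed) route is to trade the hard interpolation constraints for a data-fitting loss plus a discretized version of this Fourier-domain penalty.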
Related papers
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Function Extrapolation with Neural Networks and Its Application for Manifolds [1.4579344926652844]
We train a neural network to incorporate prior knowledge of a function.
By carefully analyzing the problem, we obtain a bound on the error over the extrapolation domain.
arXiv Detail & Related papers (2024-05-17T06:15:26Z)
- Towards stable real-world equation discovery with assessing differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of their applicability to problems similar to real ones and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Fourier Sensitivity and Regularization of Computer Vision Models [11.79852671537969]
We study the frequency sensitivity characteristics of deep neural networks using a principled approach.
We find that computer vision models are consistently sensitive to particular frequencies dependent on the dataset, training method and architecture.
arXiv Detail & Related papers (2023-01-31T10:05:35Z)
- A cusp-capturing PINN for elliptic interface problems [0.0]
We introduce a cusp-enforced level set function as an additional feature input to the network to retain the inherent solution properties.
The proposed neural network has the advantage of being mesh-free, so it can easily handle problems in irregular domains.
We conduct a series of numerical experiments to demonstrate the effectiveness of the cusp-capturing technique and the accuracy of the present network model (a minimal sketch of the extra-feature idea appears after this list).
arXiv Detail & Related papers (2022-10-16T03:05:18Z)
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors (a minimal resampling sketch appears after this list).
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
- Physics-Informed Neural Networks for Quantum Eigenvalue Problems [1.2891210250935146]
Eigenvalue problems are critical to several fields of science and engineering.
We use unsupervised neural networks for discovering eigenfunctions and eigenvalues for differential eigenvalue problems.
The network optimization is data-free and depends solely on the predictions of the neural network.
arXiv Detail & Related papers (2022-02-24T18:29:39Z)
- Efficient Multidimensional Functional Data Analysis Using Marginal Product Basis Systems [2.4554686192257424]
We propose a framework for learning continuous representations from a sample of multidimensional functional data.
We show that the resulting estimation problem can be solved efficiently by tensor decomposition.
We conclude with a real data application in neuroimaging.
arXiv Detail & Related papers (2021-07-30T16:02:15Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand (a minimal control-variate sketch appears after this list).
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
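For the cusp-capturing PINN entry above, the sketch below illustrates the extra-feature idea in PyTorch: the network receives |phi(x)| as an additional input, where phi is a level set function of the interface, so a smooth network can express a solution with a derivative jump across the interface. The circular interface, the architecture, and the exact feature are stand-ins assumed for illustration, not the paper's construction.

```python
import torch
import torch.nn as nn

def phi(xy, r0=0.5):
    """Level set of an assumed circular interface: phi = 0 on the circle |x| = r0."""
    return xy.norm(dim=-1, keepdim=True) - r0

class CuspFeatureNet(nn.Module):
    """MLP taking (x, y, |phi(x, y)|) as input; |phi| is continuous but kinked on the
    interface, so the smooth network can represent a cusp in the solution there."""
    def __init__(self, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, xy):
        return self.net(torch.cat([xy, phi(xy).abs()], dim=-1))

u = CuspFeatureNet()
x = (torch.rand(128, 2) * 2 - 1).requires_grad_(True)  # collocation points in [-1, 1]^2
u_vals = u(x)  # these values would enter a standard mesh-free PINN residual loss
```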
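For the adaptive self-supervision entry, the sketch below shows one simple form a residual-proportional resampling step could take: draw a dense candidate pool, then keep points with probability proportional to the current PDE residual, so high-error regions receive more collocation points. The stand-in residual function and all sizes are assumptions; the paper's actual scheme may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

def pde_residual(model, pts):
    """Stand-in for |PDE residual| of the current PINN at points `pts`;
    a real implementation would evaluate the trained model and its derivatives."""
    return np.abs(np.sin(3.0 * pts[:, 0]) * np.cos(3.0 * pts[:, 1]))

def adaptive_collocation(model, n_new=256, n_pool=10_000):
    """One adaptive step: sample a candidate pool in the domain and keep points
    with probability proportional to the residual magnitude."""
    pool = rng.uniform(-1.0, 1.0, size=(n_pool, 2))
    weights = pde_residual(model, pool)
    probs = weights / weights.sum()
    idx = rng.choice(n_pool, size=n_new, replace=False, p=probs)
    return pool[idx]

new_points = adaptive_collocation(model=None)  # appended to the collocation set each round
```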
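For the Neural Control Variates entry, the estimator underneath is the classical control-variate identity: if g approximates the integrand f and the integral of g is known in closed form, then that closed-form value plus the sample mean of f - g estimates the integral of f without bias, with variance governed by how closely g tracks f. In the sketch a fixed polynomial stands in for the learned network; the integrand, coefficients, and domain are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Integrand to estimate over [0, 1]."""
    return np.exp(-3.0 * x) * np.sin(8.0 * x)

a, b = 0.9, -1.5  # stand-ins for coefficients a trained network would provide

def g(x):
    """Control variate: cheap approximation of f with a known integral."""
    return a * x + b * x**2

G = a / 2.0 + b / 3.0  # closed-form integral of g over [0, 1]

x = rng.uniform(size=100_000)
plain_mc = f(x).mean()               # plain Monte Carlo estimate of the integral
with_cv = G + (f(x) - g(x)).mean()   # same expectation; variance shrinks when g tracks f
```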