The covariance matrix of Green's functions and its application to
machine learning
- URL: http://arxiv.org/abs/2004.06481v1
- Date: Tue, 14 Apr 2020 13:26:01 GMT
- Title: The covariance matrix of Green's functions and its application to
machine learning
- Authors: Tomoko Nagai
- Abstract summary: We first survey the Green's function for the Dirichlet boundary value problem of a second-order linear ordinary differential equation.
We consider a covariance matrix composed of the normalized Green's function, which is regarded as a probability density function.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a regression algorithm based on Green's function theory is
proposed and implemented. We first survey the Green's function for the Dirichlet
boundary value problem of a second-order linear ordinary differential equation,
which is a reproducing kernel of a suitable Hilbert space. We next consider a
covariance matrix composed of the normalized Green's function, which is
regarded as a probability density function. Adopting a Bayesian approach, the
covariance matrix yields a predictive distribution with predictive mean
$\mu$ and confidence interval $[\mu - 2s, \mu + 2s]$, where $s$ denotes the
standard deviation.
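The paper does not ship code, but the pipeline it describes is close to standard Gaussian-process regression with the Green's function playing the role of the covariance kernel. A minimal sketch, assuming the simplest case $-u'' = f$ on $[0,1]$ with $u(0)=u(1)=0$, whose Green's function $G(x,y)=\min(x,y)(1-\max(x,y))$ is the reproducing kernel of the Sobolev space $H^1_0(0,1)$; the data, noise level, and normalization are illustrative, not taken from the paper:

```python
import numpy as np

def greens_kernel(x, y):
    """Green's function of -u'' = f on [0, 1] with u(0) = u(1) = 0:
    G(x, y) = min(x, y) * (1 - max(x, y)), the reproducing kernel of H^1_0."""
    xg, yg = np.meshgrid(x, y, indexing="ij")
    return np.where(xg <= yg, xg * (1 - yg), yg * (1 - xg))

rng = np.random.default_rng(0)
x_train = rng.uniform(0.05, 0.95, size=20)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(20)

# Predictive distribution of a zero-mean GP with covariance G.
noise = 0.1 ** 2
K = greens_kernel(x_train, x_train) + noise * np.eye(x_train.size)
x_test = np.linspace(0.0, 1.0, 200)
K_star = greens_kernel(x_test, x_train)

mu = K_star @ np.linalg.solve(K, y_train)        # predictive mean
cov = greens_kernel(x_test, x_test) - K_star @ np.linalg.solve(K, K_star.T)
s = np.sqrt(np.clip(np.diag(cov), 0.0, None))    # predictive std. dev.
band = (mu - 2 * s, mu + 2 * s)                  # the [mu-2s, mu+2s] interval
```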
Related papers
- Generalizing Stochastic Smoothing for Differentiation and Gradient Estimation [59.86921150579892]
We deal with the problem of gradient estimation for differentiable relaxations of algorithms, operators, simulators, and other non-differentiable functions.
We develop variance reduction strategies for differentiable sorting and ranking, differentiable shortest-paths on graphs, differentiable rendering for pose estimation, as well as differentiable cryo-ET simulations.
arXiv Detail & Related papers (2024-10-10T17:10:00Z)
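As background for the entry above (a sketch of the generic technique, not code from the paper): stochastic smoothing replaces a non-differentiable $f$ by $f_\sigma(x) = \mathbb{E}_{\epsilon \sim \mathcal{N}(0,I)}[f(x + \sigma\epsilon)]$, whose gradient admits the score-function estimator $\nabla f_\sigma(x) = \mathbb{E}[f(x + \sigma\epsilon)\,\epsilon]/\sigma$; antithetic sampling is one of the simpler variance-reduction strategies. The hard-threshold objective is purely illustrative:

```python
import numpy as np

def smoothed_grad(f, x, sigma=0.1, n_samples=256, rng=None):
    """Score-function estimator of grad E[f(x + sigma * eps)],
    using antithetic pairs (eps, -eps) to reduce variance."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((n_samples // 2, x.size))
    f_plus = np.array([f(x + sigma * e) for e in eps])
    f_minus = np.array([f(x - sigma * e) for e in eps])
    # Per pair: [f(x + s*e) - f(x - s*e)] * e / (2*s); the mean terms cancel.
    return ((f_plus - f_minus)[:, None] * eps).mean(axis=0) / (2 * sigma)

f = lambda x: float(np.sum(x > 0.5))   # non-differentiable toy objective
g = smoothed_grad(f, np.zeros(3), rng=np.random.default_rng(1))
```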
- Theoretical Guarantees for Variational Inference with Fixed-Variance Mixture of Gaussians [27.20127082606962]
Variational inference (VI) is a popular approach in Bayesian inference.
This work aims to contribute to the theoretical study of VI in the non-Gaussian case.
arXiv Detail & Related papers (2024-06-06T12:38:59Z)
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Learning Domain-Independent Green's Function For Elliptic Partial Differential Equations [0.0]
The Green's function characterizes a partial differential equation (PDE) and represents its solution over the entire domain as an integral (see the formula after this entry).
We propose a novel boundary integral network to learn the domain-independent Green's function, referred to as BIN-G.
We demonstrate that our numerical scheme enables fast training and accurate evaluation of the Green's function for PDEs with variable coefficients.
arXiv Detail & Related papers (2024-01-30T17:00:22Z)
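For context on the entry above (standard theory rather than anything specific to BIN-G): for a linear elliptic operator $\mathcal{L}$ on a domain $\Omega$ with homogeneous Dirichlet data, the Green's function turns the PDE into an explicit integral representation,

$$\mathcal{L}u = f \ \text{in } \Omega, \quad u = 0 \ \text{on } \partial\Omega \quad\Longrightarrow\quad u(x) = \int_{\Omega} G(x, y)\, f(y)\, dy,$$

which is why a learned $G$ yields solutions for new sources $f$ by quadrature alone.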
- Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization [94.19177623349947]
Nonsmooth nonconvex optimization problems emerge in machine learning and business decision making.
Two core challenges impede the development of efficient methods with a finite-time convergence guarantee.
Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results.
arXiv Detail & Related papers (2022-09-12T06:53:24Z)
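Again as background rather than the paper's exact algorithm: gradient-free methods of the GFM/SGFM type are built on a two-point random estimate of the gradient of a smoothed surrogate $f_\delta(x) = \mathbb{E}_u[f(x + \delta u)]$, with $u$ uniform on the unit sphere. A minimal sketch; the objective, step size, and smoothing radius are all illustrative:

```python
import numpy as np

def gfm_step(f, x, delta=1e-2, lr=1e-2, rng=None):
    """One gradient-free step: two-point estimate of grad f_delta(x),
    where f_delta(x) = E[f(x + delta * u)], u uniform on the unit sphere."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)
    g = x.size * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
    return x - lr * g

f = lambda x: np.abs(x[0]) + np.minimum(x[1] ** 2, np.abs(x[1]))  # nonsmooth, nonconvex
x, rng = np.array([1.0, -1.5]), np.random.default_rng(0)
for _ in range(500):
    x = gfm_step(f, x, rng=rng)
```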
- BI-GreenNet: Learning Green's functions by boundary integral network [14.008606361378149]
Green's function plays a significant role in both theoretical analysis and numerical computing of partial differential equations.
We develop a new method for computing Green's function with high accuracy.
arXiv Detail & Related papers (2022-04-28T01:42:35Z)
- Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process (MRP).
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z)
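The entry above states the guarantees but not the mechanics; a common instance of kernel-based temporal difference learning represents the value function as a kernel expansion $V(s) = \sum_i \alpha_i k(s_i, s)$ and applies TD(0) updates in function space. A minimal sketch of generic kernel TD(0), not necessarily the estimator analyzed in the paper; the Gaussian kernel, toy reward process, and step size are assumptions:

```python
import numpy as np

kernel = lambda s, t, bw=0.5: np.exp(-((s - t) ** 2) / (2 * bw ** 2))
centers, alphas = [], []   # V(s) = sum_i alphas[i] * kernel(centers[i], s)
V = lambda s: sum(a * kernel(c, s) for c, a in zip(centers, alphas))

gamma, lr, s = 0.9, 0.1, 0.0
rng = np.random.default_rng(0)
for _ in range(500):
    s_next = float(np.clip(s + rng.normal(0.1, 0.2), 0.0, 1.0))  # toy MRP
    reward = float(s_next > 0.9)
    delta = reward + gamma * V(s_next) - V(s)   # TD error
    centers.append(s)                           # functional TD(0) update:
    alphas.append(lr * delta)                   # V <- V + lr * delta * k(s, .)
    s = s_next
```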
- Statistics of Green's functions on a disordered Cayley tree and the validity of forward scattering approximation [0.0]
The accuracy of the forward scattering approximation for two-point Green's functions of the Anderson localization model on the Cayley tree is studied.
A relationship between the moments of the Green's function and the largest eigenvalue of the linearized transfer-matrix equation is proved.
A new large-disorder approximation for this eigenvalue is derived and its accuracy is established.
arXiv Detail & Related papers (2021-08-23T18:00:02Z)
- Unbiased Estimation Equation under $f$-Separable Bregman Distortion Measures [0.3553493344868413]
We discuss unbiased estimation equations for a class of objective functions constructed from a monotonically increasing function $f$ and a Bregman divergence.
The choice of the function $f$ gives desirable properties such as robustness against outliers.
In this study, we clarify the combination of Bregman divergence, statistical model, and function $f$ in which the bias correction term vanishes.
arXiv Detail & Related papers (2020-10-23T10:33:55Z)
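For reference on the entry above: the Bregman divergence generated by a differentiable, strictly convex $\phi$ is

$$B_\phi(x, y) = \phi(x) - \phi(y) - \langle \nabla\phi(y),\, x - y \rangle,$$

with $\phi(x) = \|x\|^2/2$ recovering the squared Euclidean distance; the paper studies objectives that compose such divergences with a monotonically increasing function $f$, whose choice controls properties such as robustness against outliers.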
- Uncertainty Inspired RGB-D Saliency Detection [70.50583438784571]
We propose the first framework to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Inspired by the saliency data labeling process, we propose a generative architecture to achieve probabilistic RGB-D saliency detection.
Results on six challenging RGB-D benchmark datasets show our approach's superior performance in learning the distribution of saliency maps.
arXiv Detail & Related papers (2020-09-07T13:01:45Z)
- A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent [85.77233010209368]
This article characterizes the exact asymptotics of random Fourier feature (RFF) regression in the realistic setting where the number of data samples $n$, their dimension $p$, and the feature dimension $N$ are all large and comparable.
This analysis also provides accurate estimates of training and test regression errors for large $n,p,N$.
arXiv Detail & Related papers (2020-06-09T02:05:40Z)
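As background for the entry above: random Fourier features approximate a shift-invariant kernel by $k(x, x') \approx z(x)^\top z(x')$ with $z(x) = \sqrt{2/N}\,\cos(Wx + b)$, reducing kernel ridge regression to linear ridge regression in $N$ random features, with $n$, $p$, $N$ as in the entry. A minimal sketch for the Gaussian kernel; the sizes, data, and regularization are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, N = 500, 10, 200                 # samples, input dim, random features

X = rng.standard_normal((n, p))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

# RFF map for the Gaussian kernel exp(-||x - x'||^2 / 2):
# W_ij ~ N(0, 1), b_j ~ Unif[0, 2*pi], z(x) = sqrt(2/N) * cos(W^T x + b).
W = rng.standard_normal((p, N))
b = rng.uniform(0.0, 2 * np.pi, N)
Z = np.sqrt(2.0 / N) * np.cos(X @ W + b)

# Ridge regression in feature space: beta = (Z^T Z + lam * I)^{-1} Z^T y.
lam = 1e-2
beta = np.linalg.solve(Z.T @ Z + lam * np.eye(N), Z.T @ y)
train_mse = np.mean((Z @ beta - y) ** 2)
```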