An Introduction to Kernel and Operator Learning Methods for
Homogenization by Self-consistent Clustering Analysis
- URL: http://arxiv.org/abs/2212.00802v1
- Date: Thu, 1 Dec 2022 02:36:16 GMT
- Title: An Introduction to Kernel and Operator Learning Methods for
Homogenization by Self-consistent Clustering Analysis
- Authors: Owen Huang, Sourav Saha, Jiachen Guo, Wing Kam Liu
- Abstract summary: The article presents a thorough analysis of the mathematical underpinnings of the operator learning paradigm.
The proposed kernel operator learning method uses graph kernel networks to construct a mechanistic reduced-order method for multiscale homogenization.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in operator learning theory have improved our knowledge about
learning maps between infinite dimensional spaces. However, for large-scale
engineering problems such as concurrent multiscale simulation for mechanical
properties, the training cost for the current operator learning methods is very
high. The article presents a thorough analysis of the mathematical
underpinnings of the operator learning paradigm and proposes a kernel learning
method that maps between function spaces. We first provide a survey of modern
kernel and operator learning theory, as well as discuss recent results and open
problems. From there, the article presents an algorithm for analytically
approximating piecewise constant functions on R for operator learning. This
suggests that neural operators can feasibly succeed
on clustered functions. Finally, a k-means clustered domain on the basis of a
mechanistic response is considered and the Lippmann-Schwinger equation for
micro-mechanical homogenization is solved. The article briefly discusses the
mathematics of previous kernel learning methods and some preliminary results
with those methods. The proposed kernel operator learning method uses graph
kernel networks to construct a mechanistic reduced-order method for multiscale
homogenization.
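To make the clustered reduced-order idea concrete, the following is a minimal sketch of an SCA-style offline stage, assuming a precomputed per-voxel mechanistic response: voxels with similar response are grouped by k-means, and any field is then approximated as piecewise constant over the clusters. The synthetic data, array shapes, and helper names below are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of an SCA-style offline clustering stage: group voxels by
# k-means on a precomputed mechanistic response, then represent fields as
# piecewise constant over the clusters. Data and shapes are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical precomputed response: one feature vector per voxel, e.g. a
# flattened strain-concentration tensor from a few elastic "training" loads.
n_voxels, n_features = 32 ** 3, 6
response = rng.standard_normal((n_voxels, n_features))

# Offline stage: cluster voxels with similar mechanical response.
n_clusters = 16
labels = KMeans(n_clusters=n_clusters, n_init=10,
                random_state=0).fit_predict(response)

def cluster_average(field_per_voxel: np.ndarray) -> np.ndarray:
    """Average a per-voxel field over each cluster (one row per cluster)."""
    return np.array([field_per_voxel[labels == k].mean(axis=0)
                     for k in range(n_clusters)])

# The reduced-order unknowns are piecewise constant over clusters: a per-voxel
# strain field collapses to one value per cluster and component.
strain = rng.standard_normal((n_voxels, 6))
reduced_strain = cluster_average(strain)
print(reduced_strain.shape)  # (16, 6)
```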
Related papers
- Operator Learning Using Random Features: A Tool for Scientific Computing [3.745868534225104]
Supervised operator learning centers on the use of training data to estimate maps between infinite-dimensional spaces.
This paper introduces the function-valued random features method.
It leads to a supervised operator learning architecture that is practical for nonlinear problems.
arXiv Detail & Related papers (2024-08-12T23:10:39Z)
- Kernel Neural Operators (KNOs) for Scalable, Memory-efficient, Geometrically-flexible Operator Learning [15.050519590538634]
The Kernel Neural Operator (KNO) is a novel operator learning technique.
It uses deep kernel-based integral operators in conjunction with quadrature for function-space approximation of operators; a minimal sketch of such a quadrature-discretized kernel integral operator appears after this list.
KNOs represent a new paradigm of low-memory, geometrically-flexible, deep operator learning.
arXiv Detail & Related papers (2024-06-30T19:28:12Z)
- Operator Learning: Algorithms and Analysis [8.305111048568737]
Operator learning refers to the application of ideas from machine learning to approximate operators mapping between Banach spaces of functions.
This review focuses on neural operators, built on the success of deep neural networks in the approximation of functions defined on finite dimensional Euclidean spaces.
arXiv Detail & Related papers (2024-02-24T04:40:27Z)
- On a class of geodesically convex optimization problems solved via Euclidean MM methods [50.428784381385164]
We show how geodesically convex problems that can be written as a difference of Euclidean convex functions arise in different types of problems in statistics and machine learning.
Ultimately, we hope this work helps broaden the view of what such methods can solve.
arXiv Detail & Related papers (2022-06-22T23:57:40Z)
- NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]
Traditional nonparametric solutions based on the Nyström formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by using a new series of objective functions that generalize to both supervised and unsupervised learning problems.
arXiv Detail & Related papers (2022-04-30T05:31:07Z)
- Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
arXiv Detail & Related papers (2021-08-19T03:56:49Z)
- Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z)
- Kernel Methods and their derivatives: Concept and perspectives for the Earth system sciences [8.226445359788402]
We show that the functions learned by various kernel methods can be interpreted intuitively despite their complexity.
Specifically, we show that derivatives of these functions have a simple mathematical formulation, are easy to compute, and can be applied to many different problems.
arXiv Detail & Related papers (2020-07-29T09:36:42Z)
- Matérn Gaussian processes on Riemannian manifolds [81.15349473870816]
We show how to generalize the widely-used Matérn class of Gaussian processes to Riemannian manifolds.
We also extend the generalization from the Matérn class to the widely-used squared exponential process.
arXiv Detail & Related papers (2020-06-17T21:05:42Z)
- Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z)
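As referenced in the Kernel Neural Operator entry above, here is a minimal numpy sketch of the quadrature-discretized kernel integral operator that graph kernel networks and KNOs build on: (K u)(x_i) ≈ Σ_j w_j κ(x_i, x_j) u(x_j). The Gaussian kernel, the uniform quadrature weights, and all names here are illustrative assumptions, not the specific parameterization of any of the papers listed.

```python
# Minimal sketch of a quadrature-discretized kernel integral operator:
#   (K u)(x_i) ≈ sum_j w_j * kappa(x_i, x_j) * u(x_j)
# Kernel choice, weights, and names are illustrative assumptions.
import numpy as np

def kernel_integral_operator(x, u, lengthscale=0.1):
    """Apply one quadrature-discretized kernel integral layer.

    x : (n, d) sample locations in the domain
    u : (n, c) input function values at those locations
    returns : (n, c) output function values
    """
    n = x.shape[0]
    w = np.full(n, 1.0 / n)  # uniform quadrature weights over the samples
    sq_dists = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    kappa = np.exp(-sq_dists / (2.0 * lengthscale ** 2))  # Gaussian kernel
    return (kappa * w[None, :]) @ u  # weighted kernel sum at each x_i

# Usage: evaluate the operator on a 1-D grid for a toy input function.
x = np.linspace(0.0, 1.0, 64)[:, None]
u = np.sin(2 * np.pi * x)
v = kernel_integral_operator(x, u)
print(v.shape)  # (64, 1)
```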