Learning Green's Function Efficiently Using Low-Rank Approximations
- URL: http://arxiv.org/abs/2308.00350v1
- Date: Tue, 1 Aug 2023 07:43:46 GMT
- Title: Learning Green's Function Efficiently Using Low-Rank Approximations
- Authors: Kishan Wimalawarne, Taiji Suzuki, Sophie Langer
- Abstract summary: A practical limitation of using deep learning for the Green's function is the need for repeated, computationally expensive Monte-Carlo integral approximations.
We propose to learn the Green's function by low-rank decomposition, which results in a novel architecture to remove redundant computations.
- Score: 44.46178415547532
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning the Green's function using deep learning models makes it
possible to solve different classes of partial differential equations. A
practical limitation of using deep learning for the Green's function is the
need for repeated, computationally expensive Monte-Carlo integral
approximations. We propose to learn the Green's
function by low-rank decomposition, which yields a novel architecture that
removes redundant computations by learning separately from domain data for
evaluation and from Monte-Carlo samples for integral approximation. Using
experiments we show that the proposed method reduces computation time
compared to MOD-Net while achieving accuracy comparable to both PINNs
and MOD-Net.
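The computational saving described in the abstract can be illustrated with a minimal numpy sketch (this is an illustration of the general low-rank idea, not the paper's exact architecture; the factor maps `phi` and `psi` stand in for learned networks). If the Green's function is represented as G(x, y) ≈ Σ_k φ_k(x) ψ_k(y), the solution u(x) = ∫ G(x, y) f(y) dy factorizes, so the Monte-Carlo sum over samples y_i is computed once per basis function, independent of the evaluation points x:

```python
import numpy as np

rng = np.random.default_rng(0)
rank = 4

# Stand-in "learned" factor networks: simple random-feature maps.
W_phi = rng.normal(size=(rank, 1))
W_psi = rng.normal(size=(rank, 1))

def phi(x):   # (n, 1) -> (n, rank), evaluation-side factor
    return np.tanh(x @ W_phi.T)

def psi(y):   # (m, 1) -> (m, rank), integration-side factor
    return np.tanh(y @ W_psi.T)

def f(y):     # source term of the PDE
    return np.sin(np.pi * y)

# Monte-Carlo samples on the domain Omega = [0, 1]
m = 10_000
y = rng.uniform(0.0, 1.0, size=(m, 1))
volume = 1.0

# Integration side: one (rank, 1) vector, shared by every evaluation point x.
c = volume * psi(y).T @ f(y) / m

# Evaluation side: u(x) ~= phi(x) @ c, no per-x Monte-Carlo loop needed.
x = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
u = phi(x) @ c        # shape (5, 1)
```

By associativity, `u` is identical to first forming the full kernel matrix `phi(x) @ psi(y).T` and then integrating, but the factored order avoids recomputing the Monte-Carlo sum for each new evaluation point.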
Related papers
- Green Multigrid Network [6.397295511397678]
GreenLearning networks (GL) learn Green's functions in physical space, making them an interpretable model for capturing unknown solution operators of partial differential equations (PDEs).
We propose a framework named Green Multigrid networks (GreenMGNet), an operator learning algorithm designed for a class of asymptotically smooth Green's functions.
Compared with the pioneering GL, the new framework achieves significantly better accuracy and efficiency.
arXiv Detail & Related papers (2024-07-04T03:02:10Z) - Symbolic Metamodels for Interpreting Black-boxes Using Primitive
Functions [15.727276506140878]
One approach for interpreting black-box machine learning models is to find a global approximation of the model using simple interpretable functions.
In this work, we propose a new method for finding interpretable metamodels.
arXiv Detail & Related papers (2023-02-09T17:30:43Z) - Data-driven discovery of Green's functions [0.0]
This thesis introduces theoretical results and deep learning algorithms to learn Green's functions associated with linear partial differential equations.
The construction connects the fields of PDE learning and numerical linear algebra.
Rational neural networks (NNs) are introduced, consisting of networks with trainable rational activation functions.
arXiv Detail & Related papers (2022-10-28T09:41:50Z) - Shapley-NAS: Discovering Operation Contribution for Neural Architecture
Search [96.20505710087392]
We propose a Shapley value based method to evaluate operation contribution (Shapley-NAS) for neural architecture search.
We show that our method outperforms the state-of-the-art methods by a considerable margin with light search cost.
arXiv Detail & Related papers (2022-06-20T14:41:49Z) - BCFNet: A Balanced Collaborative Filtering Network with Attention
Mechanism [106.43103176833371]
Collaborative Filtering (CF) based recommendation methods have been widely studied.
We propose a novel recommendation model named Balanced Collaborative Filtering Network (BCFNet).
In addition, an attention mechanism is designed to better capture the hidden information within implicit feedback and strengthen the learning ability of the neural network.
arXiv Detail & Related papers (2021-03-10T14:59:23Z) - Deep Reinforcement Learning of Graph Matching [63.469961545293756]
Graph matching (GM) under node and pairwise constraints has been a building block in areas from optimization to computer vision.
We present a reinforcement learning solver for GM, named RGM, that seeks the node correspondence between a pair of graphs.
Our method differs from previous deep graph matching models, which focus on front-end feature extraction and affinity function learning.
arXiv Detail & Related papers (2020-12-16T13:48:48Z) - Efficient semidefinite-programming-based inference for binary and
multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently.
arXiv Detail & Related papers (2020-12-04T15:36:29Z) - Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z) - Fast Reinforcement Learning with Incremental Gaussian Mixture Models [0.0]
An online and incremental algorithm capable of learning from a single pass through data, called Incremental Gaussian Mixture Network (IGMN), was employed as a sample-efficient function approximator for the joint state and Q-values space.
Results are analyzed to explain the properties of the obtained algorithm, and it is observed that the use of the IGMN function approximator brings some important advantages to reinforcement learning in relation to conventional neural networks trained by gradient descent methods.
arXiv Detail & Related papers (2020-11-02T03:18:15Z) - Deep Learning with Functional Inputs [0.0]
We present a methodology for integrating functional data into feed-forward neural networks.
A by-product of the method is a set of dynamic functional weights that can be visualized during the optimization process.
The model is shown to perform well in a number of contexts including prediction of new data and recovery of the true underlying functional weights.
arXiv Detail & Related papers (2020-06-17T01:23:00Z)
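The functional-input idea above can be sketched in a few lines (names and grid are illustrative, not from the paper): a scalar feature is produced from a functional covariate x(t) by the integral ∫ x(t) β(t) dt, approximated with a quadrature rule over an observation grid, where β is a trainable functional weight.

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.linspace(0.0, 1.0, 101)                    # observation grid for x(t)
dt = t[1] - t[0]
w = np.full_like(t, dt)                           # trapezoidal quadrature
w[0] = w[-1] = dt / 2                             # weights on the grid

beta = rng.normal(size=t.shape)                   # trainable functional weight

def functional_layer(x_vals):
    # x_vals: (batch, len(t)) samples of x(t) on the grid.
    # Returns (batch,) approximations of \int x(t) beta(t) dt.
    return x_vals @ (w * beta)

# Three identical sine curves as a toy batch of functional inputs.
x_batch = np.sin(np.pi * np.outer(np.ones(3), t))
z = functional_layer(x_batch)                     # shape (3,)
```

In a full model, `z` would feed the first hidden layer and `beta` would be updated by gradient descent alongside the ordinary network weights; visualizing `beta` over `t` during training recovers the "dynamic functional weights" mentioned in the abstract.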
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.