HyperDeepONet: learning operator with complex target function space
using the limited resources via hypernetwork
- URL: http://arxiv.org/abs/2312.15949v1
- Date: Tue, 26 Dec 2023 08:28:46 GMT
- Title: HyperDeepONet: learning operator with complex target function space
using the limited resources via hypernetwork
- Authors: Jae Yong Lee, Sung Woong Cho, Hyung Ju Hwang
- Abstract summary: This study proposes HyperDeepONet, which uses the expressive power of the hypernetwork to enable the learning of a complex operator with a smaller set of parameters.
We analyze the complexity of DeepONet and conclude that HyperDeepONet needs relatively lower complexity to obtain the desired accuracy for operator learning.
- Score: 14.93012615797081
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fast and accurate predictions for complex physical dynamics are a significant
challenge across various applications. Real-time prediction on
resource-constrained hardware is even more crucial in real-world problems. The
deep operator network (DeepONet) has recently been proposed as a framework for
learning nonlinear mappings between function spaces. However, the DeepONet
requires many parameters and has a high computational cost when learning
operators, particularly those with complex (discontinuous or non-smooth) target
functions. This study proposes HyperDeepONet, which uses the expressive power
of the hypernetwork to enable the learning of a complex operator with a smaller
set of parameters. The DeepONet and its variant models can be thought of as a
method of injecting the input function information into the target function.
From this perspective, these models can be viewed as a particular case of
HyperDeepONet. We analyze the complexity of DeepONet and conclude that
HyperDeepONet needs relatively lower complexity to obtain the desired accuracy
for operator learning. HyperDeepONet successfully learned various operators
with fewer computational resources compared to other benchmarks.
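The core mechanism can be illustrated with a minimal, untrained sketch: a hypernetwork maps the sensor values of the input function to *all* the weights of a small target network, which is then evaluated at arbitrary query points. Everything below (the sizes, the single linear layer used as the hypernetwork, the one-hidden-layer target network) is a hypothetical illustration of the idea, not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (hypothetical, not from the paper)
m = 50        # number of sensor points where the input function u is sampled
hidden = 16   # hidden width of the small target network
# Target network: s(y) = w2 . tanh(W1 * y + b1) + b2, with y a scalar query point
n_target_params = hidden * 1 + hidden + hidden + 1  # W1, b1, w2, b2

# Hypernetwork: here just a single linear map from u's sensor values to
# ALL parameters of the target network (a real model would be a deep MLP).
H = rng.normal(0, 0.1, size=(n_target_params, m))

def target_forward(theta, y):
    """Evaluate the generated target network at the query points y."""
    W1 = theta[:hidden].reshape(hidden, 1)
    b1 = theta[hidden:2 * hidden]
    w2 = theta[2 * hidden:3 * hidden]
    b2 = theta[3 * hidden]
    h = np.tanh(W1 @ y[None, :] + b1[:, None])   # (hidden, len(y))
    return w2 @ h + b2                            # (len(y),)

# One input function u, sampled at m sensors, mapped to a full weight vector
u = np.sin(np.linspace(0, np.pi, m))
theta = H @ u                  # hypernetwork output = target-network parameters
y = np.linspace(0, 1, 101)
s = target_forward(theta, y)   # predicted output-function values at y
print(s.shape)
```

Because the hypernetwork generates every weight of the target network, the target network itself can stay very small, which is the source of the parameter savings the abstract describes.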
Related papers
- Separable Operator Networks [4.688862638563124]
Physics-informed DeepONets (PI-DeepONet) mitigate data scarcity but suffer from inefficient training processes.
We introduce Separable Operator Networks (SepONet), a novel framework that significantly enhances the efficiency of physics-informed operator learning.
arXiv Detail & Related papers (2024-07-15T21:43:41Z)
- Learning in latent spaces improves the predictive accuracy of deep neural operators [0.0]
L-DeepONet is an extension of standard DeepONet, which leverages latent representations of high-dimensional PDE input and output functions identified with suitable autoencoders.
We show that L-DeepONet outperforms the standard approach in terms of both accuracy and computational efficiency across diverse time-dependent PDEs.
arXiv Detail & Related papers (2023-04-15T17:13:09Z)
- Multifidelity deep neural operators for efficient learning of partial differential equations with application to fast inverse design of nanoscale heat transport [2.512625172084287]
We develop a multifidelity neural operator based on a deep operator network (DeepONet).
A multifidelity DeepONet significantly reduces the required amount of high-fidelity data and achieves one order of magnitude smaller error when using the same amount of high-fidelity data.
We apply a multifidelity DeepONet to learn the phonon Boltzmann transport equation (BTE), a framework to compute nanoscale heat transport.
arXiv Detail & Related papers (2022-04-14T01:01:24Z)
- MultiAuto-DeepONet: A Multi-resolution Autoencoder DeepONet for Nonlinear Dimension Reduction, Uncertainty Quantification and Operator Learning of Forward and Inverse Stochastic Problems [12.826754199680474]
A new data-driven method for operator learning of stochastic differential equations (SDEs) is proposed in this paper.
The central goal is to solve forward and inverse problems more effectively using limited data.
arXiv Detail & Related papers (2022-04-07T03:53:49Z)
- Enhanced DeepONet for Modeling Partial Differential Operators Considering Multiple Input Functions [5.819397109258169]
A deep operator network (DeepONet) was proposed to model general nonlinear continuous operators for partial differential equations (PDEs).
The existing DeepONet can accept only one input function, which limits its applications.
We propose a new Enhanced DeepONet (EDeepONet) high-level neural network structure, in which two input functions are represented by two branch sub-networks.
Our numerical results on two partial differential equation examples show that the proposed EDeepONet is about 7X-17X, or about one order of magnitude, more accurate than the fully connected neural network.
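The two-branch idea can be sketched as follows: one branch network per input function, a trunk network for the query coordinate, with branch outputs merged by an elementwise product before an inner product with the trunk features. All sizes, the random (untrained) weights, and the product merging rule are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: two input functions sampled at m sensors each,
# shared latent dimension p for the branch and trunk outputs.
m, p = 40, 8

def mlp(sizes):
    """Random (untrained) MLP weights, for illustration only."""
    return [(rng.normal(0, 0.3, (o, i)), rng.normal(0, 0.1, o))
            for i, o in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for k, (W, b) in enumerate(params):
        x = W @ x + b
        if k < len(params) - 1:     # tanh on all but the last layer
            x = np.tanh(x)
    return x

branch1 = mlp([m, 32, p])   # encodes the first input function
branch2 = mlp([m, 32, p])   # encodes the second input function
trunk   = mlp([1, 32, p])   # encodes the query coordinate y

u1 = np.cos(np.linspace(0, 1, m))      # samples of input function 1
u2 = np.exp(-np.linspace(0, 1, m))     # samples of input function 2
b1 = forward(branch1, u1)
b2 = forward(branch2, u2)

def G(y):
    """EDeepONet-style output: elementwise product of the two branch
    features, then an inner product with the trunk features at y."""
    t = forward(trunk, np.array([y]))
    return float(np.dot(b1 * b2, t))

print(G(0.5))
```

Each extra input function would simply add another branch sub-network whose output joins the same elementwise merge.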
arXiv Detail & Related papers (2022-02-17T23:58:23Z)
- HyperNP: Interactive Visual Exploration of Multidimensional Projection Hyperparameters [61.354362652006834]
HyperNP is a scalable method that allows for real-time interactive exploration of projection methods by training neural network approximations.
We evaluate HyperNP across three datasets in terms of performance and speed.
arXiv Detail & Related papers (2021-06-25T17:28:14Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up a new possibility for robust and fast phase retrieval (PR).
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z)
- On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the function.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z)
- Accurate RGB-D Salient Object Detection via Collaborative Learning [101.82654054191443]
RGB-D saliency detection shows impressive ability in some challenging scenarios.
We propose a novel collaborative learning framework where edge, depth and saliency are leveraged in a more efficient way.
arXiv Detail & Related papers (2020-07-23T04:33:36Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.