L-HYDRA: Multi-Head Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2301.02152v1
- Date: Thu, 5 Jan 2023 16:54:01 GMT
- Title: L-HYDRA: Multi-Head Physics-Informed Neural Networks
- Authors: Zongren Zou and George Em Karniadakis
- Abstract summary: We construct multi-head physics-informed neural networks (MH-PINNs) as a potent tool for multi-task learning (MTL), generative modeling, and few-shot learning.
MH-PINNs connect multiple functions/tasks through a shared body, which serves as a set of basis functions, and through a shared distribution over the heads.
We demonstrate the effectiveness of MH-PINNs on five benchmarks and also investigate the possibility of synergistic learning in regression analysis.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce multi-head neural networks (MH-NNs) to physics-informed
machine learning. An MH-NN is a neural network (NN) in which all nonlinear hidden
layers form a shared body and multiple linear output layers form the heads. On this
basis, we construct multi-head physics-informed neural networks (MH-PINNs) as a
potent tool for multi-task learning (MTL), generative modeling, and few-shot
learning for diverse problems in scientific machine learning (SciML). MH-PINNs
connect multiple functions/tasks through a shared body, which serves as a set of
basis functions, and through a shared distribution over the heads. The former is
accomplished by solving multiple tasks with a single MH-PINN, with each head
independently assigned to one task, while the latter is accomplished by employing
normalizing flows (NFs) for density estimation and generative modeling. Our method
is therefore a two-stage method, and both stages can be tackled with standard deep
learning tools for NNs, enabling easy implementation in practice. MH-PINNs can be
used for various purposes, such as approximating stochastic processes, solving
multiple tasks synergistically, providing informative prior knowledge for
downstream few-shot learning tasks such as meta-learning and transfer learning,
learning representative basis functions, and quantifying uncertainty. We
demonstrate the effectiveness of MH-PINNs on five benchmarks, and we also
investigate the possibility of synergistic learning in regression analysis. We name
the open-source code "Lernaean Hydra" (L-HYDRA), since this mythical creature
possessed many heads, just as the proposed method uses multiple heads to perform
multiple tasks.
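The two-stage construction described in the abstract maps naturally onto a short script. The sketch below is a minimal, hypothetical PyTorch illustration, not the authors' L-HYDRA code: a shared nonlinear body whose outputs act as learned basis functions, one linear head per task, and a physics-informed residual loss on an assumed toy ODE u'(x) = f_k(x); the second stage, fitting a normalizing flow to the trained head weights, is only indicated at the end. All sizes, task definitions, and names are illustrative assumptions.

```python
# Minimal MH-PINN sketch (assumed toy setting; not the authors' L-HYDRA implementation).
import torch
import torch.nn as nn

class MHPINN(nn.Module):
    def __init__(self, num_heads: int, hidden: int = 64, num_basis: int = 32):
        super().__init__()
        # Shared nonlinear body: its outputs act as learned basis functions.
        self.body = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, num_basis), nn.Tanh(),
        )
        # One linear output layer (head) per task.
        self.heads = nn.ModuleList(nn.Linear(num_basis, 1) for _ in range(num_heads))

    def forward(self, x: torch.Tensor, k: int) -> torch.Tensor:
        return self.heads[k](self.body(x))

def residual_loss(model: MHPINN, x: torch.Tensor, f: torch.Tensor, k: int) -> torch.Tensor:
    # Physics-informed residual for the toy ODE u'(x) = f_k(x); boundary terms omitted for brevity.
    x = x.detach().clone().requires_grad_(True)
    u = model(x, k)
    u_x = torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]
    return ((u_x - f) ** 2).mean()

# Stage 1: train the shared body and all heads jointly on multiple tasks.
torch.manual_seed(0)
num_tasks = 8
model = MHPINN(num_heads=num_tasks)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.linspace(0.0, 1.0, 100).unsqueeze(-1)
# Assumed task family: u'_k(x) = cos(a_k * x) with per-task frequencies a_k.
a = torch.linspace(1.0, 4.0, num_tasks)
targets = [torch.cos(a[k] * x) for k in range(num_tasks)]
for step in range(2000):
    opt.zero_grad()
    loss = sum(residual_loss(model, x, targets[k], k) for k in range(num_tasks))
    loss.backward()
    opt.step()

# Stage 2 (not shown): collect the trained head weights and fit a normalizing flow
# to their empirical distribution, which then serves as a generative model / prior
# for downstream few-shot tasks.
heads = torch.stack([torch.cat([h.weight.flatten(), h.bias]) for h in model.heads])
print(heads.shape)  # (num_tasks, num_basis + 1) samples for density estimation
```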
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on the task where the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of SNNs and convert it to represent the continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- Sparse Multitask Learning for Efficient Neural Representation of Motor Imagery and Execution [30.186917337606477]
We introduce a sparse multitask learning framework for motor imagery (MI) and motor execution (ME) tasks.
Given a dual-task CNN model for MI-ME classification, we apply a saliency-based sparsification approach to prune superfluous connections.
Our results indicate that this tailored sparsity can mitigate overfitting and improve test performance with a small amount of data.
arXiv Detail & Related papers (2023-12-10T09:06:16Z)
- Randomly Weighted Neuromodulation in Neural Networks Facilitates Learning of Manifolds Common Across Tasks [1.9580473532948401]
Geometric Sensitive Hashing functions are neural network models that learn class-specific manifold geometry in supervised learning.
We show that a randomly weighted neural network with a neuromodulation system can realize this function.
arXiv Detail & Related papers (2023-11-17T15:22:59Z)
- Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks.
arXiv Detail & Related papers (2023-07-13T16:39:08Z)
- Neural Routing in Meta Learning [9.070747377130472]
We aim to improve the model performance of the current meta learning algorithms by selectively using only parts of the model conditioned on the input tasks.
In this work, we describe an approach that investigates task-dependent dynamic neuron selection in deep convolutional neural networks (CNNs) by leveraging the scaling factor in the batch normalization layer.
We find that the proposed approach, neural routing in meta learning (NRML), outperforms one of the well-known existing meta learning baselines on few-shot classification tasks.
arXiv Detail & Related papers (2022-10-14T16:31:24Z)
- MT-SNN: Spiking Neural Network that Enables Single-Tasking of Multiple Tasks [0.0]
We implement a multi-task spiking neural network (MT-SNN) that can learn two or more classification tasks while performing one task at a time.
The network is implemented using Intel's Lava platform for the Loihi2 neuromorphic chip.
arXiv Detail & Related papers (2022-08-02T15:17:07Z)
- Multi-Task Neural Processes [105.22406384964144]
We develop multi-task neural processes, a new variant of neural processes for multi-task learning.
In particular, we propose to explore transferable knowledge from related tasks in the function space to provide inductive bias for improving each individual task.
Results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks for multi-task learning.
arXiv Detail & Related papers (2021-11-10T17:27:46Z)
- One Network Fits All? Modular versus Monolithic Task Formulations in Neural Networks [36.07011014271394]
We show that a single neural network is capable of simultaneously learning multiple tasks from a combined data set.
We study how the complexity of learning such combined tasks grows with the complexity of the task codes.
arXiv Detail & Related papers (2021-03-29T01:16:42Z)
- Deep Multimodal Neural Architecture Search [178.35131768344246]
We devise a generalized deep multimodal neural architecture search (MMnas) framework for various multimodal learning tasks.
Given multimodal input, we first define a set of primitive operations, and then construct a deep encoder-decoder based unified backbone.
On top of the unified backbone, we attach task-specific heads to tackle different multimodal learning tasks.
arXiv Detail & Related papers (2020-04-25T07:00:32Z)
- The Microsoft Toolkit of Multi-Task Deep Neural Networks for Natural Language Understanding [97.85957811603251]
We present MT-DNN, an open-source natural language understanding (NLU) toolkit that makes it easy for researchers and developers to train customized deep learning models.
Built upon PyTorch and Transformers, MT-DNN is designed to facilitate rapid customization for a broad spectrum of NLU tasks.
A unique feature of MT-DNN is its built-in support for robust and transferable learning using the adversarial multi-task learning paradigm.
arXiv Detail & Related papers (2020-02-19T03:05:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.