Physics-guided deep learning for data scarcity
- URL: http://arxiv.org/abs/2211.15664v1
- Date: Thu, 24 Nov 2022 01:03:21 GMT
- Title: Physics-guided deep learning for data scarcity
- Authors: Jinshuai Bai, Laith Alzubaidi, Qingxia Wang, Ellen Kuhl, Mohammed
Bennamoun, Yuantong Gu
- Abstract summary: Physics-guided deep learning (PGDL) is a novel type of DL that can integrate physics laws to train neural networks.
It can be used for any system that is controlled or governed by physics laws, as in mechanics, finance and medical applications.
In this review, the details of PGDL are elucidated, and a structured overview of PGDL with respect to data scarcity in various applications is presented.
- Score: 23.885971319823547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data are the core of deep learning (DL), and the quality of data
significantly affects the performance of DL models. However, high-quality,
well-annotated databases are hard or even impossible to acquire in many
applications, such as structural risk estimation and medical diagnosis, which
is a fundamental barrier to the real-world application of DL.
Physics-guided deep learning (PGDL) is a novel type of DL that can integrate
physics laws into the training of neural networks. It can be used for any
system that is controlled or governed by physics laws, as in mechanics,
finance and medical applications. It has been shown that, with the additional
information provided by physics laws, PGDL achieves high accuracy and
generalisation when facing data scarcity. In this review, the details of PGDL
are elucidated, and a structured overview of PGDL with respect to data
scarcity in various applications is presented, covering physics, engineering
and medical applications. Moreover, the limitations of and opportunities for
current PGDL in terms of data scarcity are identified, and the future outlook
for PGDL is discussed in depth.
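To make the core PGDL idea concrete, here is a minimal, dependency-free sketch (a hypothetical illustration, not the formulation of any paper listed here): fit a free-fall trajectory u(t) = w2·t² + w1·t + w0 to only two data points, with the physics law u''(t) = -g added as a penalty term in the loss.

```python
# Minimal physics-guided fit (hypothetical sketch, standard library only).
# Model: u(t) = w2*t^2 + w1*t + w0, fitted to just TWO data points.
# The physics law u''(t) = -G (free fall) enters as a penalty on the
# residual 2*w2 + G, supplying the information the scarce data lack.

G = 9.8  # gravitational acceleration, m/s^2

def u(params, t):
    w2, w1, w0 = params
    return w2 * t * t + w1 * t + w0

def loss(params, data):
    w2, _, _ = params
    physics = (2.0 * w2 + G) ** 2                        # residual of u'' + G = 0
    data_fit = sum((u(params, t) - y) ** 2 for t, y in data)
    return physics + data_fit

def grad(params, data, eps=1e-6):
    # central finite differences keep the sketch dependency-free
    g = []
    for i in range(len(params)):
        hi = list(params); hi[i] += eps
        lo = list(params); lo[i] -= eps
        g.append((loss(hi, data) - loss(lo, data)) / (2 * eps))
    return g

# Scarce data: u(0) = 0 and u(1) = 5.1 (consistent with v0 = 10 m/s)
data = [(0.0, 0.0), (1.0, 5.1)]
params = [0.0, 0.0, 0.0]
for _ in range(5000):                                    # plain gradient descent
    params = [p - 0.05 * gi for p, gi in zip(params, grad(params, data))]

w2, w1, w0 = params                                      # converges to about -4.9, 10.0, 0.0
```

Note the mechanism: two data points alone leave the three-parameter quadratic underdetermined, and the physics residual pins down the curvature. This is exactly the role physics laws play in PGDL when labelled data are scarce.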
Related papers
- Science-Informed Deep Learning (ScIDL) With Applications to Wireless Communications [11.472232944923558]
This article provides a tutorial on science-informed deep learning (ScIDL)
ScIDL aims to integrate existing scientific knowledge with DL techniques to develop more powerful algorithms.
We discuss both recent applications of ScIDL and potential future research directions in the field of wireless communications.
arXiv Detail & Related papers (2024-06-29T02:35:39Z)
- PTPI-DL-ROMs: pre-trained physics-informed deep learning-based reduced order models for nonlinear parametrized PDEs [0.6827423171182154]
In this paper, we consider a major extension of POD-DL-ROMs by making them physics-informed.
We first complement POD-DL-ROMs with a trunk net architecture, endowing them with the ability to compute the problem's solution at every point in the spatial domain.
In particular, we take advantage of the few available data to develop a low-cost pre-training procedure.
arXiv Detail & Related papers (2024-05-14T12:46:12Z)
- Serving Deep Learning Model in Relational Databases [70.53282490832189]
Serving deep learning (DL) models on relational data has become a critical requirement across diverse commercial and scientific domains.
We highlight three pivotal paradigms: the state-of-the-art DL-centric architecture offloads DL computations to dedicated DL frameworks.
The potential UDF-centric architecture encapsulates one or more tensor computations into User Defined Functions (UDFs) within the relational database management system (RDBMS).
arXiv Detail & Related papers (2023-10-07T06:01:35Z)
- HDDL 2.1: Towards Defining a Formalism and a Semantics for Temporal HTN Planning [64.07762708909846]
Real-world applications require modelling rich and diverse automated planning problems.
The hierarchical task network (HTN) formalism does not allow representing planning problems with numerical and temporal constraints.
We propose to fill the gap between HDDL and these operational needs by extending HDDL, taking inspiration from PDDL 2.1.
arXiv Detail & Related papers (2023-06-12T18:21:23Z)
- Using Gradient to Boost the Generalization Performance of Deep Learning Models for Fluid Dynamics [0.0]
We present a novel work to increase the generalization capabilities of Deep Learning.
Our strategy has shown good results towards a better generalization of DL networks.
arXiv Detail & Related papers (2022-10-09T10:20:09Z)
- PID-GAN: A GAN Framework based on a Physics-informed Discriminator for Uncertainty Quantification with Physics [2.4309139330334846]
In scientific applications, it is important to inform the learning of deep learning models with knowledge of physics to produce physically consistent and generalized solutions.
We propose a novel physics-informed GAN architecture, termed PID-GAN, where the knowledge of physics is used to inform the learning of both the generator and discriminator models.
We show that, compared to the state of the art, the proposed PID-GAN framework does not suffer from an imbalance of generator gradients across multiple loss terms.
arXiv Detail & Related papers (2021-06-06T00:12:57Z)
- Model-Constrained Deep Learning Approaches for Inverse Problems [0.0]
Deep Learning (DL) is purely data-driven and does not require knowledge of the underlying physics.
Consequently, DL methods in their original forms are not capable of respecting the underlying mathematical models.
We present and provide intuitions for our formulations for general nonlinear problems.
arXiv Detail & Related papers (2021-05-25T16:12:39Z)
- CogDL: A Comprehensive Library for Graph Deep Learning [55.694091294633054]
We present CogDL, a library for graph deep learning that allows researchers and practitioners to conduct experiments, compare methods, and build applications with ease and efficiency.
In CogDL, we propose a unified design for the training and evaluation of GNN models for various graph tasks, making it unique among existing graph learning libraries.
We develop efficient sparse operators for CogDL, enabling it to be among the most competitive graph libraries in terms of efficiency.
arXiv Detail & Related papers (2021-03-01T12:35:16Z)
- GaNDLF: A Generally Nuanced Deep Learning Framework for Scalable End-to-End Clinical Workflows in Medical Imaging [76.38169390121057]
We present the community-driven Generally Nuanced Deep Learning Framework (GaNDLF).
GaNDLF makes the mechanism of DL development, training, and inference more stable, reproducible, interpretable, and scalable.
We demonstrate the ability of GaNDLF to analyze both radiology and histology images, with built-in support for k-fold cross-validation.
arXiv Detail & Related papers (2021-02-26T02:24:52Z)
- A Survey of Deep Active Learning [54.376820959917005]
Active learning (AL) attempts to maximize a model's performance gain while labeling the fewest samples.
Deep learning (DL) is greedy for data and requires a large data supply to optimize its massive number of parameters.
Combining the two, deep active learning (DAL) has emerged.
arXiv Detail & Related papers (2020-08-30T04:28:31Z)
- ECG-DelNet: Delineation of Ambulatory Electrocardiograms with Mixed Quality Labeling Using Neural Networks [69.25956542388653]
Deep learning (DL) algorithms are gaining traction in academic and industrial settings.
We demonstrate that DL can be successfully applied to low-interpretative tasks by embedding ECG detection and delineation into a segmentation framework.
The model was trained using PhysioNet's QT database, comprised of 105 ambulatory ECG recordings.
arXiv Detail & Related papers (2020-05-11T16:29:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.