NeuralUQ: A comprehensive library for uncertainty quantification in
neural differential equations and operators
- URL: http://arxiv.org/abs/2208.11866v1
- Date: Thu, 25 Aug 2022 04:28:18 GMT
- Title: NeuralUQ: A comprehensive library for uncertainty quantification in
neural differential equations and operators
- Authors: Zongren Zou, Xuhui Meng, Apostolos F Psaros, and George Em Karniadakis
- Abstract summary: Uncertainty quantification (UQ) in machine learning is currently drawing increasing research interest.
We present an open-source Python library, termed NeuralUQ, for employing UQ methods for SciML in a convenient and structured manner.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty quantification (UQ) in machine learning is currently drawing
increasing research interest, driven by the rapid deployment of deep neural
networks across different fields, such as computer vision and natural language
processing, and by the need for reliable tools in risk-sensitive applications.
Recently, various machine learning models have also been developed to tackle
problems in the field of scientific computing with applications to
computational science and engineering (CSE). Physics-informed neural networks
and deep operator networks are two such models for solving partial differential
equations and learning operator mappings, respectively. In this regard, a
comprehensive study of UQ methods tailored specifically for scientific machine
learning (SciML) models has been provided in [45]. Nevertheless, and despite
their theoretical merit, implementations of these methods are not
straightforward, especially in large-scale CSE applications, hindering their
broad adoption in both research and industry settings. In this paper, we
present an open-source Python library (https://github.com/Crunch-UQ4MI), termed
NeuralUQ and accompanied by an educational tutorial, for employing UQ methods
for SciML in a convenient and structured manner. The library, designed for both
educational and research purposes, supports multiple modern UQ methods and
SciML models. It is based on a succinct workflow and facilitates flexible
employment and easy extensions by the users. We first present a tutorial of
NeuralUQ and subsequently demonstrate its applicability and efficiency in four
diverse examples, involving dynamical systems and high-dimensional parametric
and time-dependent PDEs.
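NeuralUQ's own API is not reproduced in this summary; as a hedged illustration of the kind of workflow the library streamlines, the sketch below quantifies epistemic uncertainty for a physics-informed network with a deep ensemble (one of several UQ methods the paper covers) on a toy ODE. All names here are illustrative, not the library's interface.
```python
# Illustrative sketch only: a deep-ensemble PINN for the toy ODE
# u'(t) = -u(t), u(0) = 1 on [0, 1]. NeuralUQ wraps workflows like
# this behind a unified interface; none of these names are its API.
import torch

def make_net():
    return torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )

def pinn_loss(net, t_col):
    t = t_col.clone().requires_grad_(True)
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    residual = du + u                      # enforce u' = -u
    ic = net(torch.zeros(1, 1)) - 1.0      # enforce u(0) = 1
    return (residual ** 2).mean() + (ic ** 2).mean()

t_col = torch.linspace(0, 1, 64).reshape(-1, 1)
ensemble = []
for seed in range(5):                      # 5 independently trained members
    torch.manual_seed(seed)
    net = make_net()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        pinn_loss(net, t_col).backward()
        opt.step()
    ensemble.append(net)

# Epistemic uncertainty from the ensemble spread.
with torch.no_grad():
    preds = torch.stack([net(t_col) for net in ensemble])
mean, std = preds.mean(0), preds.std(0)    # compare mean to exp(-t)
```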
Related papers
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
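A hedged toy version of that reduction (not the paper's NeuRLP solver): discretize a linear ODE with forward Euler and minimize the L1 norm of the step residuals as a linear program.
```python
# Toy reduction of a linear ODE to a linear program (not NeuRLP):
# solve y' = a*y, y(0) = 1 by minimizing the L1 norm of the
# forward-Euler residuals subject to the initial condition.
import numpy as np
from scipy.optimize import linprog

a, N, h = -1.0, 50, 1.0 / 50
n = N + 1                         # unknowns: y_0..y_N, then slacks t_0..t_{N-1}

# Objective: minimize the sum of slacks (y variables cost nothing).
c = np.concatenate([np.zeros(n), np.ones(N)])

# |y_{i+1} - (1 + h*a) * y_i| <= t_i, written as two inequality rows each.
A_ub = np.zeros((2 * N, n + N))
for i in range(N):
    A_ub[2 * i, i], A_ub[2 * i, i + 1], A_ub[2 * i, n + i] = -(1 + h * a), 1, -1
    A_ub[2 * i + 1, i], A_ub[2 * i + 1, i + 1], A_ub[2 * i + 1, n + i] = (1 + h * a), -1, -1
b_ub = np.zeros(2 * N)

# Equality constraint: y_0 = 1.
A_eq = np.zeros((1, n + N)); A_eq[0, 0] = 1.0
b_eq = np.array([1.0])

bounds = [(None, None)] * n + [(0, None)] * N   # y free, slacks nonnegative
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
y = res.x[:n]                     # y approximately follows exp(a * t)
```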
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Evaluation of machine learning architectures on the quantification of epistemic and aleatoric uncertainties in complex dynamical systems [0.0]
Uncertainty Quantification (UQ) is a self-assessed estimate of the model error.
We examine several machine learning techniques, including Gaussian processes and a family of UQ-augmented neural networks.
We evaluate UQ accuracy (distinct from model accuracy) using two metrics: the distribution of normalized residuals on validation data, and the distribution of estimated uncertainties.
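A minimal sketch of the first metric, assuming Gaussian predictive uncertainties: standardize validation residuals by the predicted sigmas and compare them to a unit normal (variable names illustrative).
```python
# Hedged sketch of the normalized-residual UQ check: if predicted
# sigmas are calibrated, z = (y - mu) / sigma should be ~ N(0, 1).
import numpy as np
from scipy import stats

def uq_calibration_check(y_true, mu_pred, sigma_pred):
    z = (y_true - mu_pred) / sigma_pred
    ks_stat, p_value = stats.kstest(z, "norm")   # compare to N(0, 1)
    return z.mean(), z.std(), ks_stat, p_value

# Example with synthetic, well-calibrated predictions.
rng = np.random.default_rng(0)
mu, sigma = rng.normal(size=1000), np.full(1000, 0.5)
y = mu + sigma * rng.normal(size=1000)
print(uq_calibration_check(y, mu, sigma))
```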
arXiv Detail & Related papers (2023-06-27T02:35:25Z)
- Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z)
- SPAIC: A Spike-based Artificial Intelligence Computing Framework [22.133585707508963]
We present a Python-based spiking neural network (SNN) simulation and training framework, SPAIC.
It aims to support research on brain-inspired models and algorithms, integrating features from both deep learning and neuroscience.
We provide a range of examples including neural circuits, deep SNN learning and neuromorphic applications.
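SPAIC's API is not shown here; as a generic flavor of the dynamics an SNN simulator steps through, a minimal leaky integrate-and-fire neuron in NumPy:
```python
# Minimal leaky integrate-and-fire (LIF) neuron in NumPy; a generic
# SNN building block, not SPAIC's API.
import numpy as np

def lif(input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for i_t in input_current:
        v += dt / tau * (-v + i_t)      # leaky integration
        if v >= v_thresh:               # fire and reset
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

spike_train = lif(np.full(100, 1.2))    # constant drive -> regular spiking
print(spike_train.sum(), "spikes in 100 steps")
```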
arXiv Detail & Related papers (2022-07-26T08:57:42Z)
- Neural Network Quantization with AI Model Efficiency Toolkit (AIMET) [15.439669159557253]
We present an overview of neural network quantization using the AI Model Efficiency Toolkit (AIMET).
AIMET is a library of state-of-the-art quantization and compression algorithms designed to ease the effort required for model optimization.
We provide a practical guide to quantization via AIMET, covering post-training quantization (PTQ) and quantization-aware training (QAT), with code examples and practical tips.
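AIMET's API is not reproduced here; the sketch below only shows the core arithmetic of symmetric int8 post-training quantization that such toolkits automate.
```python
# Generic symmetric int8 post-training quantization of a weight
# tensor (illustrative arithmetic only; not AIMET's API).
import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0          # map max magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()  # error is bounded by ~scale/2
print(f"max abs error: {err:.4f}, scale: {scale:.4f}")
```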
arXiv Detail & Related papers (2022-01-20T20:35:37Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for mean-field control (MFC).
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks based on Fisher embeddings.
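BAIT's batch selection optimizes a Fisher-information objective; the sketch below is a deliberate simplification (a trace heuristic over last-layer Fisher embeddings of a logistic model), not the paper's algorithm.
```python
# Crude active-learning heuristic with last-layer Fisher embeddings
# for a binary logistic model; a simplification, not BAIT itself.
import numpy as np

def fisher_scores(X, w):
    p = 1.0 / (1.0 + np.exp(-X @ w))             # predicted probabilities
    # The per-sample Fisher contribution is p(1-p) * x x^T; its trace,
    # p(1-p) * ||x||^2, serves here as a cheap acquisition score.
    return p * (1 - p) * np.sum(X * X, axis=1)

def select_batch(X, w, k):
    return np.argsort(-fisher_scores(X, w))[:k]  # top-k most informative

rng = np.random.default_rng(0)
X_pool, w = rng.normal(size=(500, 10)), rng.normal(size=10)
print(select_batch(X_pool, w, k=8))              # indices to label next
```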
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Understanding Neural Code Intelligence Through Program Simplification [3.9704927572880253]
We propose a model-agnostic approach to identify critical input features for models in code intelligence systems.
Our approach, SIVAND, uses simplification techniques that reduce the size of the input programs fed to a CI model.
We believe that SIVAND's extracted features may help understand neural CI systems' predictions and learned behavior.
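A generic flavor of such reduction (not SIVAND's implementation), in the spirit of delta debugging: greedily drop lines while the model's prediction is preserved.
```python
# Generic greedy input reduction (not SIVAND's implementation):
# drop program lines one at a time, keeping a deletion only if the
# model's prediction is unchanged.
def simplify(lines, predict):
    target = predict(lines)
    i = 0
    while i < len(lines):
        candidate = lines[:i] + lines[i + 1:]
        if candidate and predict(candidate) == target:
            lines = candidate            # deletion preserved the prediction
        else:
            i += 1                       # keep this line, try the next
    return lines

# Toy "model" that only looks at whether a division appears.
predict = lambda ls: "div" if any("/" in l for l in ls) else "other"
program = ["x = 1", "y = 2", "z = x / y", "print(z)"]
print(simplify(program, predict))        # minimal lines keeping "div"
```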
arXiv Detail & Related papers (2021-06-07T05:44:29Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
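One common hybrid pattern in this line of work is deep unrolling, where iterations of a model-based algorithm become trainable layers; a minimal hedged sketch for unrolled gradient descent on least squares (illustrative, not the survey's code):
```python
# Minimal deep-unrolling sketch: K gradient-descent steps on
# ||Ax - b||^2 with a learned step size per iteration, a generic
# hybrid of a principled model and data-driven training.
import torch

class UnrolledGD(torch.nn.Module):
    def __init__(self, K=10):
        super().__init__()
        self.steps = torch.nn.Parameter(0.01 * torch.ones(K))  # learned step sizes

    def forward(self, A, b):
        x = torch.zeros(A.shape[1])
        for mu in self.steps:            # each unrolled iteration is a "layer"
            x = x - mu * A.T @ (A @ x - b)
        return x

A = torch.randn(20, 5)
x_true = torch.randn(5)
b = A @ x_true
model = UnrolledGD()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                     # train the step sizes end to end
    opt.zero_grad()
    loss = torch.mean((model(A, b) - x_true) ** 2)
    loss.backward()
    opt.step()
```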
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions, as well as state-of-the-art numerical solvers such as spectral solvers.
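For reference, the standard mesh-free PINN training objective over collocation points takes the form below (notation ours, not the paper's):
```latex
\mathcal{L}(\theta) =
\frac{1}{N_r}\sum_{i=1}^{N_r}\big|\mathcal{N}[u_\theta](x_i) - f(x_i)\big|^2
+ \frac{1}{N_b}\sum_{j=1}^{N_b}\big|\mathcal{B}[u_\theta](x_j) - g(x_j)\big|^2
```
Here $\mathcal{N}$ is the differential operator, $\mathcal{B}$ the boundary operator, $u_\theta$ the network, and the $x_i$, $x_j$ are collocation points sampled in the domain interior and on the boundary, respectively; no mesh or manual discretization is required.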
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.