Model Based and Physics Informed Deep Learning Neural Network Structures
- URL: http://arxiv.org/abs/2408.07104v1
- Date: Tue, 13 Aug 2024 07:28:38 GMT
- Title: Model Based and Physics Informed Deep Learning Neural Network Structures
- Authors: Ali Mohammad-Djafari, Ning Chu, Li Wang, Caifang Cai, Liang Yu
- Abstract summary: Neural networks (NNs) have been used in many areas with great success.
One of the great difficulties is the choice of the NN's structure.
We consider this problem using model-based signal and image processing and inverse problems methods.
- Score: 7.095119621199481
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks (NNs) have been used in many areas with great success. When an NN's structure (Model) is given, the parameters of the model are determined during the training step using an appropriate criterion and an optimization algorithm (Training). The trained model can then be used for the prediction or inference step (Testing). As there are also many hyperparameters, related to the optimization criteria and optimization algorithms, a validation step is necessary before its final use. One of the great difficulties is the choice of the NN's structure. Even if there are many off-the-shelf networks, selecting or proposing an appropriate network for a given data, signal, or image processing task is still an open problem. In this work, we consider this problem using model-based signal and image processing and inverse problems methods. We classify the methods into five classes, based on: i) explicit analytical solutions, ii) transform domain decomposition, iii) operator decomposition, iv) optimization algorithm unfolding, and v) physics-informed NN (PINN) methods. A few examples in each category are explained.
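To make two of these categories concrete, here are minimal sketches in PyTorch. The first illustrates category v): a physics-informed loss enforcing the toy ODE u'(t) = -u(t) with u(0) = 1; the ODE, the architecture, and all hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Minimal PINN sketch (category v): fit u(t) satisfying u'(t) = -u(t), u(0) = 1.
# The ODE, architecture, and hyperparameters are illustrative assumptions.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    t = torch.rand(128, 1, requires_grad=True)           # collocation points in [0, 1]
    u = net(t)
    du = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    residual = du + u                                    # enforce u'(t) + u(t) = 0
    u0 = net(torch.zeros(1, 1))                          # enforce u(0) = 1
    loss = (residual ** 2).mean() + (u0 - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.ones(1, 1)).item())  # should approach exp(-1) ≈ 0.368
```

The second illustrates category iv): a fixed number of ISTA iterations for sparse recovery, written as a network whose per-iteration step sizes and soft thresholds are trained end-to-end; again, all sizes and training details are assumptions.

```python
# Minimal unfolding sketch (category iv): K ISTA iterations with learnable
# step sizes and thresholds. All sizes and data are illustrative assumptions.
import torch

torch.manual_seed(0)
m, n, K = 20, 40, 10
A = torch.randn(m, n) / m ** 0.5

class UnfoldedISTA(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.step = torch.nn.Parameter(torch.full((K,), 0.1))
        self.thresh = torch.nn.Parameter(torch.full((K,), 0.05))
    def forward(self, y):
        x = torch.zeros(y.shape[0], n)
        for k in range(K):                               # one "layer" per iteration
            r = x - self.step[k] * (x @ A.T - y) @ A     # gradient step on the data fit
            x = torch.sign(r) * torch.relu(r.abs() - self.thresh[k])  # soft threshold
        return x

net = UnfoldedISTA()
x_true = torch.zeros(64, n); x_true[:, :3] = torch.randn(64, 3)  # sparse ground truth
y = x_true @ A.T + 0.01 * torch.randn(64, m)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(500):
    loss = ((net(y) - x_true) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```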
Related papers
- The Unreasonable Effectiveness of Solving Inverse Problems with Neural Networks [24.766470360665647]
We show that neural networks trained to learn solutions to inverse problems can find better solutions than classical methods, even on their training set.
Our findings suggest an alternative use for neural networks: rather than generalizing to new data for fast inference, they can also be used to find better solutions on known data.
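A minimal sketch of this "better solutions on known data" idea, assuming a fixed linear forward operator and a single known measurement; the network acts as a reparameterized solver for that one instance rather than as a generalizing predictor. All details are illustrative, not the paper's setup.

```python
# Sketch of using an NN as a reparameterized solver for one known inverse
# problem instance y = A x + noise; operator, sizes, and training setup are
# illustrative assumptions, not the paper's experimental protocol.
import torch

torch.manual_seed(0)
A = torch.randn(50, 100)                     # underdetermined forward operator
x_true = torch.randn(100)
y = A @ x_true + 0.01 * torch.randn(50)      # fixed, known measurement

net = torch.nn.Sequential(torch.nn.Linear(50, 128), torch.nn.ReLU(),
                          torch.nn.Linear(128, 100))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):                     # "training" = searching for a good x
    x_hat = net(y)
    loss = ((A @ x_hat - y) ** 2).mean()     # data fit on the known instance itself
    opt.zero_grad(); loss.backward(); opt.step()

print(((A @ net(y) - y) ** 2).mean().item())  # residual on the known instance
```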
arXiv Detail & Related papers (2024-08-15T12:38:10Z) - Differentiable Visual Computing for Inverse Problems and Machine Learning [27.45555082573493]
Visual computing methods are used to analyze geometry, physically simulate solids, fluids, and other media, and render the world via optical techniques.
Deep learning (DL) allows for the construction of general algorithmic models, sidestepping the need for a purely first-principles-based approach to problem solving.
DL is powered by highly parameterized neural network architectures -- universal function approximators -- and gradient-based search algorithms.
arXiv Detail & Related papers (2023-11-21T23:02:58Z) - On Using Deep Learning Proxies as Forward Models in Deep Learning Problems [5.478764356647437]
Recent works have demonstrated that physics models can be approximated with neural networks.
We demonstrate that neural network approximations (NN-proxies) of such functions can lead to erroneous results.
In particular, we study the behavior of particle swarm optimization and genetic algorithm methods.
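A minimal sketch of how optimizing over an NN-proxy can go wrong: a small surrogate is fitted to a known function on a limited range and then maximized, so the optimizer can wander into regions where the proxy, not the true function, is large. Gradient ascent stands in here for the paper's particle swarm and genetic algorithm searches, and all settings are illustrative assumptions.

```python
# Sketch of the proxy-optimization pitfall: optimizing an NN surrogate can
# exploit its approximation error. Target function and settings are
# illustrative assumptions.
import torch

torch.manual_seed(0)
f = lambda x: torch.sin(3.0 * x) * torch.exp(-x ** 2)   # "true physics", max near x ≈ 0.43

# Fit a small proxy, but only on samples from [-2, 0].
xs = torch.linspace(-2.0, 0.0, 20).unsqueeze(1)
proxy = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(proxy.parameters(), lr=1e-2)
for _ in range(2000):
    loss = ((proxy(xs) - f(xs)) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Maximize the *proxy*; beyond x = 0 it is pure extrapolation.
x = torch.zeros(1, 1, requires_grad=True)
x_opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(500):
    neg = -proxy(x).sum()
    x_opt.zero_grad(); neg.backward(); x_opt.step()

# Audit the candidate with the true function: proxy and truth can disagree.
print("candidate:", x.item(), "proxy:", proxy(x).item(), "true:", f(x.detach()).item())
```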
arXiv Detail & Related papers (2023-01-16T04:54:12Z) - AskewSGD: An Annealed Interval-Constrained Optimisation Method to Train Quantized Neural Networks [12.229154524476405]
We develop a new algorithm, Annealed Skewed SGD - AskewSGD - for training deep neural networks (DNNs) with quantized weights.
Unlike algorithms with active sets and feasible directions, AskewSGD avoids projections or optimization under the entire feasible set.
Experimental results show that the AskewSGD algorithm performs better than or on par with state-of-the-art methods on classical benchmarks.
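For context, here is a generic quantization-aware training sketch using a straight-through estimator; this is not AskewSGD itself, only a common baseline for training with quantized (here, ternary) weights, and every detail is an illustrative assumption.

```python
# Generic quantization-aware training via a straight-through estimator (STE).
# This is NOT AskewSGD; it is a common baseline sketch, with all details
# being illustrative assumptions.
import torch

def quantize_ste(w, levels=torch.tensor([-1.0, 0.0, 1.0])):
    # Round each weight to the nearest level; gradients pass straight through.
    idx = (w.unsqueeze(-1) - levels).abs().argmin(-1)
    wq = levels[idx]
    return w + (wq - w).detach()

torch.manual_seed(0)
W = torch.randn(10, 10, requires_grad=True)   # full-precision "shadow" weights
x = torch.randn(64, 10)
y = torch.randn(64, 10)
opt = torch.optim.SGD([W], lr=1e-1)
for _ in range(200):
    loss = ((x @ quantize_ste(W) - y) ** 2).mean()  # forward pass uses quantized W
    opt.zero_grad(); loss.backward(); opt.step()

print(quantize_ste(W).unique())  # weights used in the forward pass are ternary
```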
arXiv Detail & Related papers (2022-11-07T18:13:44Z) - Optimization-Informed Neural Networks [0.6853165736531939]
We propose optimization-informed neural networks (OINN) to solve constrained nonlinear programming (CNLP) problems.
In a nutshell, OINN transforms a CNLP into a neural network training problem.
The effectiveness of the proposed approach is demonstrated through a collection of classical problems.
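A minimal sketch of the general idea of turning a constrained problem into network training, using a quadratic penalty on a toy equality-constrained program; this generic penalty formulation is an assumption for illustration and is not the OINN algorithm itself.

```python
# Sketch of casting a constrained nonlinear program as NN training via a
# quadratic penalty; this generic formulation is an illustrative assumption.
# Problem: minimize (x1 - 1)^2 + (x2 - 2)^2  subject to  x1 + x2 = 1.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 2))
z = torch.ones(1, 1)             # fixed input; the net's output is the decision vector
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
rho = 100.0                      # penalty weight

for _ in range(3000):
    x = net(z).squeeze(0)
    objective = (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
    penalty = rho * (x[0] + x[1] - 1.0) ** 2      # soft equality constraint
    loss = objective + penalty
    opt.zero_grad(); loss.backward(); opt.step()

print(net(z).detach())  # should approach the optimum (0, 1)
```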
arXiv Detail & Related papers (2022-10-05T09:28:55Z) - Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis of the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
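A minimal sketch of a meta-learning training loop in the Reptile style, on a toy sine-regression task family; the task distribution, the choice of Reptile, and all hyperparameters are illustrative assumptions and do not reproduce the paper's OPF setup.

```python
# Generic Reptile-style meta-learning sketch on a toy sine-regression task
# family; all details are illustrative assumptions, not the paper's setup.
import copy, torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 40), torch.nn.Tanh(), torch.nn.Linear(40, 1))

def sample_task():
    amp, phase = torch.rand(1) * 4 + 1, torch.rand(1) * 3
    return lambda x: amp * torch.sin(x + phase)

for meta_step in range(1000):
    task = sample_task()
    fast = copy.deepcopy(net)                  # inner loop: adapt a copy to one task
    inner_opt = torch.optim.SGD(fast.parameters(), lr=1e-2)
    for _ in range(5):
        x = torch.rand(10, 1) * 10 - 5
        loss = ((fast(x) - task(x)) ** 2).mean()
        inner_opt.zero_grad(); loss.backward(); inner_opt.step()
    # Outer (Reptile) update: move meta-weights toward the adapted weights.
    with torch.no_grad():
        for p, q in zip(net.parameters(), fast.parameters()):
            p += 0.1 * (q - p)
```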
arXiv Detail & Related papers (2020-12-21T17:39:51Z) - Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of the evaluated tasks across four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
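A toy sketch of such a min-max formulation: two small networks are trained by alternating gradient ascent (on the test function) and descent (on the estimator). The data-generating process and the specific moment objective are illustrative assumptions, not the paper's SEM estimator.

```python
# Toy min-max (adversarial) estimation sketch: both players are NNs trained
# by alternating gradient steps. Data and objective are illustrative
# assumptions, not the paper's SEM setup.
import torch

torch.manual_seed(0)
x = torch.rand(512, 1) * 2 - 1
y = 2.0 * x + 0.1 * torch.randn(512, 1)        # toy structural relation y = 2x + noise

f = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
g = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

for step in range(3000):
    # Objective: min_f max_g E[(y - f(x)) g(x)] - 0.5 E[g(x)^2]
    value = ((y - f(x)) * g(x)).mean() - 0.5 * (g(x) ** 2).mean()
    opt_g.zero_grad(); (-value).backward(); opt_g.step()   # ascent step for g
    value = ((y - f(x)) * g(x)).mean() - 0.5 * (g(x) ** 2).mean()
    opt_f.zero_grad(); value.backward(); opt_f.step()      # descent step for f

print(f(torch.tensor([[0.5]])).item())  # should approach 2 * 0.5 = 1
```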
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a deep neural network (DNN) with finite element method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
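A minimal sketch of the surrogate-in-the-loop pattern this describes: fit a DNN surrogate to all evaluations so far, minimize it to propose the next design, evaluate that design with the expensive solver, and repeat. A cheap analytic function stands in for the FEM calculations, and all settings are illustrative assumptions.

```python
# Generic self-directed online learning loop with a DNN surrogate. A cheap
# analytic function stands in for the paper's FEM calculations; all settings
# are illustrative assumptions.
import torch

torch.manual_seed(0)
expensive_eval = lambda d: ((d - 0.3) ** 2).sum(-1, keepdim=True)  # stand-in for FEM

designs = torch.rand(8, 2)                        # initial random designs
values = expensive_eval(designs)

surrogate = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

for rounds in range(20):
    # 1) Fit the surrogate to all evaluations so far.
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
    for _ in range(300):
        loss = ((surrogate(designs) - values) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    # 2) Minimize the surrogate to propose the next design.
    d = designs[values.argmin()].clone().requires_grad_(True)
    d_opt = torch.optim.Adam([d], lr=5e-2)
    for _ in range(100):
        s = surrogate(d.unsqueeze(0)).sum()
        d_opt.zero_grad(); s.backward(); d_opt.step()
    # 3) Evaluate with the expensive solver and grow the dataset.
    d_new = d.detach().unsqueeze(0)
    designs = torch.cat([designs, d_new]); values = torch.cat([values, expensive_eval(d_new)])

print(designs[values.argmin()], values.min().item())  # should approach (0.3, 0.3), 0
```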
arXiv Detail & Related papers (2020-02-04T20:00:28Z)