Physics-informed neural networks for pathloss prediction
- URL: http://arxiv.org/abs/2211.12986v2
- Date: Thu, 14 Dec 2023 12:36:21 GMT
- Title: Physics-informed neural networks for pathloss prediction
- Authors: Steffen Limmer, Alberto Martinez Alba, Nicola Michailow
- Abstract summary: It is shown that the solution to the proposed learning problem improves generalization and prediction quality with a small number of neural network layers and parameters.
The physics-informed formulation allows training and prediction with a small amount of training data, which makes it appealing for a wide range of practical pathloss prediction scenarios.
- Score: 0.9208007322096533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a physics-informed machine learning approach for
pathloss prediction. This is achieved by simultaneously including in the training
phase (i) physical dependencies of the spatial loss field and (ii) measured
pathloss values in the field. It is shown that the solution to the proposed
learning problem improves generalization and prediction quality with a small
number of neural network layers and parameters. The latter leads to fast
inference times, which are favorable for downstream tasks such as localization.
Moreover, the physics-informed formulation allows training and prediction with a
small amount of training data, which makes it appealing for a wide range of
practical pathloss prediction scenarios.
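The abstract describes the standard physics-informed training recipe: a single objective that combines (ii) a data-mismatch term on measured pathloss samples with (i) a penalty on violations of a physical relation over the spatial loss field. The sketch below illustrates that recipe in PyTorch under explicit assumptions; the network size, the free-space log-distance slope used as the physics term, the synthetic measurements, and all constants are illustrative stand-ins, not the paper's actual model.

```python
import torch
import torch.nn as nn

class PathlossNet(nn.Module):
    """Small MLP mapping 2-D receiver coordinates to a pathloss value in dB."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xy):
        return self.net(xy)

def physics_residual(model, xy, tx_xy, exponent=2.0):
    """Deviation of the predicted radial pathloss slope from an assumed
    log-distance trend; a stand-in for the paper's physical dependency."""
    xy = xy.clone().requires_grad_(True)
    pl = model(xy)
    grad = torch.autograd.grad(pl.sum(), xy, create_graph=True)[0]
    diff = xy - tx_xy                                   # transmitter -> receiver vectors
    dist = diff.norm(dim=1, keepdim=True).clamp_min(1e-3)
    radial_slope = (grad * diff / dist).sum(dim=1, keepdim=True)
    target_slope = 10.0 * exponent / (dist * torch.log(torch.tensor(10.0)))
    return radial_slope - target_slope

# Synthetic stand-ins for field measurements (hypothetical data, not the paper's).
tx_xy = torch.zeros(1, 2)                               # transmitter at the origin
xy_meas = torch.rand(64, 2) * 100.0                     # measured receiver locations
pl_meas = 40.0 + 20.0 * torch.log10(xy_meas.norm(dim=1, keepdim=True).clamp_min(1.0))
xy_coll = torch.rand(256, 2) * 100.0                    # collocation points for the physics term

model = PathlossNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1                                               # assumed weight on the physics loss
for step in range(2000):
    opt.zero_grad()
    loss_data = ((model(xy_meas) - pl_meas) ** 2).mean()               # (ii) measured pathloss
    loss_phys = (physics_residual(model, xy_coll, tx_xy) ** 2).mean()  # (i) physical dependency
    loss = loss_data + lam * loss_phys
    loss.backward()
    opt.step()
```

The physics term is evaluated on separate collocation points, so the constraint can regularize the field even where no measurements exist, which is what makes training with only a small amount of measured data plausible.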
Related papers
- Self-adaptive weights based on balanced residual decay rate for physics-informed neural networks and deep operator networks [1.0562108865927007]
Physics-informed deep learning has emerged as a promising alternative for solving partial differential equations.
For complex problems, training these networks can still be challenging, often resulting in unsatisfactory accuracy and efficiency.
We propose a point-wise adaptive weighting method that balances the residual decay rate across different training points.
arXiv Detail & Related papers (2024-06-28T00:53:48Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Learning from Predictions: Fusing Training and Autoregressive Inference for Long-Term Spatiotemporal Forecasts [4.068387278512612]
We propose the Scheduled Autoregressive BPTT (BPTT-SA) algorithm for predicting complex systems.
Our results show that BPTT-SA effectively reduces iterative error propagation in Convolutional RNNs and Convolutional Autoencoder RNNs.
arXiv Detail & Related papers (2023-02-22T02:46:54Z)
- Physics-informed neural networks for gravity currents reconstruction from limited data [0.0]
The present work investigates the use of physics-informed neural networks (PINNs) for the 3D reconstruction of unsteady gravity currents from limited data.
In the PINN context, the flow fields are reconstructed by training a neural network whose objective function penalizes the mismatch between the network predictions and the observed data.
arXiv Detail & Related papers (2022-11-03T11:27:29Z)
- Physics-informed neural networks for diffraction tomography [0.1199955563466263]
We propose a physics-informed neural network as the forward model for tomographic reconstructions of biological samples.
By training this network with the Helmholtz equation as a physical loss, we can predict the scattered field accurately.
arXiv Detail & Related papers (2022-07-28T16:56:50Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Physics informed neural networks for continuum micromechanics [68.8204255655161]
Recently, physics informed neural networks have successfully been applied to a broad variety of problems in applied mathematics and engineering.
Due to their global approximation, physics-informed neural networks have difficulty resolving localized effects and strongly non-linear solutions by optimization.
It is shown that the domain decomposition approach is able to accurately resolve nonlinear stress, displacement and energy fields in heterogeneous microstructures obtained from real-world $\mu$CT-scans.
arXiv Detail & Related papers (2021-10-14T14:05:19Z)
- Predicting Training Time Without Training [120.92623395389255]
We tackle the problem of predicting the number of optimization steps that a pre-trained deep network needs to converge to a given value of the loss function.
We leverage the fact that the training dynamics of a deep network during fine-tuning are well approximated by those of a linearized model.
We are able to predict the time it takes to fine-tune a model to a given loss without having to perform any training.
arXiv Detail & Related papers (2020-08-28T04:29:54Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
- Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training [126.49572353148262]
We study two factors in neural network training: data parallelism and sparsity.
Despite their promising benefits, understanding of their effects on neural network training remains elusive.
arXiv Detail & Related papers (2020-03-25T10:49:22Z)
- Understanding and mitigating gradient pathologies in physics-informed neural networks [2.1485350418225244]
This work focuses on the effectiveness of physics-informed neural networks in predicting outcomes of physical systems and discovering hidden physics from noisy data.
We present a learning rate annealing algorithm that utilizes gradient statistics during model training to balance the interplay between different terms in composite loss functions.
We also propose a novel neural network architecture that is more resilient to such gradient pathologies.
arXiv Detail & Related papers (2020-01-13T21:23:49Z)
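The last entry above balances the terms of a composite physics-informed loss using gradient statistics gathered during training. Below is a minimal PyTorch sketch of that general idea; the moving-average update rule, the function names, and the choice of which term gets reweighted are illustrative assumptions, not the cited paper's exact annealing algorithm.

```python
import torch

def grad_stats(loss, params):
    """Max and mean absolute gradient of `loss` w.r.t. `params` (graph kept for the real backward pass)."""
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    flat = torch.cat([g.abs().flatten() for g in grads if g is not None])
    return flat.max(), flat.mean()

def update_balance_weight(lam, loss_phys, loss_data, params, alpha=0.1):
    """Moving-average weight for the data term, driven by the ratio of gradient magnitudes."""
    max_phys, _ = grad_stats(loss_phys, params)
    _, mean_data = grad_stats(loss_data, params)
    lam_hat = (max_phys / (mean_data + 1e-12)).detach()
    return (1.0 - alpha) * lam + alpha * lam_hat

# Usage inside a training step (loss_phys / loss_data as in the pathloss sketch above):
#   lam = update_balance_weight(lam, loss_phys, loss_data, list(model.parameters()))
#   loss = loss_phys + lam * loss_data
```

Rescaling one loss term from gradient statistics rather than hand-tuning a fixed weight is one way to keep the physics and data terms from dominating each other as training progresses.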