Data-Driven Fire Modeling: Learning First Arrival Times and Model Parameters with Neural Networks
- URL: http://arxiv.org/abs/2408.10271v1
- Date: Fri, 16 Aug 2024 19:54:41 GMT
- Title: Data-Driven Fire Modeling: Learning First Arrival Times and Model Parameters with Neural Networks
- Authors: Xin Tong, Bryan Quaife
- Abstract summary: We investigate the ability of neural networks to parameterize dynamics in fire science.
In particular, we investigate neural networks that map five key parameters in fire spread to the first arrival time.
For the inverse problem, we quantify the network's sensitivity in estimating each of the key parameters.
- Score: 12.416949154231714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven techniques are being increasingly applied to complement physics-based models in fire science. However, the lack of sufficiently large datasets continues to hinder the application of certain machine learning techniques. In this paper, we use simulated data to investigate the ability of neural networks to parameterize dynamics in fire science. In particular, we investigate neural networks that map five key parameters in fire spread to the first arrival time, and the corresponding inverse problem. By using simulated data, we are able to characterize the error, the required dataset size, and the convergence properties of these neural networks. For the inverse problem, we quantify the network's sensitivity in estimating each of the key parameters. The findings demonstrate the potential of machine learning in fire science, highlight the challenges associated with limited dataset sizes, and quantify the sensitivity of neural networks to estimate key parameters governing fire spread dynamics.
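To make the setup concrete, the following is a minimal sketch (not the authors' code) of such a forward surrogate in PyTorch: a small fully connected network that maps five hypothetical fire-spread parameters to a first-arrival-time field on a coarse grid. The parameter count, grid resolution, architecture, and placeholder training data are illustrative assumptions; the paper's actual parameters, simulator, and network design may differ. The inverse problem would swap inputs and outputs, training a network to recover the five parameters from an arrival-time map.

```python
# Minimal sketch of a forward surrogate: five fire-spread parameters
# (hypothetical, e.g. wind speed, wind direction, fuel moisture, fuel load,
# slope) mapped to a first-arrival-time field sampled on a coarse grid.
import torch
import torch.nn as nn

GRID = 32  # assumed resolution of the arrival-time field

class ForwardSurrogate(nn.Module):
    def __init__(self, n_params: int = 5, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, GRID * GRID),  # arrival time at each grid cell
        )

    def forward(self, p: torch.Tensor) -> torch.Tensor:
        return self.net(p).view(-1, GRID, GRID)

# Training-loop sketch on simulated (parameters, arrival-time) pairs;
# the random tensors below stand in for data from a fire-spread simulator.
model = ForwardSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

params = torch.rand(256, 5)            # placeholder simulated parameter sets
arrival = torch.rand(256, GRID, GRID)  # placeholder simulated arrival times

for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(params), arrival)
    loss.backward()
    opt.step()
```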
Related papers
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z) - Persistence-based operators in machine learning [62.997667081978825]
We introduce a class of persistence-based neural network layers.
Persistence-based layers allow the users to easily inject knowledge about symmetries respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
arXiv Detail & Related papers (2022-12-28T18:03:41Z) - Robustness of Physics-Informed Neural Networks to Noise in Sensor Data [0.0]
Physics-informed neural networks (PINNs) have been shown to be an effective way of incorporating physics-based domain knowledge into neural network models.
In this work, we conduct a preliminary investigation of the robustness of physics-informed neural networks to the magnitude of noise in the data.
arXiv Detail & Related papers (2022-11-22T06:24:43Z) - Graph Neural Networks with Trainable Adjacency Matrices for Fault Diagnosis on Multivariate Sensor Data [69.25738064847175]
It is necessary to consider the behavior of the signals from each sensor separately, and to take into account their correlations and hidden relationships with one another.
The graph nodes can be represented as data from the different sensors, and the edges can display the influence of these data on each other.
It was proposed to construct the graph during the training of the graph neural network, which makes it possible to train models on data where the dependencies between the sensors are not known in advance.
arXiv Detail & Related papers (2022-10-20T11:03:21Z) - Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - Scalable algorithms for physics-informed neural and graph networks [0.6882042556551611]
Physics-informed machine learning (PIML) has emerged as a promising new approach for simulating complex physical and biological systems.
In PIML, we can train such networks from additional information obtained by employing the physical laws and evaluating them at random points in the space-time domain.
We review some of the prevailing trends in embedding physics into machine learning, using physics-informed neural networks (PINNs) based primarily on feed-forward neural networks and automatic differentiation; a minimal illustrative sketch of this idea appears after the list below.
arXiv Detail & Related papers (2022-05-16T15:46:11Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAIN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Mean-Field and Kinetic Descriptions of Neural Differential Equations [0.0]
In this work we focus on a particular class of neural networks, namely residual neural networks.
We analyze steady states and sensitivity with respect to the parameters of the network, namely the weights and the bias.
A modification of the microscopic dynamics, inspired by residual neural networks, leads to a Fokker-Planck formulation of the network.
arXiv Detail & Related papers (2020-01-07T13:41:27Z)
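As referenced in the "Scalable algorithms for physics-informed neural and graph networks" entry above, the following is a minimal, generic illustration of the PINN idea, not taken from any of the cited papers: a feed-forward network u(t, x) is trained so that its automatic-differentiation derivatives satisfy a one-dimensional advection equation at randomly sampled space-time collocation points. The choice of PDE, network size, and training details are assumptions made for illustration; a practical PINN would also add data and initial/boundary-condition terms to the loss.

```python
# Generic PINN illustration (assumed example, not from the cited works):
# penalize violations of the 1D advection equation u_t + c*u_x = 0 at
# random collocation points, using automatic differentiation.
import torch
import torch.nn as nn

c = 1.0  # assumed constant advection speed

net = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def physics_residual(tx: torch.Tensor) -> torch.Tensor:
    # tx has columns (t, x); differentiate the network output w.r.t. both.
    tx = tx.requires_grad_(True)
    u = net(tx)
    grads = torch.autograd.grad(u, tx, torch.ones_like(u), create_graph=True)[0]
    u_t, u_x = grads[:, 0:1], grads[:, 1:2]
    return u_t + c * u_x  # zero wherever the PDE is satisfied

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(10):
    tx = torch.rand(512, 2)  # random space-time collocation points in [0, 1]^2
    opt.zero_grad()
    # Physics loss only; a full PINN would also fit data and boundary terms,
    # otherwise a constant function trivially minimizes this residual.
    loss = physics_residual(tx).pow(2).mean()
    loss.backward()
    opt.step()
```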