Wildfire Forecasting with Satellite Images and Deep Generative Model
- URL: http://arxiv.org/abs/2208.09411v2
- Date: Mon, 22 Aug 2022 13:30:15 GMT
- Title: Wildfire Forecasting with Satellite Images and Deep Generative Model
- Authors: Thai-Nam Hoang and Sang Truong and Chris Schmidt
- Abstract summary: We use a series of wildfire images as a video to anticipate how the fire would behave in the future.
We introduce a novel temporal model whose dynamics are driven in a latent space.
Results are compared against various benchmark models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Wildfire forecasting is one of the most critical tasks for
safeguarding human life. Wildfire prediction, however, is difficult because of
its stochastic and chaotic properties. We tackled the problem by interpreting a
series of wildfire images as a video and using it to anticipate how the fire
would behave in the future. However, creating video prediction models that account for the inherent
uncertainty of the future is challenging. The bulk of published attempts is
based on stochastic image-autoregressive recurrent networks, which raises
various performance and application difficulties, such as computational cost
and limited efficiency on massive datasets. Another possibility is to use
entirely latent temporal models that combine frame synthesis and temporal
dynamics. However, due to design and training issues, no such model for
stochastic video prediction has yet been proposed in the literature. This paper
addresses these issues by introducing a novel stochastic temporal model whose
dynamics are driven in a latent space. This design naturally models video
dynamics and allows our lighter, more interpretable latent model to outperform
previous state-of-the-art approaches on the GOES-16 dataset. Results are
compared against various benchmark models.
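The core idea in the abstract is that temporal dynamics evolve entirely in a latent space, decoupled from frame synthesis: each frame is encoded once, future latent states are rolled out with a stochastic residual update, and frames are decoded only when needed. The following is a minimal illustrative sketch of that pattern, not the authors' model; all dimensions, weights, and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, FRAME_SHAPE = 8, (4, 4)

# Hypothetical "trained" weights: a linear encoder/decoder pair and a
# small latent dynamics matrix. A real model would learn these.
W_enc = rng.standard_normal((LATENT_DIM, FRAME_SHAPE[0] * FRAME_SHAPE[1]))
W_dec = rng.standard_normal((FRAME_SHAPE[0] * FRAME_SHAPE[1], LATENT_DIM))
W_dyn = 0.1 * rng.standard_normal((LATENT_DIM, LATENT_DIM))

def encode(frame):
    # Map an observed frame to its latent state z_t.
    return W_enc @ frame.ravel()

def decode(z):
    # Map a latent state back to frame space (only called when a
    # frame is actually needed, never inside the dynamics loop).
    return (W_dec @ z).reshape(FRAME_SHAPE)

def step(z, noise_scale=0.05):
    # Stochastic residual latent update: z' = z + f(z) + noise.
    # The injected noise models the inherent uncertainty of the future.
    return z + np.tanh(W_dyn @ z) + noise_scale * rng.standard_normal(LATENT_DIM)

def rollout(first_frame, horizon):
    # Roll dynamics forward purely in latent space, decoding each state.
    z = encode(first_frame)
    frames = []
    for _ in range(horizon):
        z = step(z)
        frames.append(decode(z))
    return frames

future = rollout(rng.standard_normal(FRAME_SHAPE), horizon=5)
print(len(future), future[0].shape)
```

Because the per-step work is a small latent-space update rather than a full image-autoregressive pass, this structure is what makes such models lighter than pixel-level recurrent predictors.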
Related papers
- GaussianPrediction: Dynamic 3D Gaussian Prediction for Motion Extrapolation and Free View Synthesis [71.24791230358065]
We introduce a novel framework that empowers 3D Gaussian representations with dynamic scene modeling and future scenario synthesis.
GaussianPrediction can forecast future states from any viewpoint, using video observations of dynamic scenes.
Our framework shows outstanding performance on both synthetic and real-world datasets, demonstrating its efficacy in predicting and rendering future environments.
arXiv Detail & Related papers (2024-05-30T06:47:55Z) - State-space Decomposition Model for Video Prediction Considering Long-term Motion Trend [3.910356300831074]
We propose a state-space decomposition video prediction model that decomposes the overall video frame generation into deterministic appearance prediction and motion prediction.
We infer the long-term motion trend from conditional frames to guide the generation of future frames that exhibit high consistency with the conditional frames.
arXiv Detail & Related papers (2024-04-17T17:19:48Z) - Predicting Long-horizon Futures by Conditioning on Geometry and Time [49.86180975196375]
We explore the task of generating future sensor observations conditioned on the past.
We leverage the large-scale pretraining of image diffusion models which can handle multi-modality.
We create a benchmark for video prediction on a diverse set of videos spanning indoor and outdoor scenes.
arXiv Detail & Related papers (2024-04-17T16:56:31Z) - STDiff: Spatio-temporal Diffusion for Continuous Stochastic Video Prediction [20.701792842768747]
We propose a novel video prediction model, which has infinite-dimensional latent variables over the temporal domain.
Our model achieves temporally continuous prediction, i.e., it can predict at an arbitrarily high frame rate in an unsupervised way.
arXiv Detail & Related papers (2023-12-11T16:12:43Z) - HARP: Autoregressive Latent Video Prediction with High-Fidelity Image Generator [90.74663948713615]
We train an autoregressive latent video prediction model capable of predicting high-fidelity future frames.
We produce high-resolution (256x256) videos with minimal modification to existing models.
arXiv Detail & Related papers (2022-09-15T08:41:57Z) - Conditioned Human Trajectory Prediction using Iterative Attention Blocks [70.36888514074022]
We present a simple yet effective trajectory prediction model aimed at predicting pedestrian positions in urban-like environments.
Our model is a neural-based architecture that can run several layers of attention blocks and transformers in an iterative sequential fashion.
We show that without explicit introduction of social masks, dynamical models, social pooling layers, or complicated graph-like structures, it is possible to produce results on par with SoTA models.
arXiv Detail & Related papers (2022-06-29T07:49:48Z) - FitVid: Overfitting in Pixel-Level Video Prediction [117.59339756506142]
We introduce a new architecture, named FitVid, which is capable of severe overfitting on the common benchmarks.
FitVid outperforms the current state-of-the-art models across four different video prediction benchmarks on four different metrics.
arXiv Detail & Related papers (2021-06-24T17:20:21Z) - Future Frame Prediction for Robot-assisted Surgery [57.18185972461453]
We propose a ternary prior guided variational autoencoder (TPG-VAE) model for future frame prediction in robotic surgical video sequences.
Besides content distribution, our model learns motion distribution, which is novel to handle the small movements of surgical tools.
arXiv Detail & Related papers (2021-03-18T15:12:06Z) - Future Frame Prediction of a Video Sequence [5.660207256468971]
The ability to predict, anticipate and reason about future events is the essence of intelligence.
arXiv Detail & Related papers (2020-08-31T15:31:02Z) - Stochastic Latent Residual Video Prediction [0.0]
This paper introduces a novel temporal model whose dynamics are governed in a latent space by a residual update rule.
It naturally models video dynamics as it allows our simpler, more interpretable, latent model to outperform prior state-of-the-art methods on challenging datasets.
arXiv Detail & Related papers (2020-02-21T10:44:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.