Comparison of metaheuristics for the firebreak placement problem: a
simulation-based optimization approach
- URL: http://arxiv.org/abs/2311.17393v1
- Date: Wed, 29 Nov 2023 06:45:07 GMT
- Title: Comparison of metaheuristics for the firebreak placement problem: a
simulation-based optimization approach
- Authors: David Palacios-Meneses, Jaime Carrasco, Sebastián Dávila,
Maximiliano Martínez, Rodrigo Mahaluf, and Andrés Weintraub
- Abstract summary: The problem of firebreak placement is crucial for fire prevention.
It is therefore necessary to consider the stochastic nature of fires, which are highly unpredictable from ignition to extinction.
We propose a solution approach for the problem from the perspective of simulation-based optimization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The problem of firebreak placement is crucial for fire prevention, and
the effectiveness of firebreaks at landscape scale will depend on their ability to
impede the progress of future wildfires. To provide an adequate response, it is
therefore necessary to consider the stochastic nature of fires, which are highly
unpredictable from ignition to extinction. The placement of firebreaks can thus be
considered a stochastic optimization problem in which: (1) the objective is to
minimize the expected number of burnt cells in the landscape; (2) the decision
variables are the locations of the firebreaks; and (3) the random variable is the
spatial propagation/behavior of fires. In this paper, we propose a solution approach
for the problem from the perspective of simulation-based optimization (SbO), where
the objective function is not available in closed form (a black-box function) but
can be computed (and/or approximated) by wildfire simulations. For this purpose, a
Genetic Algorithm and GRASP are implemented. The final implementation yielded
favorable results for the Genetic Algorithm, which demonstrated strong performance
in scenarios with medium to high operational capacity as well as medium levels of
stochasticity.
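
To make the SbO formulation concrete, the sketch below illustrates the two ingredients described in the abstract: a black-box objective estimated by averaging Monte Carlo fire simulations over a gridded landscape, and a basic Genetic Algorithm that searches over firebreak placements under a treatment budget. This is a minimal toy illustration, not the authors' implementation: the grid size, spread probability, budget, and GA parameters are hypothetical placeholders, and the simple cell-to-cell spread model merely stands in for a full wildfire simulator.

    # Minimal sketch (not the paper's implementation): a simulation-based
    # objective for firebreak placement plus a basic Genetic Algorithm.
    # Grid size, spread probability, budget, and GA settings are assumptions.
    import random

    N = 20          # landscape modeled as an N x N grid of cells (assumption)
    BUDGET = 30     # maximum number of cells treated as firebreaks (assumption)
    N_SIMS = 25     # Monte Carlo fire replicates per objective evaluation

    def simulate_fire(firebreaks, spread_prob=0.35, rng=random):
        """One stochastic fire: toy cell-to-cell spread from a random ignition.
        Stands in for a single run of a black-box wildfire simulator."""
        burnt, frontier = set(), []
        ignition = (rng.randrange(N), rng.randrange(N))
        if ignition not in firebreaks:
            burnt.add(ignition)
            frontier.append(ignition)
        while frontier:
            x, y = frontier.pop()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nx < N and 0 <= ny < N and (nx, ny) not in burnt
                        and (nx, ny) not in firebreaks and rng.random() < spread_prob):
                    burnt.add((nx, ny))
                    frontier.append((nx, ny))
        return len(burnt)

    def expected_burnt(firebreaks):
        """SbO objective: Monte Carlo estimate of the expected number of burnt cells."""
        return sum(simulate_fire(firebreaks) for _ in range(N_SIMS)) / N_SIMS

    def random_solution():
        cells = [(i, j) for i in range(N) for j in range(N)]
        return set(random.sample(cells, BUDGET))

    def crossover(a, b):
        """Uniform crossover on the union of treated cells, repaired to the budget."""
        child = {c for c in a | b if random.random() < 0.5}
        pool = list((a | b) - child)
        while len(child) < BUDGET:
            child.add(pool.pop(random.randrange(len(pool))))
        if len(child) > BUDGET:
            child = set(random.sample(sorted(child), BUDGET))
        return child

    def mutate(sol, rate=0.1):
        """Swap a small fraction of treated cells with random untreated cells."""
        sol = set(sol)
        for cell in random.sample(sorted(sol), max(1, int(rate * BUDGET))):
            sol.remove(cell)
            while True:
                new = (random.randrange(N), random.randrange(N))
                if new not in sol:
                    sol.add(new)
                    break
        return sol

    def genetic_algorithm(pop_size=20, generations=30):
        population = [random_solution() for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=expected_burnt)   # fewer burnt cells is better
            parents = ranked[: pop_size // 2]                 # truncation selection
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return min(population, key=expected_burnt)

    if __name__ == "__main__":
        best = genetic_algorithm()
        print("estimated expected burnt cells:", expected_burnt(best))

A GRASP counterpart would keep the same Monte Carlo objective but replace the evolutionary loop with repeated greedy-randomized constructions of firebreak sets followed by local search; in either case the metaheuristic only queries the simulator, which is what makes this a simulation-based (black-box) optimization problem.
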
Related papers
- Modelling wildland fire burn severity in California using a spatial
Super Learner approach [0.04188114563181614]
Given the increasing prevalence of wildland fires in the Western US, there is a critical need to develop tools to understand and accurately predict burn severity.
We develop a machine learning model to predict post-fire burn severity using pre-fire remotely sensed data.
When implemented, this model has the potential to reduce the loss of human life, property, resources, and ecosystems in California.
arXiv Detail & Related papers (2023-11-25T22:09:14Z)
- Prescribed Fire Modeling using Knowledge-Guided Machine Learning for Land Management [2.158876211806538]
This paper introduces a novel machine learning (ML) framework that enables rapid emulation of prescribed fires.
By incorporating domain knowledge, the proposed method helps reduce physical inconsistencies in fuel density estimates in data-scarce scenarios.
We also overcome the problem of biased estimation of fire spread metrics by incorporating a hierarchical modeling structure.
arXiv Detail & Related papers (2023-10-02T19:38:04Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- A Neural Emulator for Uncertainty Estimation of Fire Propagation [12.067753469557598]
Wildfire is a highly stochastic process where small changes in environmental conditions (such as wind speed and direction) can lead to large changes in observed behaviour.
The traditional approach to quantifying uncertainty in fire-front progression is to generate probability maps via ensembles of simulations.
We propose a new approach to directly estimate the likelihood of fire propagation given uncertainty in input parameters.
arXiv Detail & Related papers (2023-05-10T13:42:52Z)
- Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features [65.64276393443346]
The Frank-Wolfe (FW) method is a popular approach for solving optimization problems with structured constraints.
We present two new variants of the algorithm for finite-sum minimization.
arXiv Detail & Related papers (2023-04-23T20:05:09Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Generalizing Bayesian Optimization with Decision-theoretic Entropies [102.82152945324381]
We consider a generalization of Shannon entropy from work in statistical decision theory.
We first show that special cases of this entropy lead to popular acquisition functions used in BO procedures.
We then show how alternative choices for the loss yield a flexible family of acquisition functions.
arXiv Detail & Related papers (2022-10-04T04:43:58Z)
- Wildfire risk forecast: An optimizable fire danger index [0.0]
Wildfire events have caused severe losses in many places around the world and are expected to increase with climate change.
Fire risk indices use weather forcing to make advanced predictions of the risk of fire.
Predictions of fire risk indices can be used to allocate resources in places with high risk.
We propose a novel implementation of one index (NFDRS IC) as a differentiable function in which one can optimize its internal parameters via gradient descent.
arXiv Detail & Related papers (2022-03-28T14:08:49Z)
- An Efficient Algorithm for Deep Stochastic Contextual Bandits [10.298368632706817]
In contextual bandit problems, an agent selects an action based on the observed context to maximize the reward over iterations.
Recently there have been a few studies that use a deep neural network (DNN) to predict the expected reward of an action, trained by a gradient-based method.
arXiv Detail & Related papers (2021-04-12T16:34:43Z)
- Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
arXiv Detail & Related papers (2021-02-27T19:28:39Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work is on zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z)