Differentiable Agent-Based Simulation for Gradient-Guided
Simulation-Based Optimization
- URL: http://arxiv.org/abs/2103.12476v1
- Date: Tue, 23 Mar 2021 11:58:21 GMT
- Title: Differentiable Agent-Based Simulation for Gradient-Guided
Simulation-Based Optimization
- Authors: Philipp Andelfinger
- Abstract summary: Gradient estimation methods can be used to steer the optimization towards a local optimum. In traffic signal timing optimization problems with a high input dimension, gradient-based methods exhibit substantially superior performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulation-based optimization using agent-based models is typically carried
out under the assumption that the gradient describing the sensitivity of the
simulation output to the input cannot be evaluated directly. To still apply
gradient-based optimization methods, which efficiently steer the optimization
towards a local optimum, gradient estimation methods can be employed. However,
many simulation runs are needed to obtain accurate estimates if the input
dimension is large. Automatic differentiation (AD) is a family of techniques to
compute gradients of general programs directly. Here, we explore the use of AD
in the context of time-driven agent-based simulations. By substituting common
discrete model elements such as conditional branching with smooth
approximations, we obtain gradient information across discontinuities in the
model logic. Using microscopic traffic models and an epidemics
model, we study the fidelity and overhead of the differentiable models, as well
as the convergence speed and solution quality achieved by gradient-based
optimization compared to gradient-free methods. In traffic signal timing
optimization problems with high input dimension, the gradient-based methods
exhibit substantially superior performance. Finally, we demonstrate that the
approach enables gradient-based training of neural network-controlled
simulation entities embedded in the model logic.
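To make the branch-smoothing idea concrete, here is a minimal sketch (not the paper's code) in JAX: a hard conditional in a toy acceleration rule is replaced by a sigmoid blend, so automatic differentiation yields a nonzero gradient with respect to the branching threshold. The rule, the constants, and the sharpness parameter k are illustrative assumptions.

```python
# A minimal sketch of the branch-smoothing idea, assuming a toy
# acceleration rule; the constants and the sharpness parameter k are
# illustrative, not taken from the paper.
import jax
import jax.numpy as jnp

def hard_rule(gap, threshold):
    # Discrete model logic: brake if the gap falls below a threshold.
    # The output is piecewise constant in `threshold`, so AD yields a
    # zero gradient with respect to it almost everywhere.
    return jnp.where(gap < threshold, -4.0, 1.0)

def smooth_rule(gap, threshold, k=10.0):
    # Smooth surrogate: blend the two branch outcomes with a sigmoid.
    # Larger k means higher fidelity to the hard rule but steeper slopes.
    w = jax.nn.sigmoid(k * (threshold - gap))
    return w * (-4.0) + (1.0 - w) * 1.0

grad_wrt_threshold = jax.grad(smooth_rule, argnums=1)
print(grad_wrt_threshold(2.0, 2.5))  # nonzero, unlike the hard rule
```

The sharpness parameter trades off fidelity against smoothness: as k grows, the surrogate approaches the original discrete rule while its gradients become increasingly steep near the threshold.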
Related papers
- Automatic Gradient Estimation for Calibrating Crowd Models with Discrete Decision Making [0.0]
Gradients governing the choice among candidate solutions are calculated from sampled simulation trajectories.
We consider the calibration of force-based crowd evacuation models based on the popular Social Force model.
arXiv Detail & Related papers (2024-04-06T16:48:12Z)
- Model-Based Reparameterization Policy Gradient Methods: Theory and Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
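As a rough illustration of the variance-control mechanism mentioned above, the sketch below caps the spectral norm of a weight matrix; this shows the general technique only, and the paper's exact use inside model-based RP PGMs is not reproduced here.

```python
# A rough sketch of spectral normalization in general, assuming it is
# applied directly to a weight matrix; the paper's exact use inside
# model-based RP PGMs is not reproduced here.
import numpy as np

def spectral_normalize(W, coeff=1.0):
    # Rescale W so its largest singular value is at most `coeff`,
    # bounding how much the layer can stretch its inputs.
    sigma = np.linalg.norm(W, 2)  # spectral norm of a 2-D matrix
    return W if sigma <= coeff else W * (coeff / sigma)

W = np.random.default_rng(0).standard_normal((4, 4))
print(np.linalg.norm(spectral_normalize(W), 2))  # <= 1.0
```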
arXiv Detail & Related papers (2023-10-30T18:43:21Z)
- Smoothing Methods for Automatic Differentiation Across Conditional Branches [0.0]
Smooth interpretation (SI) approximates the convolution of a program's output with a Gaussian kernel, thus smoothing its output in a principled manner.
We combine SI with automatic differentiation (AD) to efficiently compute gradients of smoothed programs.
We propose a novel Monte Carlo estimator that avoids SI's underlying assumptions by estimating the smoothed programs' gradients through a combination of AD and sampling.
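A simplified way to picture the AD-plus-sampling combination (not the paper's exact estimator): draw Gaussian perturbations of the input and average the AD gradients of the program at the perturbed points. The toy program f, sigma, and the sample count are illustrative; this naive form assumes the program is differentiable almost everywhere.

```python
# A simplified sketch, assuming Gaussian smoothing with a fixed sigma
# and a toy branching program f; the paper's estimator is constructed
# to handle discontinuities, which this naive form does not guarantee.
import jax
import jax.numpy as jnp

def f(x):
    # Toy program with a conditional branch (a kink at x = 0).
    return jnp.where(x > 0.0, x ** 2, -x)

def smoothed_grad(x, sigma=0.1, n_samples=1024, key=jax.random.PRNGKey(0)):
    # Estimate d/dx E[f(x + eps)], eps ~ N(0, sigma^2), by sampling
    # perturbations and averaging AD gradients at the perturbed points.
    eps = sigma * jax.random.normal(key, (n_samples,))
    grads = jax.vmap(jax.grad(f))(x + eps)
    return grads.mean()

print(smoothed_grad(jnp.float32(0.05)))
```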
arXiv Detail & Related papers (2023-10-05T15:08:37Z)
- Neural Gradient Learning and Optimization for Oriented Point Normal Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability in local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z)
- Surrogate Neural Networks for Efficient Simulation-based Trajectory Planning Optimization [28.292234483886947]
This paper presents a novel methodology that uses surrogate models in the form of neural networks to reduce the computation time of simulation-based optimization of a reference trajectory.
We find a reference trajectory that performs 74% better than the nominal one, and the numerical results show a substantial reduction in the computation time for designing future trajectories.
arXiv Detail & Related papers (2023-03-30T15:44:30Z)
- Efficient Differentiable Simulation of Articulated Bodies [89.64118042429287]
We present a method for efficient differentiable simulation of articulated bodies.
This enables integration of articulated body dynamics into deep learning frameworks.
We show that reinforcement learning with articulated systems can be accelerated using gradients provided by our method.
arXiv Detail & Related papers (2021-09-16T04:48:13Z)
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work is on zeroth-order (ZO) optimization, which does not require first-order information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
- Channel-Directed Gradients for Optimization of Convolutional Neural Networks [50.34913837546743]
We introduce optimization methods for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error.
We show that defining the gradients along the output channel direction leads to a performance boost, while other directions can be detrimental.
arXiv Detail & Related papers (2020-08-25T00:44:09Z)
- A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning [95.85269649177336]
ZO optimization iteratively performs three major steps: gradient estimation, descent direction, and solution update.
We demonstrate promising applications of ZO optimization, such as evaluating and generating explanations from black-box deep learning models, and efficient online sensor management.
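The three steps named above can be sketched in a few lines; this is a generic two-point random-direction estimator with a plain descent update, not the primer's specific algorithms. The step size, smoothing radius mu, and direction count are illustrative.

```python
# A generic sketch of the three ZO steps, assuming a two-point
# random-direction gradient estimator and a plain descent update; the
# step size, smoothing radius mu, and direction count are illustrative.
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_dirs=20, rng=np.random.default_rng(0)):
    # Step 1: gradient estimation from function queries only.
    d = x.size
    g = np.zeros(d)
    for _ in range(n_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g * d / n_dirs

def zo_minimize(f, x0, lr=0.1, steps=200):
    x = x0.astype(float)
    for _ in range(steps):
        g = zo_gradient(f, x)  # Step 1: gradient estimation
        x = x - lr * g         # Steps 2 and 3: direction and update
    return x

print(zo_minimize(lambda x: np.sum((x - 1.0) ** 2), np.zeros(5)))
# approaches the all-ones minimizer using function evaluations only
```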
arXiv Detail & Related papers (2020-06-11T06:50:35Z)
- Black-Box Optimization with Local Generative Surrogates [6.04055755718349]
In fields such as physics and engineering, many processes are modeled with non-differentiable simulators with intractable likelihoods.
We introduce the use of deep generative models to approximate the simulator in local neighborhoods of the parameter space.
In cases where the dependence of the simulator on the parameter space is constrained to a low dimensional submanifold, we observe that our method attains minima faster than baseline methods.
arXiv Detail & Related papers (2020-02-11T19:02:57Z)