Surrogate Neural Networks for Efficient Simulation-based Trajectory
Planning Optimization
- URL: http://arxiv.org/abs/2303.17468v1
- Date: Thu, 30 Mar 2023 15:44:30 GMT
- Title: Surrogate Neural Networks for Efficient Simulation-based Trajectory
Planning Optimization
- Authors: Evelyn Ruff, Rebecca Russell, Matthew Stoeckle, Piero Miotto, and
Jonathan P. How
- Abstract summary: This paper presents a novel methodology that uses surrogate models in the form of neural networks to reduce the computation time of simulation-based optimization of a reference trajectory.
We find a 74% better-performing reference trajectory compared to nominal, and the numerical results clearly show a substantial reduction in computation time for designing future trajectories.
- Score: 28.292234483886947
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel methodology that uses surrogate models in the
form of neural networks to reduce the computation time of simulation-based
optimization of a reference trajectory. Simulation-based optimization is
necessary when there is no analytical form of the system accessible, only
input-output data that can be used to create a surrogate model of the
simulation. Like many high-fidelity simulations, this trajectory planning
simulation is very nonlinear and computationally expensive, making it
challenging to optimize iteratively. Through gradient descent optimization, our
approach finds the optimal reference trajectory for landing a hypersonic
vehicle. In contrast to the large datasets used to create the surrogate models
in prior literature, our methodology is specifically designed to minimize the
number of simulation executions required by the gradient descent optimizer. We
demonstrated this methodology to be more efficient than the standard practice
of hand-tuning the inputs through trial-and-error or randomly sampling the
input parameter space. Due to the intelligently selected input values to the
simulation, our approach yields better simulation outcomes that are achieved
more rapidly and to a higher degree of accuracy. Optimizing the hypersonic
vehicle's reference trajectory is very challenging due to the simulation's
extreme nonlinearity, but even so, this novel approach found a 74%
better-performing reference trajectory compared to nominal, and the numerical
results clearly show a substantial reduction in computation time for designing
future trajectories.
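To make the workflow concrete, the sketch below illustrates the general pattern the abstract describes: fit a neural-network surrogate to simulation input-output pairs, then run gradient descent on the simulation inputs through the frozen surrogate. This is a minimal illustration only, written in PyTorch as an assumed framework; the simulator interface and all names (simulate, SurrogateNet, fit_surrogate, optimize_inputs) are hypothetical and not the authors' code.

```python
# Minimal sketch (not the authors' released code): fit a neural-network
# surrogate on simulation input-output pairs, then run gradient descent on the
# *inputs* through the frozen surrogate to find a better reference trajectory.
import torch
import torch.nn as nn


def simulate(x: torch.Tensor) -> torch.Tensor:
    """Placeholder for the expensive, non-differentiable trajectory simulation."""
    raise NotImplementedError


class SurrogateNet(nn.Module):
    """Small MLP mapping trajectory-design inputs to simulation outputs."""

    def __init__(self, n_in: int, n_out: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 128), nn.Tanh(),
            nn.Linear(128, 128), nn.Tanh(),
            nn.Linear(128, n_out),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def fit_surrogate(xs: torch.Tensor, ys: torch.Tensor, epochs: int = 500) -> SurrogateNet:
    """Supervised fit of the surrogate on previously collected simulation runs."""
    model = SurrogateNet(xs.shape[1], ys.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(xs), ys)
        loss.backward()
        opt.step()
    return model


def optimize_inputs(model: SurrogateNet, x0: torch.Tensor, cost_fn,
                    steps: int = 200, lr: float = 1e-2) -> torch.Tensor:
    """Gradient descent on the simulation inputs through the frozen surrogate."""
    x = x0.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        cost = cost_fn(model(x))  # differentiable w.r.t. x via the surrogate
        cost.backward()
        opt.step()
    return x.detach()
```

In practice, the candidate returned by optimize_inputs would be verified with a single call to the real simulation, and the new input-output pair could be appended to the training set before refitting; that verify-and-refit loop is how the number of expensive simulation executions stays small.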
Related papers
- Accelerate Neural Subspace-Based Reduced-Order Solver of Deformable Simulation by Lipschitz Optimization [9.364019847856714]
Reduced-order simulation is an emerging method for accelerating physical simulations with high DOFs.
We propose a method for finding optimized subspace mappings, enabling further acceleration of neural reduced-order simulations.
We demonstrate the effectiveness of our approach through general cases in both quasi-static and dynamics simulations.
arXiv Detail & Related papers (2024-09-05T12:56:03Z)
- Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
The Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the amount of additional data required.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z)
- Neural Posterior Estimation with Differentiable Simulators [58.720142291102135]
We present a new method to perform Neural Posterior Estimation (NPE) with a differentiable simulator.
We demonstrate how gradient information helps constrain the shape of the posterior and improves sample-efficiency.
arXiv Detail & Related papers (2022-07-12T16:08:04Z)
- Data-driven evolutionary algorithm for oil reservoir well-placement and control optimization [3.012067935276772]
A generalized data-driven evolutionary algorithm (GDDE) is proposed to reduce the number of simulation runs on well-placement and control optimization problems.
A probabilistic neural network (PNN) is adopted as the classifier to select informative and promising candidates.
arXiv Detail & Related papers (2022-06-07T09:07:49Z)
- Data-Driven Offline Optimization For Architecting Hardware Accelerators [89.68870139177785]
We develop a data-driven offline optimization method for designing hardware accelerators, dubbed PRIME.
PRIME improves performance upon state-of-the-art simulation-driven methods by about 1.54x and 1.20x, while considerably reducing the required total simulation time by 93% and 99%, respectively.
In addition, PRIME also architects effective accelerators for unseen applications in a zero-shot setting, outperforming simulation-based methods by 1.26x.
arXiv Detail & Related papers (2021-10-20T17:06:09Z)
- Differentiable Agent-Based Simulation for Gradient-Guided Simulation-Based Optimization [0.0]
Gradient estimation methods can be used to steer the optimization towards a local optimum.
In traffic signal timing optimization problems with high input dimension, the gradient-based methods exhibit substantially superior performance.
arXiv Detail & Related papers (2021-03-23T11:58:21Z)
- Application of an automated machine learning-genetic algorithm (AutoML-GA) coupled with computational fluid dynamics simulations for rapid engine design optimization [0.0]
The present work describes and validates an automated active learning approach, AutoML-GA, for surrogate-based optimization of internal combustion engines.
A genetic algorithm is employed to locate the design optimum on the machine learning surrogate surface.
It is demonstrated that AutoML-GA leads to a better optimum with a lower number of CFD simulations.
arXiv Detail & Related papers (2021-01-07T17:50:52Z)
- An AI-Assisted Design Method for Topology Optimization Without Pre-Optimized Training Data [68.8204255655161]
An AI-assisted design method based on topology optimization is presented, which is able to obtain optimized designs in a direct way.
Designs are provided by an artificial neural network, the predictor, on the basis of boundary conditions and degree of filling as input data.
arXiv Detail & Related papers (2020-12-11T14:33:27Z)
- AutoSimulate: (Quickly) Learning Synthetic Data Generation [70.82315853981838]
We propose an efficient alternative for optimal synthetic data generation based on a novel differentiable approximation of the objective.
We demonstrate that the proposed method finds the optimal data distribution faster (up to $50\times$), with significantly reduced training data generation (up to $30\times$) and better accuracy ($+8.7\%$) on real-world test datasets than previous methods.
arXiv Detail & Related papers (2020-08-16T11:36:11Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in a complex model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- Black-Box Optimization with Local Generative Surrogates [6.04055755718349]
In fields such as physics and engineering, many processes are modeled with non-differentiable simulators with intractable likelihoods.
We introduce the use of deep generative models to approximate the simulator in local neighborhoods of the parameter space.
In cases where the dependence of the simulator on the parameter space is constrained to a low dimensional submanifold, we observe that our method attains minima faster than baseline methods (a generic sketch of this kind of surrogate-guided loop appears after this list).
arXiv Detail & Related papers (2020-02-11T19:02:57Z)
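Several of the entries above (for example, the GDDE, AutoML-GA, and local generative surrogate papers) share a common loop: sample the expensive simulator, fit a cheap surrogate, and let the surrogate guide the next candidate. The sketch below is a generic, hypothetical illustration of that loop using a locally refit neural surrogate; it is not taken from any of the listed papers, and simulator_loss and local_surrogate_step are made-up names.

```python
# Generic, hypothetical local-surrogate loop (not from any listed paper):
# sample the expensive simulator near the current parameters, fit a cheap
# neural surrogate of the objective, and step along the surrogate's gradient.
import numpy as np
import torch
import torch.nn as nn


def simulator_loss(theta: np.ndarray) -> float:
    """Placeholder for a non-differentiable, expensive simulator objective."""
    raise NotImplementedError


def local_surrogate_step(theta: np.ndarray, n_samples: int = 32,
                         radius: float = 0.1, lr: float = 0.05) -> np.ndarray:
    # 1. Sample the simulator in a local neighborhood of theta.
    xs = theta + radius * np.random.randn(n_samples, theta.size)
    ys = np.array([simulator_loss(x) for x in xs])

    # 2. Fit a small neural surrogate of the objective on the local samples.
    xs_t = torch.tensor(xs, dtype=torch.float32)
    ys_t = torch.tensor(ys, dtype=torch.float32).unsqueeze(1)
    model = nn.Sequential(nn.Linear(theta.size, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        nn.functional.mse_loss(model(xs_t), ys_t).backward()
        opt.step()

    # 3. Take a descent step along the surrogate's gradient at theta.
    t = torch.tensor(theta, dtype=torch.float32, requires_grad=True)
    model(t.unsqueeze(0)).sum().backward()
    return theta - lr * t.grad.numpy()
```

A full optimizer would repeat local_surrogate_step until the simulator objective stops improving, optionally shrinking the sampling radius as the iterates converge.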
This list is automatically generated from the titles and abstracts of the papers on this site.