DEBOSH: Deep Bayesian Shape Optimization
- URL: http://arxiv.org/abs/2109.13337v2
- Date: Mon, 2 Oct 2023 17:04:17 GMT
- Title: DEBOSH: Deep Bayesian Shape Optimization
- Authors: Nikita Durasov, Artem Lukoyanov, Jonathan Donier, Pascal Fua
- Abstract summary: We propose a novel uncertainty-based method tailored to shape optimization.
It enables effective BO and increases the quality of the resulting shapes beyond that of state-of-the-art approaches.
- Score: 48.80431740983095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) can predict the performance of an industrial
design quickly and accurately and be used to optimize its shape effectively.
However, to fully explore the shape space, one must often consider shapes
deviating significantly from the training set. For these, GNN predictions
become unreliable, something that is often ignored. For optimization techniques
relying on Gaussian Processes, Bayesian Optimization (BO) addresses this issue
by exploiting the processes' ability to assess their own accuracy. Unfortunately, this
is harder to do when using neural networks because standard approaches to
estimating their uncertainty can entail high computational loads and reduced
model accuracy. Hence, we propose a novel uncertainty-based method tailored to
shape optimization. It enables effective BO and increases the quality of the
resulting shapes beyond that of state-of-the-art approaches.
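Conceptually, the approach pairs a learned surrogate with an uncertainty estimate to drive Bayesian optimization over candidate shapes. Below is a minimal sketch of that loop, using a generic regressor ensemble in place of the paper's GNN-based uncertainty method; the toy shape parameterization, performance metric, and acquisition constant are all illustrative assumptions:

```python
# Hedged sketch: ensemble-based Bayesian shape optimization.
# The RandomForest ensemble is a hypothetical stand-in for the paper's GNN
# surrogate; disagreement across members approximates predictive uncertainty.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy "shape" parameterization: 5 control-point offsets per design.
X_train = rng.normal(size=(50, 5))
y_train = -np.sum(X_train**2, axis=1)          # toy performance metric (higher is better)

ensemble = [RandomForestRegressor(n_estimators=30, random_state=s).fit(X_train, y_train)
            for s in range(5)]

def ucb(candidates, kappa=2.0):
    """Upper-confidence-bound acquisition from ensemble mean and spread."""
    preds = np.stack([m.predict(candidates) for m in ensemble])   # (members, n)
    return preds.mean(axis=0) + kappa * preds.std(axis=0)

candidates = rng.normal(size=(500, 5))          # proposals, possibly far from training data
best = candidates[np.argmax(ucb(candidates))]   # shape to send to the expensive simulator next
print(best)
```

In a full loop, the selected shape would be evaluated by the expensive simulator and added to the training set before the next acquisition round.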
Related papers
- Understanding Optimization in Deep Learning with Central Flows [53.66160508990508]
We show that RMSProp's implicit behavior can be explicitly captured by a "central flow": a differential equation.
We show that these flows can empirically predict long-term optimization trajectories of generic neural networks.
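As a rough analogue (the paper's central flow for RMSProp is more involved and is not reproduced here), one can compare discrete gradient descent against Euler integration of the plain gradient-flow ODE, where the flow's trajectory tracks the optimizer's:

```python
# Hedged sketch: a continuous-time flow tracking a discrete optimizer.
# Gradient flow d(theta)/dt = -grad L(theta) on a toy quadratic loss;
# this is an illustrative analogue, not the paper's central flow.
import numpy as np

def grad(theta):                      # gradient of L(theta) = ||theta||^2
    return 2.0 * theta

theta_gd = np.array([1.0, -2.0])
theta_flow = theta_gd.copy()
lr, substeps = 0.1, 100

for _ in range(50):
    theta_gd = theta_gd - lr * grad(theta_gd)            # discrete optimizer step
    for _ in range(substeps):                            # Euler-integrate the flow
        theta_flow = theta_flow - (lr / substeps) * grad(theta_flow)

print(theta_gd, theta_flow)           # the flow closely tracks the discrete trajectory
```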
arXiv Detail & Related papers (2024-10-31T17:58:13Z)
- Simmering: Sufficient is better than optimal for training neural networks [0.0]
We introduce simmering, a physics-based method that trains neural networks to generate weights and biases that are merely "good enough".
We show that simmering corrects neural networks that are overfit by Adam, and show that simmering avoids overfitting if deployed from the outset.
Our results question optimization as a paradigm for neural network training, and leverage information-geometric arguments to point to the existence of classes of sufficient training algorithms.
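The summary does not spell out the dynamics, but one plausible reading of "physics-based, merely good enough" training is Langevin-style noisy descent with a sufficiency-based stopping rule. The sketch below is that assumption, not the paper's algorithm; the temperature, step size, and loss target are illustrative:

```python
# Hedged sketch: Langevin-style "good enough" training on a toy objective.
# Thermal noise keeps the weights sampling low-loss regions rather than
# driving the loss to its minimum; training stops at sufficiency.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10)

def loss_and_grad(w):
    return np.sum(w**2), 2.0 * w      # toy quadratic objective

lr, temperature, target = 0.05, 0.01, 0.5
for step in range(10_000):
    loss, g = loss_and_grad(w)
    if loss < target:                 # stop at "sufficient", not optimal
        break
    noise = rng.normal(size=w.shape) * np.sqrt(2.0 * lr * temperature)
    w = w - lr * g + noise            # Langevin update: gradient plus thermal noise
print(step, loss)
```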
arXiv Detail & Related papers (2024-10-25T18:02:08Z)
- Jacobian-Enhanced Neural Networks [0.0]
Jacobian-Enhanced Neural Networks (JENN) are densely connected multi-layer perceptrons.
JENN's main benefit is better accuracy with fewer training points compared to standard neural networks.
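The core mechanism is a loss that penalizes derivative error alongside value error. A minimal sketch on a 1-D toy problem follows; the equal weighting of the Jacobian term and the architecture are illustrative assumptions:

```python
# Hedged sketch of Jacobian-enhanced training: the loss matches both the
# function values and the network's input derivatives to known targets
# (e.g., derivatives supplied by an adjoint solver).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.linspace(-2, 2, 20).unsqueeze(1).requires_grad_(True)
y_true = torch.sin(x).detach()
dy_true = torch.cos(x).detach()       # known derivatives at the training points

for _ in range(500):
    y = net(x)
    dy = torch.autograd.grad(y.sum(), x, create_graph=True)[0]   # network Jacobian dy/dx
    loss = torch.mean((y - y_true)**2) + torch.mean((dy - dy_true)**2)
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```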
arXiv Detail & Related papers (2024-06-13T14:04:34Z)
- Achieving Constraints in Neural Networks: A Stochastic Augmented Lagrangian Approach [49.1574468325115]
Regularizing Deep Neural Networks (DNNs) is essential for improving generalizability and preventing overfitting.
We propose a novel approach to DNN regularization by framing the training process as a constrained optimization problem.
We employ the Stochastic Augmented Lagrangian (SAL) method to achieve a more flexible and efficient regularization mechanism.
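A minimal sketch of an augmented-Lagrangian training loop of this kind follows; the L2-norm constraint, penalty coefficient, and step sizes are illustrative, and the paper's stochastic variant is not reproduced here:

```python
# Hedged sketch of augmented-Lagrangian training: an inequality constraint
# g(w) <= 0 (here an L2-norm budget) is handled by alternating primal gradient
# steps on the augmented loss with dual multiplier updates.
import torch

torch.manual_seed(0)
w = torch.randn(20, requires_grad=True)
target = torch.randn(20)
lam, rho, lr = 0.0, 1.0, 0.05

def g(w):                                   # constraint: ||w||^2 <= 4
    return w.pow(2).sum() - 4.0

for outer in range(20):                     # dual (multiplier) updates
    for _ in range(100):                    # primal minimization at fixed lam
        viol = torch.clamp(g(w), min=0.0)   # penalize violations only
        aug = (w - target).pow(2).sum() + lam * viol + 0.5 * rho * viol**2
        aug.backward()
        with torch.no_grad():
            w -= lr * w.grad
            w.grad.zero_()
    lam = max(0.0, lam + rho * float(torch.clamp(g(w), min=0.0)))  # multiplier ascent

print(float(g(w)), lam)                     # g(w) near 0: constraint active at the optimum
```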
arXiv Detail & Related papers (2023-10-25T13:55:35Z)
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
BNNs neglect the intrinsic bilinear relationship of real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
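The bilinear coupling in question arises because a BNN's effective weight is a scale factor times a sign pattern. The sketch below shows only that coupling, using the classic closed-form XNOR-Net scale; RBONN's recurrent update is not reproduced from this summary:

```python
# Hedged sketch: the bilinear structure of binarized weights. The effective
# weight is alpha * sign(w); the per-channel alpha below is the XNOR-Net
# closed form minimizing ||w - alpha * sign(w)||^2, used here only to
# illustrate the weight/scale coupling.
import torch

w = torch.randn(8, 16)                        # real-valued weights, 8 output channels
alpha = w.abs().mean(dim=1, keepdim=True)     # optimal per-channel scale
w_bin = alpha * torch.sign(w)                 # effective binary weight

print(torch.norm(w - w_bin) / torch.norm(w))  # relative binarization error
```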
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- EXACT: How to Train Your Accuracy [6.144680854063938]
We propose a new optimization framework by introducing stochasticity to a model's output and optimizing expected accuracy.
Experiments on linear models and deep image classification show that the proposed optimization method is a powerful alternative to widely used classification losses.
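With noise injected into the logits, accuracy becomes a smooth expectation that can be estimated by sampling. The margin-based Monte Carlo estimator below illustrates the idea; the sigmoid smoothing, noise scale, and temperature are assumptions, not the paper's exact gradient estimator:

```python
# Hedged sketch: Monte Carlo estimate of expected accuracy under noisy logits,
# smoothed so it can be maximized by gradient ascent.
import torch

torch.manual_seed(0)
logits = torch.randn(32, 10, requires_grad=True)   # model outputs for a batch
labels = torch.randint(0, 10, (32,))
sigma, samples = 0.5, 64

noisy = logits.unsqueeze(0) + sigma * torch.randn(samples, *logits.shape)
idx = labels.view(1, -1, 1).expand(samples, -1, 1)
correct = noisy.gather(2, idx).squeeze(2)           # noisy logit of the true class
others = noisy.scatter(2, idx, float("-inf"))       # mask out the true class
margin = correct - others.max(dim=2).values         # positive margin = correct prediction
expected_acc = torch.sigmoid(margin / 0.1).mean()   # smooth surrogate for P(correct)
(-expected_acc).backward()                          # ascend on expected accuracy
print(float(expected_acc))
```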
arXiv Detail & Related papers (2022-05-19T15:13:00Z)
- Bayesian Optimization Meets Laplace Approximation for Robotic Introspection [41.117361086267806]
We introduce a scalable Laplace Approximation (LA) technique to make Deep Neural Networks (DNNs) more introspective.
In particular, we propose a novel Bayesian Optimization (BO) algorithm to mitigate their tendency to under-fit the true weight posterior.
We show that the proposed framework can be scaled up to large datasets and architectures.
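A minimal sketch of a diagonal Laplace approximation follows, using the empirical Fisher as a curvature proxy; the prior precision and the tiny model are illustrative assumptions:

```python
# Hedged sketch of a diagonal Laplace approximation: the weight posterior is a
# Gaussian centered at the trained weights, with per-parameter variances from
# curvature (here squared gradients) plus a prior precision term.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(5, 1)
X, y = torch.randn(100, 5), torch.randn(100, 1)

# (Assume the model has already been trained to its MAP estimate.)
fisher = [torch.zeros_like(p) for p in model.parameters()]
for i in range(len(X)):
    model.zero_grad()
    loss = (model(X[i:i+1]) - y[i:i+1]).pow(2).mean()
    loss.backward()
    for f, p in zip(fisher, model.parameters()):
        f += p.grad**2                      # empirical Fisher as a curvature proxy

prior_precision = 1.0
posterior_var = [1.0 / (f + prior_precision) for f in fisher]
print([v.mean().item() for v in posterior_var])   # large variance = low confidence
```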
arXiv Detail & Related papers (2020-10-30T09:28:10Z)
- Enhanced data efficiency using deep neural networks and Gaussian processes for aerodynamic design optimization [0.0]
Adjoint-based optimization methods are attractive for aerodynamic shape design, but they can become prohibitively expensive when multiple optimization problems must be solved.
We propose a machine learning enabled, surrogate-based framework that replaces the expensive adjoint solver.
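A minimal sketch of such a surrogate-driven loop follows, with a Gaussian process standing in for the expensive solver; the toy drag function, design bounds, and acquisition rule are illustrative assumptions:

```python
# Hedged sketch: a GP surrogate replacing an expensive flow evaluation inside
# a design-refinement loop.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def expensive_drag(x):                        # placeholder for a CFD/adjoint evaluation
    return np.sum((x - 0.3)**2, axis=-1)

X = rng.uniform(0, 1, size=(20, 3))           # initial designs (3 shape parameters)
y = expensive_drag(X)

for _ in range(10):                           # surrogate-driven refinement loop
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3)).fit(X, y)
    cand = rng.uniform(0, 1, size=(2000, 3))
    mu, std = gp.predict(cand, return_std=True)
    x_next = cand[np.argmin(mu - std)]        # optimistic pick: low predicted drag, high uncertainty
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_drag(x_next))

print(X[np.argmin(y)], y.min())
```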
arXiv Detail & Related papers (2020-08-15T15:09:21Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of existing update schemes can be covered in a unified extrapolation framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
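The basic extrapolation (extragradient) step evaluates the gradient at a look-ahead point rather than the current iterate. A minimal sketch on a toy quadratic follows; the step sizes are illustrative, and the paper's unified scheme covers several such variants:

```python
# Hedged sketch of an extragradient update: extrapolate, then step using the
# gradient at the look-ahead point.
import numpy as np

def grad(theta):
    return 2.0 * theta                # gradient of a toy quadratic loss

theta, gamma = np.array([3.0, -1.0]), 0.1
for _ in range(100):
    lookahead = theta - gamma * grad(theta)       # extrapolation step
    theta = theta - gamma * grad(lookahead)       # update with look-ahead gradient
print(theta)
```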
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested on four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
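A minimal sketch of such a self-directed loop follows, with a stub standing in for the FEM solver and a small network surrogate; the objective, architecture, and candidate counts are illustrative assumptions:

```python
# Hedged sketch: a cheap network surrogate trained on expensive solver
# evaluations proposes new candidates, and the solver's answers flow back
# into the training set.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def fem_objective(x):                         # stand-in for a finite-element evaluation
    return np.sum((x - 0.5)**2, axis=-1)

X = rng.uniform(0, 1, size=(30, 4))           # initial designs
y = fem_objective(X)

for _ in range(15):
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                       random_state=0).fit(X, y)
    cand = rng.uniform(0, 1, size=(5000, 4))
    x_best = cand[np.argmin(net.predict(cand))]    # optimize over the cheap surrogate
    X = np.vstack([X, x_best])                     # self-directed: query FEM only there
    y = np.append(y, fem_objective(x_best))

print(X[np.argmin(y)], y.min())
```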
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences arising from its use.