VehicleSDF: A 3D generative model for constrained engineering design via surrogate modeling
- URL: http://arxiv.org/abs/2410.18986v1
- Date: Wed, 09 Oct 2024 16:59:24 GMT
- Title: VehicleSDF: A 3D generative model for constrained engineering design via surrogate modeling
- Authors: Hayata Morita, Kohei Shintani, Chenyang Yuan, Frank Permenter
- Abstract summary: This work uses 3D generative models to explore the design space in the context of vehicle development.
We generate diverse 3D models of cars that meet a given set of geometric specifications.
We also obtain quick estimates of performance parameters such as aerodynamic drag.
- Score: 3.746111274696241
- License:
- Abstract: A main challenge in mechanical design is to efficiently explore the design space while satisfying engineering constraints. This work explores the use of 3D generative models to explore the design space in the context of vehicle development, while estimating and enforcing engineering constraints. Specifically, we generate diverse 3D models of cars that meet a given set of geometric specifications, while also obtaining quick estimates of performance parameters such as aerodynamic drag. For this, we employ a data-driven approach (using the ShapeNet dataset) to train VehicleSDF, a DeepSDF-based model that represents potential designs in a latent space which can be decoded into a 3D model. We then train surrogate models to estimate engineering parameters from this latent space representation, enabling us to efficiently optimize latent vectors to match specifications. Our experiments show that we can generate diverse 3D models while matching the specified geometric parameters. Finally, we demonstrate that other performance parameters such as aerodynamic drag can be estimated in a differentiable pipeline.
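The abstract describes a pipeline where a DeepSDF-style latent code is decoded into geometry, surrogate models predict engineering parameters from the same code, and the code is optimized to match a target specification. The sketch below illustrates that latent-optimization loop in PyTorch; the network sizes, the four-parameter specification, and all module names are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 256  # assumed latent size; the paper's choice may differ

class SDFDecoder(nn.Module):
    """DeepSDF-style decoder: (latent code, 3D query point) -> signed distance."""
    def __init__(self, latent_dim=LATENT_DIM, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z, xyz):
        z_rep = z.expand(xyz.shape[0], -1)  # broadcast one code to all query points
        return self.net(torch.cat([z_rep, xyz], dim=-1))

class Surrogate(nn.Module):
    """Maps a latent design code to engineering parameters (e.g. length, width, height, drag)."""
    def __init__(self, latent_dim=LATENT_DIM, n_params=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, n_params))

    def forward(self, z):
        return self.net(z)

def optimize_latent(surrogate, target_spec, steps=500, lr=1e-2):
    """Gradient-descend a latent code so the surrogate's predictions match the target spec."""
    z = torch.zeros(1, LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(surrogate(z), target_spec)
        loss.backward()
        opt.step()
    return z.detach()

# Hypothetical usage: with trained networks, the optimized code would be decoded by
# SDFDecoder into a mesh (e.g. via marching cubes) that meets the requested spec.
surrogate = Surrogate()
target = torch.tensor([[4.5, 1.8, 1.4, 0.30]])  # hypothetical spec: length, width, height, Cd
z_star = optimize_latent(surrogate, target)
```

Because every stage is differentiable, a drag estimate can be included as one of the surrogate outputs and traded off against the geometric specification inside the same optimization objective, which is the point of the differentiable pipeline described in the abstract.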
Related papers
- Bayesian Mesh Optimization for Graph Neural Networks to Enhance Engineering Performance Prediction [1.6574413179773761]
In engineering design, surrogate models are widely employed to replace computationally expensive simulations.
We propose a Bayesian graph neural network (GNN) framework for a 3D deep-learning-based surrogate model.
Our framework determines the optimal size of mesh elements through Bayesian optimization, resulting in a high-accuracy surrogate model.
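A minimal sketch of the outer Bayesian-optimization loop over mesh element size, assuming scikit-optimize for the Gaussian-process search; the toy objective below merely stands in for "remesh the geometry, train the GNN surrogate, and return its validation error", which is not reproduced here, and the search bounds are illustrative.

```python
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

def surrogate_val_error(params):
    """Stand-in objective: in the paper's setting this would remesh the geometry at the
    given element size, train the GNN surrogate, and return its validation error."""
    element_size, = params
    # Toy proxy for illustration: error grows for very coarse and very fine meshes.
    return (np.log(element_size) - np.log(5.0)) ** 2 + 0.05 * np.random.rand()

result = gp_minimize(
    surrogate_val_error,
    dimensions=[Real(1.0, 20.0, name="element_size_mm")],  # illustrative search bounds
    n_calls=25,
    random_state=0,
)
print("selected element size [mm]:", result.x[0])
```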
arXiv Detail & Related papers (2024-06-04T06:27:48Z)
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to 3D domains, and seek a stronger ability of 3D shape generation by improving auto-regressive models at capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
- Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
Optimizing design variables directly against a learned model can produce adversarial examples that score well under the model but fail the true objective. We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid such adversarial examples.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
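A minimal sketch of design-by-energy-minimization, assuming a trained `energy_model` that assigns low energy to in-distribution designs; the placeholder network, design dimensionality, and objective below are illustrative and do not reproduce the paper's diffusion-based composition.

```python
import torch
import torch.nn as nn

# Placeholder for a trained energy network (low energy = plausible, in-distribution design).
energy_model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

def design_objective(x):
    """Illustrative objective to minimize, e.g. negative lift of a boundary-shape parameterization."""
    return -x[:, 0].mean()

x = torch.randn(1, 16, requires_grad=True)  # design variables (e.g. airfoil shape parameters)
opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    # Adding the learned energy keeps the design on the data manifold, rather than letting the
    # optimizer exploit the model and drift toward adversarial designs.
    loss = design_objective(x) + energy_model(x).mean()
    loss.backward()
    opt.step()
```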
arXiv Detail & Related papers (2024-01-24T01:33:39Z)
- Weighted Unsupervised Domain Adaptation Considering Geometry Features and Engineering Performance of 3D Design Data [2.306144660547256]
We propose a bi-weighted unsupervised domain adaptation approach that considers the geometry features and engineering performance of 3D design data.
The proposed model is tested on a wheel impact analysis problem to predict the magnitude of the maximum von Mises stress and the corresponding location of 3D road wheels.
arXiv Detail & Related papers (2023-09-08T00:26:44Z)
- Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions.
We have developed a model with an astounding 3.6 billion trainable parameters, establishing it as the largest 3D shape generation model to date, named Argus-3D.
arXiv Detail & Related papers (2023-06-20T13:01:19Z)
- AircraftVerse: A Large-Scale Multimodal Dataset of Aerial Vehicle Designs [15.169540193173923]
AircraftVerse contains 27,714 diverse air vehicle designs.
Each design comprises the following artifacts: a symbolic design tree describing topology, propulsion subsystem, battery subsystem, and design details.
We present baseline surrogate models that use different modalities of design representation to predict design performance metrics.
arXiv Detail & Related papers (2023-06-08T21:07:15Z)
- Surrogate Modeling of Car Drag Coefficient with Depth and Normal Renderings [4.868319717279586]
We propose a new two-dimensional (2D) representation of 3D shapes and verify its effectiveness in predicting 3D car drag.
We construct a diverse dataset of 9,070 high-quality 3D car meshes labeled by drag coefficients.
Our experiments demonstrate that our model can accurately and efficiently evaluate drag coefficients with an $R^2$ value above 0.84 for various car categories.
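A minimal sketch of this kind of rendering-based drag surrogate: depth and normal maps from a few fixed views are stacked as image channels and regressed to a drag coefficient by a small CNN. The number of views, image size, and network are assumptions; the paper's actual rendering setup and regressor are not reproduced here.

```python
import torch
import torch.nn as nn

N_VIEWS = 4             # e.g. front, rear, side, top renderings of the car mesh (assumed)
CHANNELS = N_VIEWS * 4  # per view: 1 depth channel + 3 normal-map channels

class DragRegressor(nn.Module):
    """CNN mapping stacked depth/normal renderings of a 3D car to a drag coefficient."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(CHANNELS, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, images):  # images: (batch, CHANNELS, H, W)
        return self.head(self.features(images).flatten(1))

# Usage with dummy renderings; real inputs would come from rendering the labeled car meshes.
model = DragRegressor()
cd_pred = model(torch.randn(2, CHANNELS, 128, 128))  # -> (2, 1) predicted drag coefficients
```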
arXiv Detail & Related papers (2023-05-26T09:33:12Z)
- Automatic Parameterization for Aerodynamic Shape Optimization via Deep Geometric Learning [60.69217130006758]
We propose two deep learning models that fully automate shape parameterization for aerodynamic shape optimization.
Both models parameterize shapes via deep geometric learning, embedding human prior knowledge into learned geometric patterns.
We perform shape optimization experiments on 2D airfoils and discuss the applicable scenarios for the two models.
arXiv Detail & Related papers (2023-05-03T13:45:40Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z)
- Investigation of Physics-Informed Deep Learning for the Prediction of Parametric, Three-Dimensional Flow Based on Boundary Data [0.0]
We present a parameterized surrogate model for the prediction of three-dimensional flow fields in aerothermal vehicle simulations.
The proposed physics-informed neural network (PINN) design is aimed at learning families of flow solutions according to a geometric variation.
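A minimal sketch of a parameterized PINN of this kind: the network takes spatial coordinates plus a geometry parameter and is penalized on a PDE residual computed with autograd. Only an incompressibility (continuity) residual is shown; the paper's governing equations, boundary-data loss, and network design are not reproduced here.

```python
import torch
import torch.nn as nn

class ParametricFlowPINN(nn.Module):
    """MLP mapping (x, y, z, geometry parameter) -> velocity components (u, v, w)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xyz_p):
        return self.net(xyz_p)

def continuity_residual(model, xyz, p):
    """Divergence of the predicted velocity field, computed via autograd."""
    xyz = xyz.requires_grad_(True)
    uvw = model(torch.cat([xyz, p], dim=-1))
    div = 0.0
    for i in range(3):
        grad_i = torch.autograd.grad(uvw[:, i].sum(), xyz, create_graph=True)[0][:, i]
        div = div + grad_i
    return div  # driven toward zero for incompressible flow

model = ParametricFlowPINN()
xyz = torch.rand(1024, 3)           # collocation points
p = torch.full((1024, 1), 0.5)      # a sampled geometry parameter
loss = continuity_residual(model, xyz, p).pow(2).mean()  # one term of the PINN loss
```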
arXiv Detail & Related papers (2022-03-17T09:54:22Z)