Surrogate Modeling of Car Drag Coefficient with Depth and Normal Renderings
- URL: http://arxiv.org/abs/2306.06110v1
- Date: Fri, 26 May 2023 09:33:12 GMT
- Title: Surrogate Modeling of Car Drag Coefficient with Depth and Normal Renderings
- Authors: Binyang Song, Chenyang Yuan, Frank Permenter, Nikos Arechiga, Faez Ahmed
- Abstract summary: We propose a new two-dimensional (2D) representation of 3D shapes and build a surrogate drag model on it to verify its effectiveness in predicting 3D car drag.
We construct a diverse dataset of 9,070 high-quality 3D car meshes labeled by drag coefficients.
Our experiments demonstrate that our model can accurately and efficiently evaluate drag coefficients with an $R^2$ value above 0.84 for various car categories.
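For reference, the $R^2$ quoted here is presumably the standard coefficient of determination over held-out cars; under that assumption it reads:

```latex
% Coefficient of determination over N held-out cars: y_i is the CFD drag
% label, \hat{y}_i the surrogate prediction, and \bar{y} the label mean.
R^2 = 1 - \frac{\sum_{i=1}^{N} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}
```

On this reading, $R^2 > 0.84$ means the surrogate explains more than 84% of the variance in the CFD-computed drag coefficients.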
- Score: 4.868319717279586
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative AI models have made significant progress in automating the
creation of 3D shapes, which has the potential to transform car design. In
engineering design and optimization, evaluating engineering metrics is crucial.
To make generative models performance-aware and enable them to create
high-performing designs, surrogate modeling of these metrics is necessary.
However, the currently used representations of three-dimensional (3D) shapes
either require extensive computational resources to learn or suffer from
significant information loss, which impairs their effectiveness in surrogate
modeling. To address this issue, we propose a new two-dimensional (2D)
representation of 3D shapes. We develop a surrogate drag model based on this
representation to verify its effectiveness in predicting 3D car drag. We
construct a diverse dataset of 9,070 high-quality 3D car meshes labeled by drag
coefficients computed from computational fluid dynamics (CFD) simulations to
train our model. Our experiments demonstrate that our model can accurately and
efficiently evaluate drag coefficients with an $R^2$ value above 0.84 for
various car categories. Moreover, the proposed representation method can be
generalized to many other product categories beyond cars. Our model is
implemented using deep neural networks, making it compatible with recent AI
image generation tools (such as Stable Diffusion) and a significant step
towards the automatic generation of drag-optimized car designs. We have made
the dataset and code publicly available at
https://decode.mit.edu/projects/dragprediction/.
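To make the pipeline concrete, here is a minimal sketch of what such a surrogate could look like: a stack of depth and normal renderings goes into a standard CNN regressor that outputs a scalar drag coefficient. The backbone choice (ResNet-18), channel layout, and single-view setup are illustrative assumptions, not the paper's exact configuration:

```python
# Minimal sketch of a drag surrogate over 2D renderings of a car body.
# Assumptions (not from the paper): a single view, one depth channel plus
# a 3-channel normal map, and an off-the-shelf ResNet-18 backbone.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DragSurrogate(nn.Module):
    def __init__(self, in_channels: int = 4):  # depth (1) + normals (3)
        super().__init__()
        self.backbone = resnet18(weights=None)
        # Swap the RGB stem so the network accepts depth+normal stacks.
        self.backbone.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7,
                                        stride=2, padding=3, bias=False)
        # Regress a single scalar: the drag coefficient C_d.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x).squeeze(-1)

model = DragSurrogate()
renderings = torch.rand(8, 4, 224, 224)                # batch of rendered views
pred_cd = model(renderings)                            # shape: (8,)
loss = nn.functional.mse_loss(pred_cd, torch.rand(8))  # vs. CFD-computed labels
```

Because a surrogate of this form consumes ordinary image tensors, it can sit directly behind image-space generators such as Stable Diffusion, which is the compatibility the abstract points to.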
Related papers
- VehicleSDF: A 3D generative model for constrained engineering design via surrogate modeling [3.746111274696241]
This work explores the use of 3D generative models to navigate the design space in the context of vehicle development.
We generate diverse 3D models of cars that meet a given set of geometric specifications.
We also obtain quick estimates of performance parameters such as aerodynamic drag.
arXiv Detail & Related papers (2024-10-09T16:59:24Z)
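As a rough illustration of the signed-distance-function representation that VehicleSDF's title refers to (a generic sketch under common conventions, not that paper's actual architecture), a neural SDF is a small MLP mapping a 3D point to its signed distance from the car surface; the zero level set is the shape:

```python
# Generic neural SDF sketch: f(x, y, z) -> signed distance to the surface.
# Architecture and sizes are illustrative, not taken from VehicleSDF.
import torch
import torch.nn as nn

class NeuralSDF(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # signed distance; zero level set = surface
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        return self.net(points).squeeze(-1)

sdf = NeuralSDF()
points = torch.rand(1024, 3) * 2 - 1  # query points in [-1, 1]^3
distances = sdf(points)               # negative inside, positive outside
```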
- Bayesian Mesh Optimization for Graph Neural Networks to Enhance Engineering Performance Prediction [1.6574413179773761]
In engineering design, surrogate models are widely employed to replace computationally expensive simulations.
We propose a Bayesian graph neural network (GNN) framework for a 3D deep-learning-based surrogate model.
Our framework determines the optimal size of mesh elements through Bayesian optimization, resulting in a high-accuracy surrogate model.
arXiv Detail & Related papers (2024-06-04T06:27:48Z)
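The mesh-size search described in this entry can be pictured as a low-dimensional Bayesian optimization loop. The sketch below is a guess at the overall shape of such a loop, using scikit-optimize's gp_minimize with a stubbed-out objective standing in for the expensive step of remeshing and training the GNN surrogate:

```python
# Hypothetical Bayesian optimization of mesh element size.
# train_gnn_surrogate is a stub for the expensive inner loop.
from skopt import gp_minimize

def train_gnn_surrogate(mesh_size: float) -> float:
    """Remesh at `mesh_size`, train the GNN surrogate, and return its
    validation error (stubbed here with a placeholder objective)."""
    return (mesh_size - 2.5) ** 2

result = gp_minimize(
    lambda params: train_gnn_surrogate(params[0]),
    dimensions=[(0.5, 10.0)],  # candidate element sizes (arbitrary units)
    n_calls=20,                # budget of expensive evaluations
    random_state=0,
)
print("best mesh size:", result.x[0], "val error:", result.fun)
```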
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to 3D domains, and seek a stronger ability of 3D shape generation by improving auto-regressive models at capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
- Deep Generative Models on 3D Representations: A Survey [81.73385191402419]
Generative models aim to learn the distribution of observed data by generating new instances.
Recently, researchers started to shift focus from 2D to 3D space.
However, representing 3D data poses significantly greater challenges.
arXiv Detail & Related papers (2022-10-27T17:59:50Z)
- Secrets of 3D Implicit Object Shape Reconstruction in the Wild [92.5554695397653]
Reconstructing high-fidelity 3D objects from sparse, partial observations is crucial for various applications in computer vision, robotics, and graphics.
Recent neural implicit modeling methods show promising results on synthetic or dense datasets.
However, they perform poorly on real-world data, which is sparse and noisy.
This paper analyzes the root cause of such deficient performance of a popular neural implicit model.
arXiv Detail & Related papers (2021-01-18T03:24:48Z)
- Generative VoxelNet: Learning Energy-Based Models for 3D Shape Synthesis and Analysis [143.22192229456306]
This paper proposes a deep 3D energy-based model to represent volumetric shapes.
The benefits of the proposed model are six-fold.
Experiments demonstrate that the proposed model can generate high-quality 3D shape patterns.
arXiv Detail & Related papers (2020-12-25T06:09:36Z)
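For intuition about the energy-based formulation in the Generative VoxelNet entry above: such models pair an energy network with Langevin sampling. The toy sketch below is generic; the layer sizes and step schedule are made up, and the real model is convolutional:

```python
# Toy energy-based model over voxel grids with Langevin sampling.
# Network and hyperparameters are illustrative, not from Generative VoxelNet.
import torch
import torch.nn as nn

energy = nn.Sequential(  # maps a 16^3 voxel grid to a scalar energy
    nn.Flatten(),
    nn.Linear(16 ** 3, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

def langevin_sample(steps: int = 60, step_size: float = 0.01) -> torch.Tensor:
    """Draw a voxel sample by noisy gradient descent on the energy."""
    x = torch.randn(1, 16, 16, 16, requires_grad=True)
    for _ in range(steps):
        grad, = torch.autograd.grad(energy(x).sum(), x)
        with torch.no_grad():
            x -= 0.5 * step_size * grad                    # move downhill in energy
            x += (step_size ** 0.5) * torch.randn_like(x)  # Langevin noise
    return x.detach()

shape = langevin_sample()  # a low-energy (high-probability) voxel pattern
```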
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
- PerMO: Perceiving More at Once from a Single Image for Autonomous Driving [76.35684439949094]
We present a novel approach to detect, segment, and reconstruct complete textured 3D models of vehicles from a single image.
Our approach combines the strengths of deep learning and the elegance of traditional techniques.
We have integrated these algorithms with an autonomous driving system.
arXiv Detail & Related papers (2020-07-16T05:02:45Z)