DeepWheel: Generating a 3D Synthetic Wheel Dataset for Design and Performance Evaluation
- URL: http://arxiv.org/abs/2504.11347v2
- Date: Wed, 16 Apr 2025 04:26:29 GMT
- Title: DeepWheel: Generating a 3D Synthetic Wheel Dataset for Design and Performance Evaluation
- Authors: Soyoung Yoo, Namwoo Kang
- Abstract summary: This study proposes a synthetic design-performance dataset generation framework using generative AI. The framework first generates 2D rendered images using Stable Diffusion, and then reconstructs the 3D geometry through 2.5D depth estimation. The final dataset, named DeepWheel, consists of over 6,000 photo-realistic images and 900 structurally analyzed 3D models.
- Score: 3.3148826359547523
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Data-driven design is emerging as a powerful strategy to accelerate engineering innovation. However, its application to vehicle wheel design remains limited due to the lack of large-scale, high-quality datasets that include 3D geometry and physical performance metrics. To address this gap, this study proposes a synthetic design-performance dataset generation framework using generative AI. The proposed framework first generates 2D rendered images using Stable Diffusion, and then reconstructs the 3D geometry through 2.5D depth estimation. Structural simulations are subsequently performed to extract engineering performance data. To further expand the design and performance space, topology optimization is applied, enabling the generation of a more diverse set of wheel designs. The final dataset, named DeepWheel, consists of over 6,000 photo-realistic images and 900 structurally analyzed 3D models. This multi-modal dataset serves as a valuable resource for surrogate model training, data-driven inverse design, and design space exploration. The proposed methodology is also applicable to other complex design domains. The dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license and is available at https://www.smartdesignlab.org/datasets
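To make the pipeline concrete, here is a minimal sketch of its first two stages: text-to-image generation with Stable Diffusion, followed by 2.5D depth estimation and back-projection of the depth map into a 3D point cloud. The checkpoint names, the prompt, and the camera intrinsics below are illustrative assumptions rather than the paper's actual settings, and the structural-simulation and topology-optimization stages are omitted.

```python
# Minimal sketch of the image-generation and 2.5D-reconstruction stages.
# Assumptions: "runwayml/stable-diffusion-v1-5" and "Intel/dpt-large" stand
# in for the paper's (unspecified) models, and the pinhole intrinsics below
# are placeholders, not calibrated values.
import numpy as np
import torch
from diffusers import StableDiffusionPipeline
from transformers import pipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stage 1: generate a photo-realistic 2D wheel rendering from text.
sd = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
image = sd("a photo-realistic studio render of a five-spoke alloy car wheel, front view").images[0]

# Stage 2a: monocular (2.5D) depth estimation on the generated image.
# Note: DPT predicts relative depth; a real pipeline would recover scale.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large",
                           device=0 if device == "cuda" else -1)
depth = np.asarray(depth_estimator(image)["depth"], dtype=np.float32)

# Stage 2b: back-project the depth map into a point cloud with a pinhole
# camera model (placeholder focal length and principal point).
h, w = depth.shape
fx = fy = 0.8 * w
cx, cy = w / 2.0, h / 2.0
u, v = np.meshgrid(np.arange(w), np.arange(h))
x = (u - cx) * depth / fx
y = (v - cy) * depth / fy
points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # (H*W, 3)

np.save("wheel_pointcloud.npy", points)
```

In the full framework, the reconstructed geometry would then be meshed and passed to structural simulation, with topology optimization used to diversify the designs; those stages require an FEA solver and are beyond this sketch.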
Related papers
- CULTURE3D: Cultural Landmarks and Terrain Dataset for 3D Applications [11.486451047360248]
We present a large-scale fine-grained dataset using high-resolution images captured from locations worldwide. Our dataset is built using drone-captured aerial imagery, which provides a more accurate perspective for capturing real-world site layouts and architectural structures. The dataset enables seamless integration with multi-modal data, supporting a range of 3D applications, from architectural reconstruction to virtual tourism.
arXiv Detail & Related papers (2025-01-12T20:36:39Z) - MegaSynth: Scaling Up 3D Scene Reconstruction with Synthesized Data [59.88075377088134]
We propose scaling up 3D scene reconstruction by training with synthesized data. At the core of our work is MegaSynth, a procedurally generated 3D dataset comprising 700K scenes. Experiment results show that joint training or pre-training with MegaSynth improves reconstruction quality by 1.2 to 1.8 dB PSNR across diverse image domains.
arXiv Detail & Related papers (2024-12-18T18:59:38Z) - Textured Mesh Saliency: Bridging Geometry and Texture for Human Perception in 3D Graphics [50.23625950905638]
We present a new dataset for textured mesh saliency, created through an innovative eye-tracking experiment in a six degrees of freedom (6-DOF) VR environment. Our proposed model predicts saliency maps for textured mesh surfaces by treating each triangular face as an individual unit and assigning a saliency density value to reflect the importance of each local surface region.
arXiv Detail & Related papers (2024-12-11T08:27:33Z) - Open-Vocabulary High-Resolution 3D (OVHR3D) Data Segmentation and Annotation Framework [1.1280113914145702]
This research aims to design and develop a comprehensive and efficient framework for 3D segmentation tasks. The framework integrates Grounding DINO and the Segment Anything Model, augmented by an enhancement in 2D image rendering via the 3D mesh.
arXiv Detail & Related papers (2024-12-09T07:39:39Z) - VehicleSDF: A 3D generative model for constrained engineering design via surrogate modeling [3.746111274696241]
This work uses 3D generative models to explore the design space in the context of vehicle development.
We generate diverse 3D models of cars that meet a given set of geometric specifications.
We also obtain quick estimates of performance parameters such as aerodynamic drag.
arXiv Detail & Related papers (2024-10-09T16:59:24Z) - Zero123-6D: Zero-shot Novel View Synthesis for RGB Category-level 6D Pose Estimation [66.3814684757376]
This work presents Zero123-6D, the first to demonstrate the utility of diffusion-model-based novel-view synthesizers in enhancing RGB 6D pose estimation at the category level.
The method reduces data requirements, removes the need for depth information in the zero-shot category-level 6D pose estimation task, and improves performance, as demonstrated quantitatively through experiments on the CO3D dataset.
arXiv Detail & Related papers (2024-03-21T10:38:18Z) - UniG3D: A Unified 3D Object Generation Dataset [75.49544172927749]
UniG3D is a unified 3D object generation dataset constructed by employing a universal data transformation pipeline on ShapeNet datasets.
This pipeline converts each raw 3D model into a comprehensive multi-modal data representation.
The selection of data sources for our dataset is based on their scale and quality.
arXiv Detail & Related papers (2023-06-19T07:03:45Z) - Surrogate Modeling of Car Drag Coefficient with Depth and Normal Renderings [4.868319717279586]
We propose a new two-dimensional (2D) representation of 3D shapes and verify its effectiveness in predicting 3D car drag.
We construct a diverse dataset of 9,070 high-quality 3D car meshes labeled by drag coefficients.
Our experiments demonstrate that our model can accurately and efficiently evaluate drag coefficients, with an $R^2$ value above 0.84 for various car categories.
arXiv Detail & Related papers (2023-05-26T09:33:12Z) - Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z) - Secrets of 3D Implicit Object Shape Reconstruction in the Wild [92.5554695397653]
Reconstructing high-fidelity 3D objects from sparse, partial observation is crucial for various applications in computer vision, robotics, and graphics.
Recent neural implicit modeling methods show promising results on synthetic or dense datasets.
However, they perform poorly on real-world data, which is sparse and noisy.
This paper analyzes the root cause of such deficient performance of a popular neural implicit model.
arXiv Detail & Related papers (2021-01-18T03:24:48Z) - 3DMaterialGAN: Learning 3D Shape Representation from Latent Space for Materials Science Applications [7.449993399792031]
3DMaterialGAN is capable of recognizing and synthesizing individual grains whose morphology conforms to a given 3D polycrystalline material microstructure.
We show that this method performs comparably to or better than the state of the art on benchmark annotated 3D datasets.
This framework lays the foundation for the recognition and synthesis of polycrystalline material microstructures.
arXiv Detail & Related papers (2020-07-27T21:55:16Z)