DrivAerNet: A Parametric Car Dataset for Data-Driven Aerodynamic Design and Graph-Based Drag Prediction
- URL: http://arxiv.org/abs/2403.08055v1
- Date: Tue, 12 Mar 2024 20:02:39 GMT
- Title: DrivAerNet: A Parametric Car Dataset for Data-Driven Aerodynamic Design and Graph-Based Drag Prediction
- Authors: Mohamed Elrefaie, Angela Dai, Faez Ahmed
- Abstract summary: This study introduces DrivAerNet, a large-scale high-fidelity CFD dataset of 3D industry-standard car shapes, and RegDGCNN, a dynamic graph convolutional neural network model.
Together, DrivAerNet and RegDGCNN promise to accelerate the car design process and contribute to the development of more efficient vehicles.
- Score: 30.697742505713254
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study introduces DrivAerNet, a large-scale high-fidelity CFD dataset of
3D industry-standard car shapes, and RegDGCNN, a dynamic graph convolutional
neural network model, both aimed at aerodynamic car design through machine
learning. DrivAerNet, with its 4000 detailed 3D car meshes, each with 0.5 million
surface mesh faces, and comprehensive aerodynamic performance data comprising
full 3D pressure, velocity fields, and wall-shear stresses, addresses the
critical need for extensive datasets to train deep learning models in
engineering applications. It is 60% larger than the previously available
largest public dataset of cars, and is the only open-source dataset that also
models wheels and underbody. RegDGCNN leverages this large-scale dataset to
provide high-precision drag estimates directly from 3D meshes, bypassing
traditional limitations such as the need for 2D image rendering or Signed
Distance Fields (SDF). By enabling fast drag estimation in seconds, RegDGCNN
facilitates rapid aerodynamic assessments, offering a substantial leap towards
integrating data-driven methods in automotive design. Together, DrivAerNet and
RegDGCNN promise to accelerate the car design process and contribute to the
development of more efficient vehicles. To lay the groundwork for future
innovations in the field, the dataset and code used in our study are publicly
accessible at https://github.com/Mohamedelrefaie/DrivAerNet.
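As an illustration of the kind of model RegDGCNN represents, the sketch below maps points sampled from a car's surface mesh directly to a scalar drag coefficient with EdgeConv-style dynamic graph convolutions. It is a minimal plain-PyTorch sketch, not the authors' implementation; the layer widths, neighbourhood size k, point count, and global average pooling are illustrative assumptions.

```python
# Minimal EdgeConv-style drag regressor (a sketch, not the authors' RegDGCNN).
# Each car is represented here by N points sampled from its surface mesh, shape (B, N, 3).
import torch
import torch.nn as nn

def knn_graph(x, k):
    """Indices of the k nearest neighbours of every point; x: (B, N, C) -> (B, N, k)."""
    dist = torch.cdist(x, x)                                   # pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]    # drop the point itself

class EdgeConv(nn.Module):
    """Shared MLP over edge features [x_i, x_j - x_i], max-pooled over the k neighbours."""
    def __init__(self, in_c, out_c, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(
            nn.Linear(2 * in_c, out_c), nn.ReLU(),
            nn.Linear(out_c, out_c), nn.ReLU(),
        )

    def forward(self, x):                                       # x: (B, N, C)
        B, N, C = x.shape
        idx = knn_graph(x, self.k)                              # (B, N, k)
        neigh = torch.gather(
            x.unsqueeze(1).expand(B, N, N, C), 2,
            idx.unsqueeze(-1).expand(B, N, self.k, C))          # (B, N, k, C)
        center = x.unsqueeze(2).expand_as(neigh)
        edge = torch.cat([center, neigh - center], dim=-1)      # (B, N, k, 2C)
        return self.mlp(edge).max(dim=2).values                 # (B, N, out_c)

class DragRegressor(nn.Module):
    """Point cloud in, scalar drag coefficient out."""
    def __init__(self, k=20):
        super().__init__()
        self.conv1 = EdgeConv(3, 64, k)
        self.conv2 = EdgeConv(64, 128, k)                       # graph rebuilt in feature space
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, pts):                                     # pts: (B, N, 3)
        h = self.conv2(self.conv1(pts))                         # per-point features
        return self.head(h.mean(dim=1)).squeeze(-1)             # global pooling -> drag

cd_pred = DragRegressor()(torch.rand(4, 1024, 3))               # 4 cars, 1024 sampled points
```

The "dynamic" part of a dynamic graph CNN is that the second EdgeConv recomputes its k-nearest-neighbour graph in learned feature space rather than in the input coordinates, which is what lets such a network consume 3D geometry directly without 2D renderings or SDFs.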
Related papers
- VECTOR: Velocity-Enhanced GRU Neural Network for Real-Time 3D UAV Trajectory Prediction [2.1825723033513165]
We propose a new trajectory prediction method using Gated Recurrent Units (GRUs) within sequence-based neural networks.
We employ both synthetic and real-world 3D UAV trajectory data, capturing a wide range of flight patterns, speeds, and agility.
The GRU-based models significantly outperform state-of-the-art RNN approaches, with a mean square error (MSE) as low as 2 × 10^-8.
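A minimal sketch of a GRU-based next-position predictor of this kind (the hidden size, position-plus-velocity inputs, and single-step horizon are illustrative assumptions, not the paper's configuration):

```python
# Sketch: GRU that maps a window of past 3D states to the next 3D position.
import torch
import torch.nn as nn

class GRUTrajectoryPredictor(nn.Module):
    def __init__(self, in_dim=6, hidden=64):                  # 6 = position + velocity
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)                      # predict (x, y, z)

    def forward(self, past):                                  # past: (B, T, 6)
        _, h_n = self.gru(past)                               # final hidden state (1, B, hidden)
        return self.head(h_n.squeeze(0))                      # (B, 3)

model = GRUTrajectoryPredictor()
pred = model(torch.rand(4, 20, 6))                            # 4 trajectories, 20 past steps
loss = nn.functional.mse_loss(pred, torch.rand(4, 3))         # MSE, the metric quoted above
```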
arXiv Detail & Related papers (2024-10-24T07:16:42Z)
- DrivAerNet++: A Large-Scale Multimodal Car Dataset with Computational Fluid Dynamics Simulations and Deep Learning Benchmarks [25.00264553520033]
DrivAerNet++ comprises 8,000 diverse car designs modeled with high-fidelity computational fluid dynamics (CFD) simulations.
The dataset includes diverse car configurations such as fastback, notchback, and estateback, with different underbody and wheel designs to represent both internal combustion engines and electric vehicles.
This dataset supports a wide array of machine learning applications including data-driven design optimization, generative modeling, surrogate model training, CFD simulation acceleration, and geometric classification.
arXiv Detail & Related papers (2024-06-13T23:19:48Z)
- DriveWorld: 4D Pre-trained Scene Understanding via World Models for Autonomous Driving [67.46481099962088]
Current vision-centric pre-training typically relies on either 2D or 3D pre-text tasks, overlooking the temporal characteristics of autonomous driving as a 4D scene understanding task.
We introduce DriveWorld, which is capable of pre-training from multi-camera driving videos in a spatio-temporal fashion.
DriveWorld delivers promising results on various autonomous driving tasks.
arXiv Detail & Related papers (2024-05-07T15:14:20Z)
- TrajectoryNAS: A Neural Architecture Search for Trajectory Prediction [0.0]
Trajectory prediction is a critical component of autonomous driving systems.
This paper introduces TrajectoryNAS, a pioneering method that focuses on utilizing point cloud data for trajectory prediction.
arXiv Detail & Related papers (2024-03-18T11:48:41Z)
- AutoSynth: Learning to Generate 3D Training Data for Object Point Cloud Registration [69.21282992341007]
AutoSynth automatically generates 3D training data for point cloud registration.
We replace the point cloud registration network with a much smaller surrogate network, leading to a 4056.43× speedup.
Our results on TUD-L, LINEMOD and Occluded-LINEMOD evidence that a neural network trained on our searched dataset yields consistently better performance than the same one trained on the widely used ModelNet40 dataset.
arXiv Detail & Related papers (2023-09-20T09:29:44Z)
- Pre-training on Synthetic Driving Data for Trajectory Prediction [61.520225216107306]
We propose a pipeline-level solution to mitigate the issue of data scarcity in trajectory forecasting.
We adopt HD map augmentation and trajectory synthesis for generating driving data, and then we learn representations by pre-training on them.
We conduct extensive experiments to demonstrate the effectiveness of our data expansion and pre-training strategies.
arXiv Detail & Related papers (2023-09-18T19:49:22Z)
- Surrogate Modeling of Car Drag Coefficient with Depth and Normal Renderings [4.868319717279586]
We propose a new two-dimensional (2D) representation of 3D shapes to verify its effectiveness in predicting 3D car drag.
We construct a diverse dataset of 9,070 high-quality 3D car meshes labeled by drag coefficients.
Our experiments demonstrate that our model can accurately and efficiently evaluate drag coefficients with an R^2 value above 0.84 for various car categories.
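A hypothetical minimal form of such a rendering-based surrogate: a small CNN regressing the drag coefficient from stacked depth and surface-normal channels. The channel layout, resolution, and architecture are assumptions for illustration, not the paper's model.

```python
# Sketch: CNN surrogate mapping a depth + normal rendering to a drag coefficient.
# Assumes one view with 4 channels (1 depth + 3 normal components).
import torch
import torch.nn as nn

class RenderingDragSurrogate(nn.Module):
    def __init__(self, in_ch=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                          # (B, 128, 1, 1)
        )
        self.head = nn.Linear(128, 1)

    def forward(self, imgs):                                  # imgs: (B, 4, H, W)
        return self.head(self.features(imgs).flatten(1)).squeeze(-1)

cd = RenderingDragSurrogate()(torch.rand(2, 4, 128, 128))     # two rendered cars
```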
arXiv Detail & Related papers (2023-05-26T09:33:12Z)
- DL-Traff: Survey and Benchmark of Deep Learning Models for Urban Traffic Prediction [7.476566278759198]
By leveraging state-of-the-art deep learning technologies on such data, urban traffic prediction has drawn a lot of attention in the AI and Intelligent Transportation Systems community.
According to the specific modeling strategy, the state-of-the-art deep learning models can be divided into three categories: grid-based, graph-based, and time-series models.
arXiv Detail & Related papers (2021-08-20T10:08:26Z)
- PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
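To make the PointNet idea concrete, the sketch below encodes the points inside each detection box into an order-invariant descriptor and scores cross-frame pairs by cosine similarity; it illustrates the general approach, not PC-DAN's architecture.

```python
# Sketch: PointNet-style per-detection descriptor plus a cross-frame affinity matrix.
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    """Shared per-point MLP followed by a symmetric max-pool -> one descriptor per object."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, out_dim), nn.ReLU())

    def forward(self, pts):                                   # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values                # (B, out_dim), order-invariant

encoder = PointNetEncoder()
feats_t  = encoder(torch.rand(5, 256, 3))                     # 5 detections at frame t
feats_t1 = encoder(torch.rand(6, 256, 3))                     # 6 detections at frame t+1
affinity = nn.functional.cosine_similarity(
    feats_t.unsqueeze(1), feats_t1.unsqueeze(0), dim=-1)      # (5, 6) association scores
```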
arXiv Detail & Related papers (2021-06-03T05:36:39Z)
- Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified and learning based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
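A minimal sketch of one trainable message-passing round over a detection/track graph, the kind of component such an approach relies on (feature sizes and update functions are illustrative assumptions, not the paper's network):

```python
# Sketch: one round of neural message passing; a linear head on the edge features
# then produces per-edge association scores.
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, node_dim=32, edge_dim=32):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, edge_dim), nn.ReLU())
        self.node_mlp = nn.Sequential(nn.Linear(node_dim + edge_dim, node_dim), nn.ReLU())

    def forward(self, x, edge_index, e):
        # x: (N, node_dim) nodes, e: (E, edge_dim) edges, edge_index: (2, E) source/target ids
        src, dst = edge_index
        e = self.edge_mlp(torch.cat([x[src], x[dst], e], dim=-1))            # edge update
        agg = torch.zeros(x.size(0), e.size(1), device=x.device)
        agg.index_add_(0, dst, e)                                            # sum incoming messages
        x = self.node_mlp(torch.cat([x, agg], dim=-1))                       # node update
        return x, e

x, e = MessagePassingLayer()(torch.rand(4, 32),
                             torch.tensor([[0, 1, 2], [1, 2, 3]]),           # 3 directed edges
                             torch.rand(3, 32))
scores = nn.Linear(32, 1)(e)                                                 # per-edge logits
```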
arXiv Detail & Related papers (2021-04-23T17:59:28Z)
- InfoFocus: 3D Object Detection for Autonomous Driving with Dynamic Information Modeling [65.47126868838836]
We propose a novel 3D object detection framework with dynamic information modeling.
Coarse predictions are generated in the first stage via a voxel-based region proposal network.
Experiments are conducted on the large-scale nuScenes 3D detection benchmark.
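As a rough illustration of the voxelized input such a first stage consumes, the sketch below bins a point cloud into a binary occupancy grid; the voxel size and grid extent are arbitrary assumptions, not InfoFocus's configuration.

```python
# Sketch: turn a LiDAR point cloud into a binary occupancy grid for a voxel-based detector.
import torch

def voxelize(points, voxel_size=0.2, grid=(200, 200, 20)):
    """points: (N, 3) coordinates in metres -> occupancy grid of shape `grid`."""
    idx = (points / voxel_size).long()                         # voxel index of each point
    inside = ((idx >= 0) & (idx < torch.tensor(grid))).all(dim=1)
    idx = idx[inside]                                          # keep points within the grid
    occ = torch.zeros(grid)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0                 # mark occupied voxels
    return occ

occ = voxelize(torch.rand(10000, 3) * torch.tensor([40.0, 40.0, 4.0]))   # toy 40m x 40m x 4m scene
```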
arXiv Detail & Related papers (2020-07-16T18:27:08Z)