Enhancing Vehicle Aerodynamics with Deep Reinforcement Learning in Voxelised Models
- URL: http://arxiv.org/abs/2405.11492v1
- Date: Sun, 19 May 2024 09:19:31 GMT
- Title: Enhancing Vehicle Aerodynamics with Deep Reinforcement Learning in Voxelised Models
- Authors: Jignesh Patel, Yannis Spyridis, Vasileios Argyriou
- Abstract summary: This paper presents a novel approach for aerodynamic optimisation in car design using deep reinforcement learning (DRL).
The proposed approach uses voxelised models to discretise the vehicle geometry into a grid of voxels, allowing for a detailed representation of the aerodynamic flow field.
Experimental results demonstrate the effectiveness and efficiency of the proposed approach in achieving significant improvements in aerodynamic performance.
- Score: 6.16808916207942
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Aerodynamic design optimisation plays a crucial role in improving the performance and efficiency of automotive vehicles. This paper presents a novel approach for aerodynamic optimisation in car design using deep reinforcement learning (DRL). Traditional optimisation methods often face challenges in handling the complexity of the design space and capturing non-linear relationships between design parameters and aerodynamic performance metrics. This study addresses these challenges by employing DRL to learn optimal aerodynamic design strategies in a voxelised model representation. The proposed approach utilises voxelised models to discretise the vehicle geometry into a grid of voxels, allowing for a detailed representation of the aerodynamic flow field. The Proximal Policy Optimisation (PPO) algorithm is then employed to train a DRL agent to optimise the design parameters of the vehicle with respect to drag force, kinetic energy, and voxel collision count. Experimental results demonstrate the effectiveness and efficiency of the proposed approach in achieving significant improvements in aerodynamic performance. The findings highlight the potential of DRL techniques for addressing complex aerodynamic design optimisation problems in automotive engineering, with implications for improving vehicle performance, fuel efficiency, and environmental sustainability.
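The paper publishes no code, but the pipeline its abstract describes (voxelise the geometry, score it on drag-related quantities and voxel counts, optimise design parameters with a policy-gradient agent) can be sketched end to end. In the toy below every detail is an assumption: the box-shaped "vehicle", the frontal-area drag proxy, the reward weights, and a plain REINFORCE update standing in for the paper's PPO.

```python
import numpy as np

def voxelise(length, height, n=16):
    """Discretise a simple box 'vehicle' into an n*n*n occupancy grid.
    Toy stand-in for the paper's voxelised geometry representation."""
    grid = np.zeros((n, n, n), dtype=bool)
    lx = max(1, int(length * n))
    lz = max(1, int(height * n))
    grid[:lx, :, :lz] = True
    return grid

def reward(grid, flow_axis=0):
    """Assumed reward: penalise frontal area (a crude drag proxy)
    and total occupied-voxel count (standing in for the collision term)."""
    frontal_area = grid.any(axis=flow_axis).sum()  # voxels facing the flow
    volume = grid.sum()
    return -(1.0 * frontal_area + 0.01 * volume)

# Gaussian policy over two design parameters (length, height in [0.05, 1]).
mean = np.array([0.8, 0.8])
std = 0.1
rng = np.random.default_rng(0)

for step in range(200):
    # Sample a batch of designs, score them, take a REINFORCE ascent step.
    params = np.clip(mean + std * rng.standard_normal((32, 2)), 0.05, 1.0)
    rewards = np.array([reward(voxelise(l, h)) for l, h in params])
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    grad = ((params - mean) / std**2 * adv[:, None]).mean(axis=0)
    mean = np.clip(mean + 1e-3 * grad, 0.05, 1.0)

print(mean)  # optimised design parameters
```

A full reproduction would replace the drag proxy with a CFD evaluation of the voxel grid and the REINFORCE step with clipped-surrogate PPO updates; this sketch only shows the shape of the loop.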
Related papers
- Diffusion Models as Optimizers for Efficient Planning in Offline RL [47.0835433289033]
Diffusion models have shown strong competitiveness in offline reinforcement learning tasks.
We propose a faster autoregressive model to handle the generation of feasible trajectories.
This allows us to achieve more efficient planning without sacrificing capability.
arXiv Detail & Related papers (2024-07-23T03:00:01Z)
- Generative AI-based Prompt Evolution Engineering Design Optimization With Vision-Language Model [22.535058343006828]
We present a prompt evolution design optimization (PEDO) framework contextualized in a vehicle design scenario.
We use a physics-based solver and a vision-language model for practical or functional guidance in the generated car designs.
Our investigations on a car design optimization problem show a wide spread of potential car designs generated at the early phase of the search.
arXiv Detail & Related papers (2024-06-13T14:11:19Z)
- Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z)
- RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality [51.244807332133696]
This paper introduces RACER, a cutting-edge deep learning car-following model to predict Adaptive Cruise Control (ACC) driving behavior.
Unlike conventional models, RACER effectively integrates Rational Driving Constraints (RDCs), crucial tenets of actual driving.
RACER excels across key metrics, such as acceleration, velocity, and spacing, registering zero violations.
arXiv Detail & Related papers (2023-12-12T06:21:30Z)
- Aligning Optimization Trajectories with Diffusion Models for Constrained Design Generation [17.164961143132473]
We introduce a learning framework that demonstrates the efficacy of aligning the sampling trajectory of diffusion models with the optimization trajectory derived from traditional physics-based methods.
Our method allows for generating feasible and high-performance designs in as few as two steps without the need for expensive preprocessing, external surrogate models, or additional labeled data.
Our results demonstrate that Trajectory Alignment (TA) outperforms state-of-the-art deep generative models on in-distribution configurations and halves the inference computational cost.
arXiv Detail & Related papers (2023-05-29T09:16:07Z)
- A Synergistic Framework Leveraging Autoencoders and Generative Adversarial Networks for the Synthesis of Computational Fluid Dynamics Results in Aerofoil Aerodynamics [0.5018156030818882]
This study proposes a novel approach that combines autoencoders and Generative Adversarial Networks (GANs) for the purpose of generating CFD results.
Our innovative framework harnesses the intrinsic capabilities of autoencoders to encode aerofoil geometries into a compressed and informative 20-length vector representation.
A conditional GAN network then translates this vector into precise pressure-distribution plots, accounting for fixed wind velocity, angle of attack, and turbulence level specifications.
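The compression step this summary describes, encoding an aerofoil geometry into a 20-length latent vector, can be illustrated without a full GAN. The sketch below uses a linear autoencoder (equivalent to PCA via SVD) on synthetic shape data; the dataset, its dimensions, and the linear encoder are all stand-in assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical dataset: 200 aerofoil shapes, each 64 surface y-coordinates,
# generated from 8 latent factors so a compact code can capture them.
shapes = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 64))

centre = shapes.mean(axis=0)
# Linear autoencoder via SVD: the top-20 principal directions play the
# role of the paper's 20-length latent vector.
_, _, vt = np.linalg.svd(shapes - centre, full_matrices=False)
encode = lambda x: (x - centre) @ vt[:20].T   # geometry -> 20-vector
decode = lambda z: z @ vt[:20] + centre       # 20-vector -> geometry

z = encode(shapes[0])
reconstruction = decode(z)
print(z.shape, np.allclose(reconstruction, shapes[0], atol=1e-6))
```

In the actual framework a nonlinear autoencoder produces the 20-length vector and a conditional GAN maps it to pressure-distribution plots; the linear version here only demonstrates the encode/decode round trip.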
arXiv Detail & Related papers (2023-05-28T09:46:18Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build on successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
arXiv Detail & Related papers (2022-10-07T02:12:34Z)
- Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z)
- Deep Learning-Based Inverse Design for Engineering Systems: Multidisciplinary Design Optimization of Automotive Brakes [2.362412515574206]
Apparent piston travel (APT) and drag torque are the most representative factors for evaluating braking performance.
Recent studies on inverse design that use deep learning (DL) have established the possibility of instantly generating an optimal design.
The multidisciplinary inverse design (MID) achieved performance similar to single-disciplinary inverse design in terms of accuracy and computational cost.
arXiv Detail & Related papers (2022-02-27T08:29:50Z)
- Reinforcement Learning to Optimize the Logistics Distribution Routes of Unmanned Aerial Vehicle [0.0]
This paper proposes an improved method for UAV path planning in complex environments containing multiple no-fly zones.
The results show the feasibility and efficiency of applying the model in this kind of complicated situation.
arXiv Detail & Related papers (2020-04-21T09:42:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.