Enhancing Vehicle Aerodynamics with Deep Reinforcement Learning in Voxelised Models
- URL: http://arxiv.org/abs/2405.11492v1
- Date: Sun, 19 May 2024 09:19:31 GMT
- Title: Enhancing Vehicle Aerodynamics with Deep Reinforcement Learning in Voxelised Models
- Authors: Jignesh Patel, Yannis Spyridis, Vasileios Argyriou
- Abstract summary: This paper presents a novel approach for aerodynamic optimisation in car design using deep reinforcement learning (DRL).
The proposed approach uses voxelised models to discretise the vehicle geometry into a grid of voxels, allowing for a detailed representation of the aerodynamic flow field.
Experimental results demonstrate the effectiveness and efficiency of the proposed approach in achieving significant improvements in aerodynamic performance.
- Score: 6.16808916207942
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Aerodynamic design optimisation plays a crucial role in improving the performance and efficiency of automotive vehicles. This paper presents a novel approach for aerodynamic optimisation in car design using deep reinforcement learning (DRL). Traditional optimisation methods often face challenges in handling the complexity of the design space and capturing non-linear relationships between design parameters and aerodynamic performance metrics. This study addresses these challenges by employing DRL to learn optimal aerodynamic design strategies in a voxelised model representation. The proposed approach utilises voxelised models to discretise the vehicle geometry into a grid of voxels, allowing for a detailed representation of the aerodynamic flow field. The Proximal Policy Optimisation (PPO) algorithm is then employed to train a DRL agent to optimise the design parameters of the vehicle with respect to drag force, kinetic energy, and voxel collision count. Experimental results demonstrate the effectiveness and efficiency of the proposed approach in achieving significant improvements in aerodynamic performance. The findings highlight the potential of DRL techniques for addressing complex aerodynamic design optimisation problems in automotive engineering, with implications for improving vehicle performance, fuel efficiency, and environmental sustainability.
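The abstract describes two concrete ingredients: discretising the vehicle geometry into a voxel grid, and a reward signal built from drag force, kinetic energy, and voxel collision count for the PPO agent. A minimal Python/NumPy sketch of both is below; the grid resolution, reward weights, and function names are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def voxelise(points: np.ndarray, grid_shape=(32, 32, 32)) -> np.ndarray:
    """Discretise a point cloud of the vehicle surface into a boolean voxel grid."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    # Normalise each coordinate into [0, 1], then scale to integer grid indices.
    scaled = (points - mins) / np.maximum(maxs - mins, 1e-9)
    idx = np.minimum((scaled * grid_shape).astype(int), np.array(grid_shape) - 1)
    grid = np.zeros(grid_shape, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def reward(drag_force: float, kinetic_energy: float, collision_count: int,
           w_drag: float = 1.0, w_ke: float = 0.1, w_col: float = 10.0) -> float:
    """Composite reward penalising drag, kinetic energy, and voxel collisions.

    The weights are hypothetical; the paper only names the three terms.
    """
    return -(w_drag * drag_force + w_ke * kinetic_energy + w_col * collision_count)
```

In a full pipeline, each PPO step would mutate the design parameters, re-voxelise the geometry, query the flow solver for drag and kinetic energy, and feed this scalar back as the agent's reward (e.g. through a Gymnasium-style environment wrapped around a library implementation of PPO such as Stable-Baselines3).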
Related papers
- Adjoint-Based Aerodynamic Shape Optimization with a Manifold Constraint Learned by Diffusion Models [12.019764781438603]
We introduce an adjoint-based aerodynamic shape optimization framework that integrates a diffusion model trained on existing designs to learn a smooth manifold of aerodynamically viable shapes.
We demonstrate how AI-generated priors integrate effectively with adjoint methods to enable robust, high-fidelity aerodynamic shape optimization through automatic differentiation.
arXiv Detail & Related papers (2025-07-31T11:21:20Z) - Iterative Distillation for Reward-Guided Fine-Tuning of Diffusion Models in Biomolecular Design [53.93023688824764]
We address the problem of fine-tuning diffusion models for reward-guided generation in biomolecular design.
We propose an iterative distillation-based fine-tuning framework that enables diffusion models to optimize for arbitrary reward functions.
Our off-policy formulation, combined with KL divergence minimization, enhances training stability and sample efficiency compared to existing RL-based methods.
arXiv Detail & Related papers (2025-07-01T05:55:28Z) - Preference Optimization for Combinatorial Optimization Problems [54.87466279363487]
Reinforcement Learning (RL) has emerged as a powerful tool for neural optimization, enabling models to learn to solve complex problems without requiring expert knowledge.
Despite significant progress, existing RL approaches face challenges such as diminishing reward signals and inefficient exploration in vast action spaces.
We propose Preference Optimization, a novel method that transforms quantitative reward signals into qualitative preference signals via statistical comparison modeling.
arXiv Detail & Related papers (2025-05-13T16:47:00Z) - Transfer learning-enhanced deep reinforcement learning for aerodynamic airfoil optimisation subject to structural constraints [1.5468177185307304]
This paper introduces a transfer learning-enhanced deep reinforcement learning (DRL) methodology that is able to optimise the geometry of any airfoil.
To showcase the method, we aim to maximise the lift-to-drag ratio $C_L/C_D$ while preserving the structural integrity of the airfoil.
The performance of the DRL agent is compared with Particle Swarm Optimisation (PSO), a traditional gradient-free method.
arXiv Detail & Related papers (2025-05-05T13:26:11Z) - AI-Enhanced Automatic Design of Efficient Underwater Gliders [60.45821679800442]
Building an automated design framework is challenging due to the complexities of representing glider shapes and the high computational costs associated with modeling complex solid-fluid interactions.
We introduce an AI-enhanced automated computational framework designed to overcome these limitations by enabling the creation of underwater robots with non-trivial hull shapes.
Our approach involves an algorithm that co-optimizes both shape and control signals, utilizing a reduced-order geometry representation and a differentiable neural-network-based fluid surrogate model.
arXiv Detail & Related papers (2025-04-30T23:55:44Z) - Efficient Design of Compliant Mechanisms Using Multi-Objective Optimization [50.24983453990065]
We address the synthesis of a compliant cross-hinge mechanism capable of large angular strokes.
We formulate a multi-objective optimization problem based on kinetostatic performance measures.
arXiv Detail & Related papers (2025-04-23T06:29:10Z) - Accelerated Gradient-based Design Optimization Via Differentiable Physics-Informed Neural Operator: A Composites Autoclave Processing Case Study [0.0]
We introduce a novel Physics-Informed DeepONet (PIDON) architecture to effectively model the nonlinear behavior of complex engineering systems.
We demonstrate the effectiveness of this framework in the optimization of aerospace-grade composites curing processes achieving a 3x speedup.
The proposed model has the potential to be used as a scalable and efficient optimization tool for broader applications in advanced engineering and digital twin systems.
arXiv Detail & Related papers (2025-02-17T07:11:46Z) - Airfoil Diffusion: Denoising Diffusion Model For Conditional Airfoil Generation [7.136205674624813]
We introduce a data-driven methodology for airfoil generation using a diffusion model.
Trained on a dataset of preexisting airfoils, our model can generate an arbitrary number of new airfoils from random vectors.
arXiv Detail & Related papers (2024-08-28T16:12:16Z) - Diffusion Models as Optimizers for Efficient Planning in Offline RL [47.0835433289033]
Diffusion models have shown strong competitiveness in offline reinforcement learning tasks.
We propose a faster autoregressive model to handle the generation of feasible trajectories.
This allows us to achieve more efficient planning without sacrificing capability.
arXiv Detail & Related papers (2024-07-23T03:00:01Z) - Generative AI-based Prompt Evolution Engineering Design Optimization With Vision-Language Model [22.535058343006828]
We present a prompt evolution design optimization (PEDO) framework contextualized in a vehicle design scenario.
We use a physics-based solver and a vision-language model for practical or functional guidance in the generated car designs.
Our investigations on a car design optimization problem show a wide spread of potential car designs generated at the early phase of the search.
arXiv Detail & Related papers (2024-06-13T14:11:19Z) - Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z) - RACER: Rational Artificial Intelligence Car-following-model Enhanced by Reality [51.244807332133696]
This paper introduces RACER, a cutting-edge deep learning car-following model to predict Adaptive Cruise Control (ACC) driving behavior.
Unlike conventional models, RACER effectively integrates Rational Driving Constraints (RDCs), crucial tenets of actual driving.
RACER excels across key metrics, such as acceleration, velocity, and spacing, registering zero violations.
arXiv Detail & Related papers (2023-12-12T06:21:30Z) - Aligning Optimization Trajectories with Diffusion Models for Constrained Design Generation [17.164961143132473]
We introduce a learning framework that demonstrates the efficacy of aligning the sampling trajectory of diffusion models with the optimization trajectory derived from traditional physics-based methods.
Our method allows for generating feasible and high-performance designs in as few as two steps without the need for expensive preprocessing, external surrogate models, or additional labeled data.
Our results demonstrate that TA outperforms state-of-the-art deep generative models on in-distribution configurations and halves the inference computational cost.
arXiv Detail & Related papers (2023-05-29T09:16:07Z) - A Synergistic Framework Leveraging Autoencoders and Generative Adversarial Networks for the Synthesis of Computational Fluid Dynamics Results in Aerofoil Aerodynamics [0.5018156030818882]
This study proposes a novel approach that combines autoencoders and Generative Adversarial Networks (GANs) for the purpose of generating CFD results.
Our innovative framework harnesses the intrinsic capabilities of autoencoders to encode aerofoil geometries into a compressed and informative 20-length vector representation.
A conditional GAN then translates this vector into precise pressure-distribution plots, accounting for fixed wind velocity, angle of attack, and turbulence level specifications.
arXiv Detail & Related papers (2023-05-28T09:46:18Z) - When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z) - Design Amortization for Bayesian Optimal Experimental Design [70.13948372218849]
We build off of successful variational approaches, which optimize a parameterized variational model with respect to bounds on the expected information gain (EIG).
We present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs.
arXiv Detail & Related papers (2022-10-07T02:12:34Z) - Unified Automatic Control of Vehicular Systems with Reinforcement Learning [64.63619662693068]
This article contributes a streamlined methodology for vehicular microsimulation.
It discovers high performance control strategies with minimal manual design.
The study reveals numerous emergent behaviors resembling wave mitigation, traffic signaling, and ramp metering.
arXiv Detail & Related papers (2022-07-30T16:23:45Z) - Deep Learning-Based Inverse Design for Engineering Systems: Multidisciplinary Design Optimization of Automotive Brakes [2.362412515574206]
Apparent piston travel (APT) and drag torque are the most representative factors for evaluating braking performance.
Recent studies on inverse design that use deep learning (DL) have established the possibility of instantly generating an optimal design.
MID achieved a similar performance to the single-disciplinary inverse design in terms of accuracy and computational cost.
arXiv Detail & Related papers (2022-02-27T08:29:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.