Transfer learning-enhanced deep reinforcement learning for aerodynamic airfoil optimisation subject to structural constraints
- URL: http://arxiv.org/abs/2505.02634v2
- Date: Fri, 01 Aug 2025 09:55:12 GMT
- Title: Transfer learning-enhanced deep reinforcement learning for aerodynamic airfoil optimisation subject to structural constraints
- Authors: David Ramos, Lucas Lacasa, Eusebio Valero, Gonzalo Rubio
- Abstract summary: This paper introduces a transfer learning-enhanced deep reinforcement learning (DRL) methodology that is able to optimise the geometry of any airfoil. To showcase the method, we aim to maximise the lift-to-drag ratio $C_L/C_D$ while preserving the structural integrity of the airfoil. The performance of the DRL agent is compared with Particle Swarm Optimisation (PSO), a traditional gradient-free method.
- Score: 1.5468177185307304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The main objective of this paper is to introduce a transfer learning-enhanced deep reinforcement learning (DRL) methodology that is able to optimise the geometry of any airfoil based on concomitant aerodynamic and structural integrity criteria. To showcase the method, we aim to maximise the lift-to-drag ratio $C_L/C_D$ while preserving the structural integrity of the airfoil -- as modelled by its maximum thickness -- and train the DRL agent using a list of different transfer learning (TL) strategies. The performance of the DRL agent is compared with Particle Swarm Optimisation (PSO), a traditional gradient-free optimisation method. Results indicate that DRL agents are able to perform purely aerodynamic and hybrid aerodynamic/structural shape optimisation, that the DRL approach outperforms PSO in terms of computational efficiency and aerodynamic improvement, and that the TL-enhanced DRL agent achieves performance comparable to the DRL one, while further saving substantial computational resources.
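As a hedged illustration of the hybrid aerodynamic/structural objective described in the abstract, the sketch below shows one plausible way to combine an improvement in $C_L/C_D$ with a penalty when the airfoil's maximum thickness falls below a structural lower bound. The function name, the penalty form, and all numeric values are illustrative assumptions, not the paper's actual reward formulation.

```python
def composite_reward(cl_over_cd_new, cl_over_cd_old,
                     max_thickness, thickness_min=0.10, penalty=1.0):
    """Hypothetical reward: aerodynamic improvement in C_L/C_D, minus a
    penalty proportional to how far the maximum thickness drops below a
    structural lower bound (all values here are illustrative)."""
    reward = cl_over_cd_new - cl_over_cd_old  # improvement in C_L/C_D
    if max_thickness < thickness_min:
        # structural-integrity constraint violated: penalise proportionally
        reward -= penalty * (thickness_min - max_thickness) / thickness_min
    return reward

# A step that improves C_L/C_D while keeping the section thick enough is
# fully rewarded; the same improvement with a too-thin section is penalised.
print(composite_reward(55.0, 50.0, max_thickness=0.12))  # 5.0
print(composite_reward(55.0, 50.0, max_thickness=0.05))  # 4.5
```

A shaping of this kind lets a single scalar reward drive both the purely aerodynamic and the hybrid aerodynamic/structural optimisation modes the abstract mentions, by setting the penalty weight to zero or a positive value.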
Related papers
- Diffusion Guidance Is a Controllable Policy Improvement Operator [98.11511661904618]
CFGRL is trained with the simplicity of supervised learning, yet can further improve on the policies in the data. On offline RL tasks, we observe a reliable trend: increased guidance weighting leads to increased performance.
arXiv Detail & Related papers (2025-05-29T14:06:50Z) - Preference Optimization for Combinatorial Optimization Problems [54.87466279363487]
Reinforcement Learning (RL) has emerged as a powerful tool for neural optimization, enabling models to learn to solve complex problems without requiring expert knowledge. Despite significant progress, existing RL approaches face challenges such as diminishing reward signals and inefficient exploration in vast action spaces. We propose Preference Optimization, a novel method that transforms quantitative reward signals into qualitative preference signals via statistical comparison modeling.
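The core transformation this summary describes, turning scalar rewards into pairwise preferences via a statistical comparison model, can be sketched with a generic Bradley-Terry-style logistic comparison. This is a standard illustration of the general idea, not the paper's actual method; the function name and temperature parameter are assumptions.

```python
import math

def preference_prob(reward_a, reward_b, beta=1.0):
    """P(solution A preferred over B) under a Bradley-Terry-style logistic
    comparison of their scalar rewards; beta sets the comparison sharpness.
    Illustrative sketch only, not the paper's formulation."""
    return 1.0 / (1.0 + math.exp(-beta * (reward_a - reward_b)))

print(preference_prob(2.0, 2.0))  # 0.5: equal rewards give indifference
```

A model trained on such pairwise preferences no longer depends on the absolute scale of the reward, which is one common motivation for preference-based formulations when raw reward signals diminish.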
arXiv Detail & Related papers (2025-05-13T16:47:00Z) - Adaptive Multi-Fidelity Reinforcement Learning for Variance Reduction in Engineering Design Optimization [0.0]
Multi-fidelity Reinforcement Learning (RL) frameworks efficiently utilize computational resources by integrating analysis models of varying accuracy and costs. This work proposes a novel adaptive multi-fidelity RL framework, in which multiple heterogeneous, non-hierarchical low-fidelity models are dynamically leveraged alongside a high-fidelity model. The effectiveness of the approach is demonstrated in an octocopter design optimization problem, utilizing two low-fidelity models alongside a high-fidelity simulator.
arXiv Detail & Related papers (2025-03-23T22:29:08Z) - Online Merging Optimizers for Boosting Rewards and Mitigating Tax in Alignment [47.682736928029996]
Large Language Models (LLMs) are designed to align with human-centric values while preventing the degradation of abilities acquired through Pre-training and Supervised Fine-tuning (SFT).
In this paper, we show that interpolating RLHF and SFT model parameters can adjust the trade-off between human preference and basic capabilities, thereby reducing the alignment tax.
It significantly enhances alignment reward while mitigating alignment tax, achieving higher overall performance across 14 benchmarks.
arXiv Detail & Related papers (2024-05-28T07:53:40Z) - Enhancing Vehicle Aerodynamics with Deep Reinforcement Learning in Voxelised Models [6.16808916207942]
This paper presents a novel approach for aerodynamic optimisation in car design using deep reinforcement learning (DRL).
The proposed approach uses voxelised models to discretise the vehicle geometry into a grid of voxels, allowing for a detailed representation of the aerodynamic flow field.
Experimental results demonstrate the effectiveness and efficiency of the proposed approach in achieving significant improvements in aerodynamic performance.
arXiv Detail & Related papers (2024-05-19T09:19:31Z) - DiffTORI: Differentiable Trajectory Optimization for Deep Reinforcement and Imitation Learning [19.84386060857712]
This paper introduces DiffTORI, which utilizes Differentiable Trajectory Optimization as the policy representation to generate actions for deep Reinforcement and Imitation learning. Across 15 model-based RL tasks and 35 imitation learning tasks with high-dimensional image and point cloud inputs, DiffTORI outperforms prior state-of-the-art methods in both domains.
arXiv Detail & Related papers (2024-02-08T05:26:40Z) - Hybrid Reinforcement Learning for Optimizing Pump Sustainability in Real-World Water Distribution Networks [55.591662978280894]
This article addresses the pump-scheduling optimization problem to enhance real-time control of real-world water distribution networks (WDNs).
Our primary objectives are to adhere to physical operational constraints while reducing energy consumption and operational costs.
Traditional optimization techniques, such as evolution-based and genetic algorithms, often fall short due to their lack of convergence guarantees.
arXiv Detail & Related papers (2023-10-13T21:26:16Z) - A reinforcement learning strategy for p-adaptation in high order solvers [0.0]
Reinforcement learning (RL) has emerged as a promising approach to automating decision processes.
This paper explores the application of RL techniques to optimise the order in the computational mesh when using high-order solvers.
arXiv Detail & Related papers (2023-06-14T07:01:31Z) - Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning [73.80728148866906]
Quasimetric Reinforcement Learning (QRL) is a new RL method that utilizes quasimetric models to learn optimal value functions.
On offline and online goal-reaching benchmarks, QRL also demonstrates improved sample efficiency and performance.
arXiv Detail & Related papers (2023-04-03T17:59:58Z) - Progressive extension of reinforcement learning action dimension for asymmetric assembly tasks [7.4642148614421995]
In this paper, a progressive extension of action dimension (PEAD) mechanism is proposed to optimize the convergence of RL algorithms.
The results demonstrate that the PEAD method enhances the data-efficiency and time-efficiency of RL algorithms and increases the stable reward.
arXiv Detail & Related papers (2021-04-06T11:48:54Z) - Transfer Deep Reinforcement Learning-enabled Energy Management Strategy for Hybrid Tracked Vehicle [8.327437591702163]
This paper proposes an adaptive energy management strategy for hybrid electric vehicles by combining deep reinforcement learning (DRL) and transfer learning (TL).
It aims to address DRL's drawback of tedious training time.
The resulting DRL- and TL-enabled control policy is capable of enhancing energy efficiency and improving system performance.
arXiv Detail & Related papers (2020-07-16T23:39:34Z) - Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.