Application of deep and reinforcement learning to boundary control problems
- URL: http://arxiv.org/abs/2310.15191v1
- Date: Sat, 21 Oct 2023 10:56:32 GMT
- Title: Application of deep and reinforcement learning to boundary control problems
- Authors: Zenin Easa Panthakkalakath, Juraj Kardoš, Olaf Schenk
- Abstract summary: The aim is to find the optimal values for the domain boundaries such that the enclosed domain attains the desired state values.
This project explores the possibilities of using deep learning and reinforcement learning to solve boundary control problems.
- Score: 0.6906005491572401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The boundary control problem is a non-convex optimization and control problem
in many scientific domains, including fluid mechanics, structural engineering,
and heat transfer optimization. The aim is to find the optimal values for the
domain boundaries such that the enclosed domain, adhering to the governing
equations, attains the desired state values. Traditionally, non-linear
optimization methods, such as the Interior-Point method (IPM), are used to
optimization methods, such as the Interior-Point method (IPM), are used to
solve such problems.
This project explores the possibilities of using deep learning and
reinforcement learning to solve boundary control problems. We adhere to the
framework of iterative optimization strategies, employing a spatial neural
network to construct well-informed initial guesses and a spatio-temporal
neural network to learn the iterative optimization algorithm via policy
gradients. Synthetic data, generated from the problems formulated in the
literature, is used for training, testing and validation. The numerical
experiments indicate that the proposed method can rival the speed and accuracy
of existing solvers. In our preliminary results, the network attains costs
lower than IPOPT, a state-of-the-art non-linear IPM, in 51% of cases. The
overall number of floating-point operations in the proposed method is similar
to that of IPOPT. Additionally, the informed initial guesses and the learned
momentum-like behaviour of the optimizer help the method avoid convergence to
local minima.
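As a rough, hedged illustration of the approach in the abstract, the sketch below treats the two Dirichlet boundary values of a toy 1D steady-state problem as controls and trains a one-parameter Gaussian step-size policy with REINFORCE. The 1D Laplace state equation, the random initial guess standing in for the spatial network, and the single learnable log step size standing in for the spatio-temporal network are all illustrative assumptions of ours, not the authors' implementation.

```python
# Toy sketch of a policy-gradient "learned optimizer" for boundary control.
# All modelling choices (1D Laplace state equation, one-parameter policy,
# hyperparameters) are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
N = 32
target = np.linspace(1.0, 3.0, N)            # desired interior state

def solve_state(b):
    # Steady 1D Laplace equation with Dirichlet boundaries b[0], b[1]:
    # the interior solution is the linear interpolant of the boundaries.
    return np.linspace(b[0], b[1], N + 2)[1:-1]

def cost(b):
    return np.mean((solve_state(b) - target) ** 2)

def grad_cost(b, eps=1e-5):
    # Central finite differences w.r.t. the two boundary controls.
    g = np.empty(2)
    for i in range(2):
        e = np.zeros(2); e[i] = eps
        g[i] = (cost(b + e) - cost(b - e)) / (2 * eps)
    return g

log_lr = np.log(0.1)      # the entire "policy network": one log step size
sigma, alpha, baseline = 0.02, 0.01, None

for episode in range(300):
    b = rng.normal(size=2)       # stand-in for the informed initial guess
    score = 0.0                  # d log pi / d log_lr accumulated over steps
    for _ in range(15):
        g = grad_cost(b)
        mean_step = -np.exp(log_lr) * g      # policy mean: scaled gradient step
        step = mean_step + sigma * rng.normal(size=2)
        score += np.dot(step - mean_step, mean_step) / sigma ** 2
        b = b + step
    reward = -cost(b)            # lower final cost = higher reward
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    log_lr += alpha * (reward - baseline) * score    # REINFORCE update

print("learned step size:", np.exp(log_lr), "final cost:", cost(b))
```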
Related papers
- A Simulation-Free Deep Learning Approach to Stochastic Optimal Control [12.699529713351287]
We propose a simulation-free algorithm for the solution of generic problems in stochastic optimal control (SOC).
Unlike existing methods, our approach does not require the solution of an adjoint problem.
arXiv Detail & Related papers (2024-10-07T16:16:53Z)
- WANCO: Weak Adversarial Networks for Constrained Optimization problems [5.257895611010853]
We first transform constrained optimization problems into minimax problems using the augmented Lagrangian method.
We then use two (or several) deep neural networks to represent the primal and dual variables respectively.
The parameters in the neural networks are then trained by an adversarial process.
arXiv Detail & Related papers (2024-07-04T05:37:48Z)
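To make the minimax idea in the WANCO entry above concrete, here is a minimal numerical sketch: gradient descent on the primal variable alternates with gradient ascent on the dual multiplier of an augmented Lagrangian. WANCO parameterizes both players with deep networks; the plain scalars, toy objective, and step sizes below are assumptions of ours, not the paper's code.

```python
# Minimal augmented-Lagrangian minimax sketch (not the WANCO code):
# minimize f(x) subject to g(x) = 0 via primal descent / dual ascent.
import numpy as np

def f(x):
    return (x[0] - 2.0) ** 2 + x[1] ** 2     # toy objective

def g(x):
    return x[0] + x[1] - 1.0                 # toy equality constraint

rho, lr_x, lr_lam = 10.0, 0.02, 0.5
x, lam = np.zeros(2), 0.0

for _ in range(500):
    # Augmented Lagrangian: L(x, lam) = f(x) + lam * g(x) + (rho / 2) * g(x)^2
    grad_f = np.array([2.0 * (x[0] - 2.0), 2.0 * x[1]])
    grad_g = np.array([1.0, 1.0])
    grad_L = grad_f + (lam + rho * g(x)) * grad_g
    x = x - lr_x * grad_L        # primal player (a DNN in WANCO) descends
    lam = lam + lr_lam * g(x)    # dual player (another DNN in WANCO) ascends

print(x, lam)  # expect x ~ (1.5, -0.5), the constrained minimizer, lam ~ 1
```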
- Learning rate adaptive stochastic gradient descent optimization methods: numerical simulations for deep learning methods for partial differential equations and convergence analyses [5.052293146674794]
It is known that the standard stochastic gradient descent (SGD) optimization method, as well as accelerated and adaptive SGD optimization methods such as the Adam optimizer, fail to converge if the learning rates do not converge to zero.
In this work we propose and study a learning-rate-adaptive approach for SGD optimization methods in which the learning rate is adjusted based on empirical estimates.
arXiv Detail & Related papers (2024-06-20T14:07:39Z)
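A toy rendering of the learning-rate-adaptive idea in the entry above (our assumptions, not the paper's algorithm): run SGD on a streaming linear regression problem and halve the step size whenever the empirical loss estimate stops improving, which drives the learning rate toward zero over time.

```python
# Sketch of learning-rate adaptation from empirical loss estimates.
# The problem (streaming linear regression) and the halving rule are
# illustrative stand-ins for the paper's method.
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([3.0, -1.0])
w, lr, prev_est = np.zeros(2), 0.1, np.inf

def sgd_epoch(w, lr, n=200):
    total = 0.0
    for _ in range(n):
        x = rng.normal(size=2)
        y = x @ w_true + 0.1 * rng.normal()
        err = w @ x - y
        w = w - lr * err * x             # SGD step on the squared error
        total += 0.5 * err ** 2
    return w, total / n                  # empirical loss estimate

for epoch in range(30):
    w, est = sgd_epoch(w, lr)
    if est >= prev_est:                  # no measurable progress at this rate
        lr *= 0.5                        # shrink the learning rate
    prev_est = est

print(w, lr)                             # w should approach w_true
```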
- Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer the OD matrix structure and guide the numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We establish lower bounds for this setting and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Learning Sampling Policy for Faster Derivative Free Optimization [100.27518340593284]
We propose a new reinforcement-learning-based zeroth-order algorithm (ZO-RL) that learns the sampling policy for generating the perturbations in ZO optimization instead of using random sampling.
Our results show that the ZO-RL algorithm can effectively reduce the variance of the ZO gradient estimates by learning a sampling policy, and converges faster than existing ZO algorithms in different scenarios.
arXiv Detail & Related papers (2021-04-09T14:50:59Z)
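For context on the ZO-RL entry above, here is the random-sampling zeroth-order baseline that the learned sampling policy replaces. The objective, direction count, and smoothing radius are illustrative choices of ours, and the policy-learning part is deliberately omitted.

```python
# Zeroth-order (ZO) gradient estimation with random Gaussian perturbations,
# the baseline whose sampling distribution ZO-RL learns instead.
import numpy as np

rng = np.random.default_rng(2)

def f(x):
    # Black-box objective: only function values are available.
    return np.sum((x - 1.0) ** 2)

def zo_gradient(f, x, n_dirs=10, mu=1e-3):
    # Average forward differences along random directions u ~ N(0, I).
    # ZO-RL would draw u from a learned, lower-variance sampling policy.
    g = np.zeros_like(x)
    for _ in range(n_dirs):
        u = rng.normal(size=x.shape)
        g += (f(x + mu * u) - f(x)) / mu * u
    return g / n_dirs

x = np.zeros(5)
for _ in range(200):
    x -= 0.05 * zo_gradient(f, x)

print(x)  # approaches the minimizer at the all-ones vector
```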
- Avoiding local minima in Variational Quantum Algorithms with Neural Networks [0.0]
Variational Quantum Algorithms have emerged as a leading paradigm for near-term quantum computation.
In this paper we present two algorithms and benchmark them on instances of the gradient landscape problem.
Our results suggest that shaping the cost landscape is a fruitful path to improving near-term quantum computing algorithms.
arXiv Detail & Related papers (2021-04-07T07:07:28Z)
- Deep Reinforcement Learning for Field Development Optimization [0.0]
In this work, the goal is to apply convolutional neural network (CNN) based deep reinforcement learning (DRL) algorithms to the field development optimization problem.
The proximal policy optimization (PPO) algorithm is considered with two CNN architectures of varying numbers of layers and composition.
Both networks obtained policies that provide satisfactory results when compared to a hybrid particle swarm optimization - mesh adaptive direct search (PSO-MADS) algorithm.
arXiv Detail & Related papers (2020-08-05T06:26:13Z)
- IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a deep neural network (DNN) with finite element method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
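As a schematic of the loop in the self-directed online learning entry above, the sketch below uses toy stand-ins of ours: a quadratic fit plays the role of the DNN surrogate and a cheap analytic function plays the FEM solver. Each round fits the surrogate to all evaluated designs, optimizes it to propose the next design, and feeds the solver's result back into the training set.

```python
# Schematic self-directed online learning loop with toy stand-ins:
# a quadratic fit plays the DNN surrogate, an analytic function plays FEM.
import numpy as np

rng = np.random.default_rng(3)

def fem_objective(x):
    # Stand-in for an expensive FEM evaluation of a candidate design x.
    return np.sin(3.0 * x) + 0.5 * x ** 2

X = list(rng.uniform(-2.0, 2.0, size=4))   # initial random designs
Y = [fem_objective(x) for x in X]

for step in range(20):
    coeffs = np.polyfit(X, Y, deg=2)       # "train" the surrogate on all data
    grid = np.linspace(-2.0, 2.0, 401)
    x_next = grid[np.argmin(np.polyval(coeffs, grid))]  # optimize surrogate
    X.append(x_next + 0.05 * rng.normal()) # jitter keeps exploration alive
    Y.append(fem_objective(X[-1]))         # expensive evaluation, fed back

best = X[int(np.argmin(Y))]
print(best, fem_objective(best))
```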
- Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceivers in wireless networks are usually designed by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.