PINN-BO: A Black-box Optimization Algorithm using Physics-Informed
Neural Networks
- URL: http://arxiv.org/abs/2402.03243v1
- Date: Mon, 5 Feb 2024 17:58:17 GMT
- Title: PINN-BO: A Black-box Optimization Algorithm using Physics-Informed
Neural Networks
- Authors: Dat Phan-Trong, Hung The Tran, Alistair Shilton, Sunil Gupta
- Abstract summary: Black-box optimization is a powerful approach for discovering global optima in noisy and expensive black-box functions.
We propose PINN-BO, a black-box optimization algorithm employing Physics-Informed Neural Networks.
We show that our algorithm is more sample-efficient compared to existing methods.
- Score: 11.618811218101058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Black-box optimization is a powerful approach for discovering global optima
in noisy and expensive black-box functions, a problem widely encountered in
real-world scenarios. Recently, there has been a growing interest in leveraging
domain knowledge to enhance the efficacy of machine learning methods. Partial
Differential Equations (PDEs) often provide an effective means for elucidating
the fundamental principles governing the black-box functions. In this paper, we
propose PINN-BO, a black-box optimization algorithm employing Physics-Informed
Neural Networks (PINNs) that integrates knowledge from PDEs to improve the
sample efficiency of the optimization. We
analyze the theoretical behavior of our algorithm in terms of regret bound
using advances in NTK theory and prove that, by using the PDE alongside the
black-box function evaluations, PINN-BO achieves a tighter regret bound. We
perform several experiments on a variety of optimization tasks and show that
our algorithm is more sample-efficient compared to existing methods.
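The core idea, training a surrogate on both black-box evaluations and a PDE-residual penalty, can be sketched as follows. This is an illustrative toy only (a quadratic surrogate standing in for a neural network, a made-up PDE f''(x) = 2, and finite-difference derivatives), not the paper's NTK-based method:

```python
def surrogate(params, x):
    # Toy surrogate: a quadratic f(x) = a*x^2 + b*x + c standing in for a network.
    a, b, c = params
    return a * x * x + b * x + c

def pinn_style_loss(params, x_obs, y_obs, x_col, lam=1.0, h=1e-4):
    """Data-fit loss on black-box evaluations plus a PDE-residual penalty at
    collocation points, the core idea of physics-informed training.
    The toy "PDE" here is f''(x) = 2, checked via central finite differences."""
    # Data term: fit the (expensive, possibly noisy) black-box observations.
    data_loss = sum((surrogate(params, x) - y) ** 2
                    for x, y in zip(x_obs, y_obs)) / len(x_obs)

    def second_derivative(x):
        # Central finite-difference approximation of f''(x).
        return (surrogate(params, x + h) - 2 * surrogate(params, x)
                + surrogate(params, x - h)) / (h * h)

    # Physics term: penalize PDE violation at unlabeled collocation points,
    # which cost no black-box evaluations.
    pde_residual = sum((second_derivative(x) - 2.0) ** 2
                       for x in x_col) / len(x_col)
    return data_loss + lam * pde_residual
```

The physics term supervises the surrogate at points where the black-box function was never queried, which is how PDE knowledge can reduce the number of evaluations needed.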
Related papers
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs)
This paper proposes and theoretically studies a new training paradigm, region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z) - Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method to reinforce-learn a BBO algorithm from offline data in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is to augment the optimization histories with regret-to-go tokens, which are designed to represent the performance of an algorithm based on cumulative regret over the future part of the histories.
arXiv Detail & Related papers (2024-02-27T11:32:14Z) - End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - Neural-BO: A Black-box Optimization Algorithm using Deep Neural Networks [12.218039144209017]
We propose a novel black-box optimization algorithm where the black-box function is modeled using a neural network.
Our algorithm does not need a Bayesian neural network to estimate predictive uncertainty and is therefore computationally favorable.
arXiv Detail & Related papers (2023-03-03T02:53:56Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD)
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
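Zeroth-order sign-based gradient descent estimates a gradient from function-value queries alone and then uses only its sign. A minimal sketch of one such step, assuming a simple Gaussian-smoothing forward-difference estimator (details such as query counts and step sizes are illustrative, not taken from the paper):

```python
import random

def zo_sign_gd_step(f, x, mu=1e-3, lr=0.01, n_queries=20):
    """One step of zeroth-order sign-based gradient descent (ZO-signGD).
    The gradient is estimated from function-value differences along random
    Gaussian directions, then only its sign drives the update, which is
    useful when the objective exposes no gradients at all."""
    d = len(x)
    fx = f(x)
    grad = [0.0] * d
    for _ in range(n_queries):
        u = [random.gauss(0.0, 1.0) for _ in range(d)]
        # Forward-difference estimate of the directional derivative along u.
        xp = [xi + mu * ui for xi, ui in zip(x, u)]
        g = (f(xp) - fx) / mu
        for i in range(d):
            grad[i] += g * u[i] / n_queries
    # Sign-based update: step direction only, fixed magnitude per coordinate.
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - lr * sign(gi) for xi, gi in zip(x, grad)]
```

Using only the sign makes the update robust to the high variance of zeroth-order gradient estimates, at the cost of a fixed per-coordinate step size.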
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Generative Evolutionary Strategy For Black-Box Optimizations [0.0]
Black-box optimization in high-dimensional space is challenging.
Recent neural network-based black-box optimization studies have shown noteworthy achievements.
This study proposes a black-box optimization method based on the evolution strategy (ES) and the generative neural network (GNN) model.
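The ES half of that combination is the standard sample-select-recombine loop. A minimal elitist sketch is below; note the paper replaces the plain Gaussian sampler with a learned generative neural network, which this toy version does not include:

```python
import random

def simple_es(f, x0, sigma=0.3, pop=20, elite=5, iters=30, seed=0):
    """Minimal elitist evolution strategy: sample a population around the
    current mean, keep the best candidates, and recombine them into a new
    mean. Purely illustrative of the ES loop; the GNN-based method would
    swap the Gaussian sampler for a learned generator."""
    rng = random.Random(seed)
    mean = list(x0)
    for _ in range(iters):
        # Sample candidates around the current mean.
        candidates = [[m + sigma * rng.gauss(0.0, 1.0) for m in mean]
                      for _ in range(pop)]
        # Keep the `elite` best (lowest f) and recombine by averaging.
        candidates.sort(key=f)
        best = candidates[:elite]
        mean = [sum(coords) / elite for coords in zip(*best)]
    return mean
```

In high dimensions the isotropic Gaussian sampler explores poorly, which is the gap a learned generative model is meant to fill.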
arXiv Detail & Related papers (2022-05-06T07:34:21Z) - Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis on the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, computational cost, and transferability to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z) - High-dimensional Bayesian Optimization Algorithm with Recurrent Neural
Network for Disease Control Models in Time Series [1.9371782627708491]
We propose a new high-dimensional Bayesian Optimization algorithm combining Recurrent Neural Networks (RNNs).
The proposed RNN-BO algorithm can solve the optimal control problems in the lower dimension space.
We also discuss the impacts of different numbers of the RNN layers and training epochs on the trade-off between solution quality and related computational efforts.
arXiv Detail & Related papers (2022-01-01T08:40:17Z) - Meta Learning Black-Box Population-Based Optimizers [0.0]
We propose the use of meta-learning to infer population-based black-box optimizers.
We show that the meta-loss function encourages a learned algorithm to alter its search behavior so that it can easily fit into a new context.
arXiv Detail & Related papers (2021-03-05T08:13:25Z) - Iterative Surrogate Model Optimization (ISMO): An active learning
algorithm for PDE constrained optimization with deep neural networks [14.380314061763508]
We present a novel active learning algorithm, termed as iterative surrogate model optimization (ISMO)
This algorithm is based on deep neural networks and its key feature is the iterative selection of training data through a feedback loop between deep neural networks and any underlying standard optimization algorithm.
arXiv Detail & Related papers (2020-08-13T07:31:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.