PINN-BO: A Black-box Optimization Algorithm using Physics-Informed
Neural Networks
- URL: http://arxiv.org/abs/2402.03243v1
- Date: Mon, 5 Feb 2024 17:58:17 GMT
- Title: PINN-BO: A Black-box Optimization Algorithm using Physics-Informed
Neural Networks
- Authors: Dat Phan-Trong, Hung The Tran, Alistair Shilton, Sunil Gupta
- Abstract summary: Black-box optimization is a powerful approach for discovering global optima in noisy and expensive black-box functions.
We propose PINN-BO, a black-box optimization algorithm employing Physics-Informed Neural Networks.
We show that our algorithm is more sample-efficient than existing methods.
- Score: 11.618811218101058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Black-box optimization is a powerful approach for discovering global optima
in noisy and expensive black-box functions, a problem widely encountered in
real-world scenarios. Recently, there has been a growing interest in leveraging
domain knowledge to enhance the efficacy of machine learning methods. Partial
Differential Equations (PDEs) often provide an effective means for elucidating
the fundamental principles governing the black-box functions. In this paper, we
propose PINN-BO, a black-box optimization algorithm employing Physics-Informed
Neural Networks that integrates the knowledge from Partial Differential
Equations (PDEs) to improve the sample efficiency of the optimization. We
analyze the theoretical behavior of our algorithm in terms of its regret bound
using advances in Neural Tangent Kernel (NTK) theory and prove that, by using the
PDE alongside the black-box function evaluations, PINN-BO achieves a tighter regret bound. We
perform several experiments on a variety of optimization tasks and show that
our algorithm is more sample-efficient than existing methods.
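
The abstract does not come with code, so the following is only a minimal sketch of the core idea: a neural surrogate is fit to the expensive black-box evaluations while a PDE residual term, evaluated on cheap collocation points, injects the domain knowledge. The 1-D PDE f''(x) = g(x), the network size, and the greedy acquisition are illustrative assumptions; the actual algorithm derives its acquisition from NTK-based confidence bounds.

```python
import torch

# Sketch only: assumes the black-box f obeys a known 1-D PDE f''(x) = g(x).
def pde_residual(net, x, g):
    # Residual of the assumed PDE at collocation points x.
    x = x.clone().requires_grad_(True)
    f = net(x)
    df = torch.autograd.grad(f.sum(), x, create_graph=True)[0]
    d2f = torch.autograd.grad(df.sum(), x, create_graph=True)[0]
    return d2f - g(x)

def fit_pinn_surrogate(net, x_obs, y_obs, x_col, g, lam=1.0, steps=500):
    # Data loss on the few expensive evaluations + physics loss on many free
    # collocation points: the physics term constrains the surrogate where no
    # data exists, which is the source of the sample efficiency.
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(x_obs) - y_obs) ** 2).mean() \
             + lam * (pde_residual(net, x_col, g) ** 2).mean()
        loss.backward()
        opt.step()

net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
g = lambda x: -torch.sin(x)                        # assumed PDE source term
x_obs = torch.rand(5, 1)                           # few expensive evaluations
y_obs = torch.sin(x_obs)                           # stand-in black-box values
x_col = torch.rand(200, 1)                         # free collocation points
fit_pinn_surrogate(net, x_obs, y_obs, x_col, g)
cand = torch.linspace(0, 1, 256).unsqueeze(1)
with torch.no_grad():
    x_next = cand[net(cand).argmax()]              # greedy stand-in for NTK-UCB
```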
Related papers
- Sharpness-Aware Black-Box Optimization [47.95184866255126]
We propose a Sharpness-Aware Black-box Optimization (SABO) algorithm, which applies a sharpness-aware minimization strategy to improve model generalization.
Empirically, extensive experiments on black-box prompt fine-tuning tasks demonstrate the effectiveness of the proposed SABO method in improving model generalization performance.
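
The summary gives no implementation detail, so the sketch below only illustrates generic sharpness-aware minimization made derivative-free with a zeroth-order gradient estimator. The perturbation radius rho, step size, and sample count are assumptions, and SABO's actual update rule may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_grad(f, x, mu=1e-2, n=20):
    # Gaussian-smoothing gradient estimator: uses function values only.
    g = np.zeros_like(x)
    fx = f(x)
    for _ in range(n):
        u = rng.standard_normal(x.shape)
        g += (f(x + mu * u) - fx) / mu * u
    return g / n

def sharpness_aware_step(f, x, rho=0.05, lr=0.1):
    g = zo_grad(f, x)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascend toward the sharp point
    return x - lr * zo_grad(f, x + eps)           # descend using that gradient

f = lambda x: np.sum(x ** 2)                      # toy black-box objective
x = np.ones(5)
for _ in range(50):
    x = sharpness_aware_step(f, x)
```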
arXiv Detail & Related papers (2024-10-16T11:08:06Z)
- Enhancing Deep Learning with Optimized Gradient Descent: Bridging Numerical Methods and Neural Network Training [0.036651088217486416]
This paper explores the relationship between optimization theory and deep learning.
We introduce an enhancement to the gradient descent algorithm and highlight its variants, which are the cornerstone of neural network training.
Our experiments on diverse deep learning tasks substantiate the improved algorithm's efficacy.
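
The summary does not name the specific enhancement, so as context the sketch below shows only the baseline being enhanced: plain gradient descent and its classical momentum variant on a toy ill-conditioned quadratic. All hyperparameters are assumptions.

```python
import numpy as np

A = np.diag([1.0, 10.0])           # toy ill-conditioned quadratic 0.5*x^T A x
grad = lambda x: A @ x

x, v = np.array([1.0, 1.0]), np.zeros(2)
lr, beta = 0.05, 0.9
for _ in range(100):
    v = beta * v + grad(x)         # momentum accumulates past gradients
    x = x - lr * v                 # plain gradient descent: x - lr * grad(x)
```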
arXiv Detail & Related papers (2024-09-07T04:37:20Z)
- Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method that uses reinforcement learning to learn a black-box optimization (BBO) algorithm from offline data in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is augmenting the optimization histories with regret-to-go tokens, which are designed to represent the performance of an algorithm based on the cumulative regret over the future part of the histories.
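
As a concrete reading of the regret-to-go tokens described above, the sketch below computes, for each step of a history, the cumulative simple regret over the remaining steps. It assumes the optimum value f_star is known, which in practice would have to be estimated or normalized per task; this is not the authors' code.

```python
import numpy as np

def regret_to_go(y_history, f_star):
    # Per-step simple regret, then suffix sums: token t is the regret the
    # algorithm will still accumulate from step t to the end of the history.
    regrets = f_star - np.asarray(y_history)
    return np.cumsum(regrets[::-1])[::-1]

y_hist = [0.2, 0.5, 0.9, 0.95]                 # observed values (maximization)
print(regret_to_go(y_hist, f_star=1.0))        # [1.45 0.65 0.15 0.05]
```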
arXiv Detail & Related papers (2024-02-27T11:32:14Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
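
For readers unfamiliar with OWA objectives, the sketch below shows why they are both fair and nondifferentiable: the weights are applied to the sorted outcomes, so the worst-off outcome can receive the largest weight, and the sort makes the function piecewise-linear. The weight vector is an illustrative assumption.

```python
import numpy as np

def owa(values, weights):
    # Weights attach to ranks, not to fixed outcomes: sort ascending first.
    # The sort is what makes the objective nondifferentiable.
    return float(np.dot(np.sort(values), weights))

vals = np.array([0.9, 0.2, 0.5])
w = np.array([0.5, 0.3, 0.2])      # decreasing: the worst outcome weighs most
print(owa(vals, w))                # 0.5*0.2 + 0.3*0.5 + 0.2*0.9 = 0.43
```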
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Neural-BO: A Black-box Optimization Algorithm using Deep Neural Networks [12.218039144209017]
We propose a novel black-box optimization algorithm where the black-box function is modeled using a neural network.
Our algorithm does not need a Bayesian neural network to estimate predictive uncertainty and is therefore computationally favorable.
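
A minimal sketch of such a loop, not the authors' implementation: a plain (non-Bayesian) network is refit to all evaluations each round, and the next query maximizes it over a random candidate pool. The greedy acquisition here is an assumption; the paper's actual exploration rule differs.

```python
import torch

def neural_bo(f, n_rounds=10, n_init=5):
    x = torch.rand(n_init, 1)
    y = f(x)
    for _ in range(n_rounds):
        net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.ReLU(),
                                  torch.nn.Linear(32, 1))
        opt = torch.optim.Adam(net.parameters(), lr=1e-2)
        for _ in range(300):                     # refit the surrogate from scratch
            opt.zero_grad()
            ((net(x) - y) ** 2).mean().backward()
            opt.step()
        cand = torch.rand(128, 1)                # random candidate pool
        with torch.no_grad():
            x_next = cand[net(cand).argmax()].reshape(1, 1)
        x = torch.cat([x, x_next])
        y = torch.cat([y, f(x_next)])            # one expensive evaluation per round
    return x[y.argmax()]

best = neural_bo(lambda x: -(x - 0.3) ** 2)      # toy maximization target
```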
arXiv Detail & Related papers (2023-03-03T02:53:56Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
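
A minimal sketch of ZO-signGD under standard assumptions: the gradient is estimated from function queries alone via Gaussian-smoothed central differences, and the update uses only the sign of that estimate, which tolerates noisy estimates well. The step size, smoothing radius, and query count are illustrative.

```python
import numpy as np

def zo_sign_gd(f, x, lr=0.01, mu=1e-2, n=10, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        g = np.zeros_like(x)
        for _ in range(n):
            u = rng.standard_normal(x.shape)
            g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u  # central difference
        x = x - lr * np.sign(g)              # step along the sign of the estimate
    return x

x_opt = zo_sign_gd(lambda x: np.sum((x - 1.0) ** 2), np.zeros(4))  # -> near 1.0
```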
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are better suited to active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Generative Evolutionary Strategy For Black-Box Optimizations [0.0]
Black-box optimization in high-dimensional space is challenging.
Recent neural network-based black-box optimization studies have shown noteworthy achievements.
This study proposes a black-box optimization method that combines an evolution strategy (ES) with a generative neural network (GNN) model.
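
The coupling between the ES and the generative model is the paper's contribution and is not reproduced here; the sketch below is only the plain (mu, lambda)-ES skeleton such a method builds on, with a comment marking where a generative network would replace the Gaussian proposal. Population sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def es_step(f, mean, sigma, lam=20, mu=5):
    # The GNN in the paper would replace this Gaussian proposal step.
    pop = mean + sigma * rng.standard_normal((lam, mean.size))
    fitness = np.array([f(ind) for ind in pop])
    elite = pop[np.argsort(fitness)[:mu]]      # keep the mu best (minimization)
    return elite.mean(axis=0)                  # recombine into the new mean

mean = np.zeros(8)
for _ in range(50):
    mean = es_step(lambda x: np.sum(x ** 2), mean, sigma=0.3)
```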
arXiv Detail & Related papers (2022-05-06T07:34:21Z)
- Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis of the incorporation of neural-network-based algorithms into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z)
- Meta Learning Black-Box Population-Based Optimizers [0.0]
We propose the use of meta-learning to infer population-based black-box optimizers.
We show that the meta-loss function encourages a learned algorithm to alter its search behavior so that it can easily fit into a new context.
arXiv Detail & Related papers (2021-03-05T08:13:25Z)
- Iterative Surrogate Model Optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks [14.380314061763508]
We present a novel active learning algorithm, termed iterative surrogate model optimization (ISMO).
This algorithm is based on deep neural networks, and its key feature is the iterative selection of training data through a feedback loop between the deep neural network and an underlying standard optimization algorithm.
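
A minimal sketch of the feedback loop described above: fit a cheap surrogate to the evaluations so far, run a standard optimizer on the surrogate, and use its minimizer as the next expensive training point. A polynomial surrogate stands in for the paper's deep network to keep the sketch dependency-light; all other choices are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def ismo(f_true, n_init=6, n_iters=5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 2.0, n_init)
    y = f_true(x)
    for _ in range(n_iters):
        coeffs = np.polyfit(x, y, deg=4)                 # cheap surrogate fit
        surrogate = lambda z: float(np.polyval(coeffs, z[0]))
        res = minimize(surrogate, x0=[x[np.argmin(y)]])  # optimize the surrogate
        x_new = float(np.clip(res.x[0], -2.0, 2.0))
        x = np.append(x, x_new)                          # feedback: the optimizer's
        y = np.append(y, f_true(x_new))                  # answer is the next sample
    return x[np.argmin(y)]

best = ismo(lambda z: (z - 0.7) ** 2 + np.sin(5 * z))
```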
arXiv Detail & Related papers (2020-08-13T07:31:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.