Mixed-Integer Optimisation of Graph Neural Networks for Computer-Aided
Molecular Design
- URL: http://arxiv.org/abs/2312.01228v1
- Date: Sat, 2 Dec 2023 21:10:18 GMT
- Title: Mixed-Integer Optimisation of Graph Neural Networks for Computer-Aided
Molecular Design
- Authors: Tom McDonald, Calvin Tsay, Artur M. Schweidtmann, Neil Yorke-Smith
- Abstract summary: ReLU neural networks have been modelled as constraints in mixed-integer linear programming (MILP).
We propose a formulation for ReLU Graph Convolutional Neural Networks and a MILP formulation for ReLU GraphSAGE models.
These formulations enable solving optimisation problems with trained GNNs embedded to global optimality.
- Score: 4.593587844188084
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: ReLU neural networks have been modelled as constraints in mixed integer
linear programming (MILP), enabling surrogate-based optimisation in various
domains and efficient solution of machine learning certification problems.
However, previous works are mostly limited to MLPs. Graph neural networks
(GNNs) can learn from non-Euclidean data structures such as molecular
structures efficiently and are thus highly relevant to computer-aided molecular
design (CAMD). We propose a bilinear formulation for ReLU Graph Convolutional
Neural Networks and a MILP formulation for ReLU GraphSAGE models. These
formulations enable solving optimisation problems with trained GNNs embedded to
global optimality. We apply our optimisation approach to an illustrative CAMD
case study where the formulations of the trained GNNs are used to design
molecules with optimal boiling points.
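To make the abstract's central device concrete: modelling a trained ReLU network as MILP constraints rests on the standard big-M encoding of each unit y = max(0, Wx + b). Below is a minimal Pyomo sketch of one ReLU layer embedded in an optimisation problem. The weights, bounds, and objective are illustrative assumptions, and the paper's bilinear GCN and MILP GraphSAGE formulations extend this building block rather than being reproduced here.

```python
# Big-M MILP encoding of one ReLU layer y = max(0, W x + b).
# A minimal sketch, not the paper's exact formulation: W, b, M,
# and the variable bounds below are illustrative assumptions.
import numpy as np
import pyomo.environ as pyo

W = np.array([[1.0, -2.0], [0.5, 1.5]])  # assumed trained weights
b = np.array([0.3, -0.1])                # assumed trained biases
M = 100.0                                # assumed valid big-M activation bound

m = pyo.ConcreteModel()
m.I = pyo.RangeSet(0, 1)                 # input indices
m.J = pyo.RangeSet(0, 1)                 # neuron indices
m.x = pyo.Var(m.I, bounds=(-10, 10))     # network inputs (design variables)
m.y = pyo.Var(m.J, within=pyo.NonNegativeReals)  # post-ReLU activations
m.z = pyo.Var(m.J, within=pyo.Binary)    # z[j] = 1 iff neuron j is active

def pre(mdl, j):
    # Pre-activation expression (W x + b)[j]
    return sum(W[j, i] * mdl.x[i] for i in mdl.I) + b[j]

# Together with y >= 0, these constraints force y[j] = max(0, (W x + b)[j]):
# z = 1 pins y to the pre-activation, z = 0 pins y to zero.
m.lb = pyo.Constraint(m.J, rule=lambda mdl, j: mdl.y[j] >= pre(mdl, j))
m.ub_on = pyo.Constraint(m.J, rule=lambda mdl, j:
                         mdl.y[j] <= pre(mdl, j) + M * (1 - mdl.z[j]))
m.ub_off = pyo.Constraint(m.J, rule=lambda mdl, j: mdl.y[j] <= M * mdl.z[j])

# Surrogate objective: maximise the first network output over the inputs,
# standing in for a property such as boiling point in the CAMD case study.
m.obj = pyo.Objective(expr=m.y[0], sense=pyo.maximize)
# pyo.SolverFactory('gurobi').solve(m)  # any MILP solver can then be applied
```

Because every constraint is linear once the binaries z are fixed, a branch-and-bound MILP solver can certify global optimality of the embedded surrogate, which is what distinguishes this approach from gradient-based input optimisation.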
Related papers
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization.
In this study, we first explore the intrinsic characteristics of generative models.
We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative approaches to network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- Deep learning enhanced mixed integer optimization: Learning to reduce model dimensionality [0.0]
This work introduces a framework to address the computational complexity inherent in Mixed-Integer Programming.
By employing deep learning, we construct problem-specific models that identify and exploit common structures across MIP instances.
We present an algorithm for generating synthetic data enhancing the robustness and generalizability of our models.
arXiv Detail & Related papers (2024-01-17T19:15:13Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- Optimization Over Trained Neural Networks: Taking a Relaxing Walk [4.517039147450688]
We propose a more scalable solver based on exploring global and local linear relaxations of the neural network model.
Our solver is competitive with a state-of-the-art MILP solver and a prior heuristic, while producing better solutions as the input dimension, depth, and number of neurons increase.
arXiv Detail & Related papers (2024-01-07T11:15:00Z)
- The Convex Landscape of Neural Networks: Characterizing Global Optima and Stationary Points via Lasso Models [75.33431791218302]
We study the landscape of the non-convex Deep Neural Network (DNN) training objective.
In this paper we examine the use of convex Lasso-based recovery models for neural networks.
We show that all stationary points of the non-convex training objective can be characterized as global optima of a subsampled convex program.
arXiv Detail & Related papers (2023-12-19T23:04:56Z)
- Solve Large-scale Unit Commitment Problems by Physics-informed Graph Learning [1.1748284119769041]
Unit commitment (UC) problems are typically formulated as mixed-integer programs (MIPs) and solved by the branch-and-bound (B&B) scheme; a minimal sketch of such a formulation appears after this list.
Recent advances in graph neural networks (GNNs) make it possible to enhance the B&B algorithm in modern MIP solvers by learning to dive and branch.
arXiv Detail & Related papers (2023-11-26T07:17:45Z)
- Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs [63.768739279562105]
We show that for a particular choice of mask weights that do not depend on the learning targets, the induced kernel is equivalent to the neural tangent kernel (NTK) of the gated ReLU network on the training data.
A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal MKL kernel on the training set.
arXiv Detail & Related papers (2023-09-26T17:42:52Z)
- Optimization-Informed Neural Networks [0.6853165736531939]
We propose optimization-informed neural networks (OINN) to solve constrained nonlinear optimization problems (CNLPs).
In a nutshell, OINN transforms a CNLP into a neural network training problem.
The effectiveness of the proposed approach is demonstrated through a collection of classical problems.
arXiv Detail & Related papers (2022-10-05T09:28:55Z)
- On Representing Linear Programs by Graph Neural Networks [30.998508318941017]
Graph neural networks (GNNs) are considered suitable machine learning models for optimization problems.
This paper establishes the theoretical foundation of applying GNNs to solving linear program (LP) problems.
We show that properly built GNNs can reliably predict feasibility, boundedness, and an optimal solution for each LP in a broad class; a common bipartite graph encoding of LPs is sketched after this list.
arXiv Detail & Related papers (2022-09-25T18:27:33Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
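As flagged in the unit-commitment entry above, a toy single-period UC model shows why these problems are MIPs that B&B must handle: on/off commitment decisions are binary while dispatch is continuous. This is a minimal Pyomo sketch with assumed generator data, not that paper's model, which would add time periods, ramping, and reserve constraints.

```python
# Toy single-period unit commitment MILP (a minimal sketch; the generator
# data and demand below are illustrative assumptions).
import pyomo.environ as pyo

gens = {               # name: (min output, max output, cost per MW, fixed cost)
    'g1': (10.0, 50.0, 2.0, 5.0),
    'g2': (20.0, 80.0, 1.5, 8.0),
}
demand = 90.0          # assumed system demand in MW

m = pyo.ConcreteModel()
m.G = pyo.Set(initialize=list(gens))
m.u = pyo.Var(m.G, within=pyo.Binary)            # commitment decision (on/off)
m.p = pyo.Var(m.G, within=pyo.NonNegativeReals)  # dispatched power

# Output must lie in [pmin, pmax] when committed, and be zero otherwise
m.pmin = pyo.Constraint(m.G, rule=lambda mdl, g: mdl.p[g] >= gens[g][0] * mdl.u[g])
m.pmax = pyo.Constraint(m.G, rule=lambda mdl, g: mdl.p[g] <= gens[g][1] * mdl.u[g])
m.balance = pyo.Constraint(expr=sum(m.p[g] for g in m.G) == demand)

# Minimise variable plus fixed (commitment) cost; B&B handles the binaries
m.cost = pyo.Objective(
    expr=sum(gens[g][2] * m.p[g] + gens[g][3] * m.u[g] for g in m.G),
    sense=pyo.minimize)
# pyo.SolverFactory('cbc').solve(m)  # any MIP solver
```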
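Likewise, for the entry on representing linear programs by GNNs: a common encoding in the learning-to-optimise literature (that paper's exact construction may differ) treats an LP min c^T x s.t. Ax <= b, x >= 0 as a bipartite graph, with variable and constraint nodes joined by the nonzeros of A. A minimal sketch:

```python
# Bipartite graph encoding of an LP for a GNN (a minimal sketch under the
# assumed node features: objective coefficients for variables, right-hand
# sides for constraints).
import numpy as np

def lp_to_bipartite(A, b, c):
    """Return node features and a weighted edge list for a GNN over the LP."""
    var_feats = c.reshape(-1, 1)            # one feature per variable node: cost
    con_feats = b.reshape(-1, 1)            # one feature per constraint node: rhs
    rows, cols = np.nonzero(A)              # edges exist where A is nonzero
    edges = list(zip(rows.tolist(), cols.tolist()))  # (constraint, variable)
    weights = A[rows, cols]                 # coefficient attached to each edge
    return var_feats, con_feats, edges, weights

A = np.array([[1.0, 2.0], [3.0, 0.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0])
print(lp_to_bipartite(A, b, c))
```

A GNN then message-passes over these edges, so its predictions are by construction invariant to row and column permutations of the LP data.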
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.