Bayesian Optimisation of Functions on Graphs
- URL: http://arxiv.org/abs/2306.05304v2
- Date: Sun, 29 Oct 2023 17:16:12 GMT
- Title: Bayesian Optimisation of Functions on Graphs
- Authors: Xingchen Wan, Pierre Osselin, Henry Kenlay, Binxin Ru, Michael A.
Osborne, Xiaowen Dong
- Abstract summary: We propose a novel Bayesian optimisation framework that optimises over functions defined on generic, large-scale and potentially unknown graphs.
Through the learning of suitable kernels on graphs, our framework has the advantage of adapting to the behaviour of the target function.
The local modelling approach further guarantees the efficiency of our method.
- Score: 33.97967750232631
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing availability of graph-structured data motivates the task of
optimising over functions defined on the node set of graphs. Traditional graph
search algorithms can be applied in this case, but they may be
sample-inefficient and do not make use of information about the function
values; on the other hand, Bayesian optimisation is a class of promising
black-box solvers with superior sample efficiency, but it has scarcely been
applied to such novel setups. To fill this gap, we propose a novel
Bayesian optimisation framework that optimises over functions defined on
generic, large-scale and potentially unknown graphs. Through the learning of
suitable kernels on graphs, our framework has the advantage of adapting to the
behaviour of the target function. The local modelling approach further
guarantees the efficiency of our method. Extensive experiments on both
synthetic and real-world graphs demonstrate the effectiveness of the proposed
optimisation framework.
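To make the setup concrete, the following is a minimal sketch (ours, not the authors' implementation) of Bayesian optimisation over the node set of a graph: a Gaussian process with a diffusion kernel derived from the graph Laplacian serves as the surrogate, and an upper-confidence-bound rule selects the next node to query. All names, the toy graph, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected cycle graph of n nodes, adjacency matrix A.
n = 30
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Graph Laplacian and a diffusion kernel K = exp(-beta * L),
# one common choice of kernel on graphs.
D = np.diag(A.sum(axis=1))
L = D - A
beta = 0.5
w, V = np.linalg.eigh(L)
K = V @ np.diag(np.exp(-beta * w)) @ V.T

# "Unknown" objective defined on the node set (a smooth synthetic signal).
f = np.cos(2 * np.pi * np.arange(n) / n)

def gp_posterior(obs_idx, obs_y, noise=1e-6):
    """GP posterior mean/variance at every node, given node observations."""
    Kxx = K[np.ix_(obs_idx, obs_idx)] + noise * np.eye(len(obs_idx))
    Ksx = K[:, obs_idx]
    mu = Ksx @ np.linalg.solve(Kxx, obs_y)
    var = np.diag(K) - np.einsum("ij,ji->i", Ksx, np.linalg.solve(Kxx, Ksx.T))
    return mu, np.maximum(var, 0.0)

# BO loop: start from one random node, pick the next node by UCB.
obs_idx = [int(rng.integers(n))]
obs_y = [f[obs_idx[0]]]
for _ in range(10):
    mu, var = gp_posterior(np.array(obs_idx), np.array(obs_y))
    ucb = mu + 2.0 * np.sqrt(var)
    ucb[obs_idx] = -np.inf          # do not re-query observed nodes
    nxt = int(np.argmax(ucb))
    obs_idx.append(nxt)
    obs_y.append(f[nxt])

print("best node found:", obs_idx[int(np.argmax(obs_y))],
      "value:", max(obs_y))
```

The diffusion kernel stands in for the learned kernels of the paper; in practice its hyperparameters (here `beta`) would be fit to the observed function values rather than fixed.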
Related papers
- Fast Semi-supervised Learning on Large Graphs: An Improved Green-function Method [93.04936336359502]
In graph-based semi-supervised learning, the Green-function method is a classical method that works by computing the Green's function in the graph space.
We make a detailed analysis on it and propose a novel method from the perspective of optimization.
Unlike the original method, our improved method can also apply two accelerating techniques: Gaussian Elimination and Anchored Graphs.
arXiv Detail & Related papers (2024-11-04T04:27:18Z) - Bayesian Optimization of Functions over Node Subsets in Graphs [14.670181702535825]
We propose a novel framework for optimization on graphs.
We map each $k$-node subset of the original graph to a node in a new graph.
Experiments under both synthetic and real-world setups demonstrate the effectiveness of the proposed BO framework.
arXiv Detail & Related papers (2024-05-24T00:24:55Z) - Functional Causal Bayesian Optimization [21.67333624383642]
fCBO is a method for finding interventions that optimize a target variable in a known causal graph.
We introduce graphical criteria that establish when considering functional interventions is beneficial, and conditions under which selected interventions are also optimal for conditional target effects.
arXiv Detail & Related papers (2023-06-10T11:02:53Z) - Bayesian Optimization with Informative Covariance [13.113313427848828]
We propose novel informative covariance functions for optimization, leveraging nonstationarity to encode preferences for certain regions of the search space.
We demonstrate that the proposed functions can increase the sample efficiency of Bayesian optimization in high dimensions, even under weak prior information.
arXiv Detail & Related papers (2022-08-04T15:05:11Z) - Alternately Optimized Graph Neural Networks [33.98939289745346]
We propose a new optimization framework for semi-supervised learning on graphs.
The proposed framework can be conveniently solved by the alternating optimization algorithms, resulting in significantly improved efficiency.
arXiv Detail & Related papers (2022-06-08T01:50:08Z) - Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations naturally lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimisation, which does not require first-order gradient information.
We show that with a careful design of coordinate importance sampling, the proposed ZO optimisation method is efficient both in terms of complexity and function query cost.
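As a concrete illustration of the zeroth-order idea, here is a minimal sketch (ours, not the paper's method) that estimates a gradient purely from function evaluations along random directions and uses it for descent; the paper's coordinate importance sampling is not reproduced here, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Black-box objective: only function values are available, minimum at x = 1.
    return float(np.sum((x - 1.0) ** 2))

def zo_gradient(f, x, num_dirs=20, mu=1e-4):
    """Average forward-difference gradient estimates along random unit directions."""
    d = len(x)
    g = np.zeros(d)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        g += (f(x + mu * u) - f(x)) / mu * u
    # The factor d corrects for E[u u^T] = I/d on the unit sphere.
    return g * d / num_dirs

x = np.zeros(5)
for _ in range(200):
    x -= 0.05 * zo_gradient(f, x)

print(f(x))  # should be close to 0
```

Each descent step here costs `num_dirs + 1` function queries; methods like the one above trade query budget against estimator variance, which is exactly the axis the paper's sampling design improves.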
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Are we Forgetting about Compositional Optimisers in Bayesian
Optimisation? [66.39551991177542]
Bayesian optimisation presents a sample-efficient methodology for global optimisation.
Within this framework, a crucial performance-determining subroutine is the maximisation of the acquisition function.
We highlight the empirical advantages of the compositional approach to acquisition function maximisation across 3958 individual experiments.
arXiv Detail & Related papers (2020-12-15T12:18:38Z) - Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
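To illustrate the style of augmentation, below is a minimal sketch (ours, not the FLAG implementation) that applies gradient-ascent feature perturbations during the training of a plain logistic classifier; FLAG applies the same idea inside a GNN training loop, and the model, data, and constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 100, 8
X = rng.standard_normal((n, d))            # node features
w_true = rng.standard_normal(d)
y = (X @ w_true > 0).astype(float)         # binary node labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grads(w, X, y):
    """Logistic-loss gradients w.r.t. the weights and w.r.t. the features."""
    p = sigmoid(X @ w)
    gw = X.T @ (p - y) / len(y)
    gX = np.outer(p - y, w) / len(y)
    return gw, gX

w = np.zeros(d)
step_size, ascent_steps, alpha = 0.5, 3, 0.01
for _ in range(300):
    # Inner loop: craft an adversarial feature perturbation delta by
    # ascending the loss, as in gradient-based adversarial training.
    delta = np.zeros_like(X)
    for _ in range(ascent_steps):
        _, gX = grads(w, X + delta, y)
        delta += alpha * np.sign(gX)
    # Outer step: descend the loss on the perturbed (augmented) features.
    gw, _ = grads(w, X + delta, y)
    w -= step_size * gw

acc = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
print("train accuracy:", acc)
```

The perturbation is "free" in FLAG's sense because the feature gradients fall out of the same backward pass used for the weight update; in this sketch that is mirrored by `grads` returning both gradients at once.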
arXiv Detail & Related papers (2020-10-19T21:51:47Z) - Composition of kernel and acquisition functions for High Dimensional
Bayesian Optimization [0.1749935196721634]
We exploit the additivity of the objective function by mapping it into both the kernel and the acquisition function of the Bayesian Optimization.
This approach makes the learning/updating of the probabilistic surrogate model more efficient.
Results are presented for a real-life application, namely the control of pumps in urban water distribution systems.
arXiv Detail & Related papers (2020-03-09T15:45:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.