Simulated annealing for optimization of graphs and sequences
- URL: http://arxiv.org/abs/2110.01384v1
- Date: Fri, 1 Oct 2021 01:12:19 GMT
- Title: Simulated annealing for optimization of graphs and sequences
- Authors: Xianggen Liu, Pengyong Li, Fandong Meng, Hao Zhou, Huasong Zhong, Jie Zhou, Lili Mou and Sen Song
- Abstract summary: We present SAGS, a novel Simulated Annealing framework for Graph and Sequence optimization.
We start by defining a sophisticated objective function, involving the property of interest and pre-defined constraints.
SAGS searches the discrete space towards this objective by performing a sequence of local edits.
- Score: 44.974718959154266
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Optimization of discrete structures aims to generate, from an existing structure, a new one with a better property, which is a fundamental problem in machine learning. Unlike continuous optimization, realistic applications of discrete optimization (e.g., text generation) are very challenging because discrete structures carry complex, long-range constraints spanning both syntax and semantics. In this work, we present SAGS, a novel Simulated Annealing framework for Graph and Sequence optimization. The key idea is to integrate powerful neural networks into metaheuristics (e.g., simulated annealing, SA) to restrict the search space in discrete optimization. We start by defining a sophisticated objective function, involving the property of interest and pre-defined constraints (e.g., grammar validity). SAGS then searches the discrete space towards this objective by performing a sequence of local edits, where deep generative neural networks propose the edit content and can thus control the quality of the edits. We evaluate SAGS on paraphrase generation and molecule generation, i.e., sequence optimization and graph optimization, respectively. Extensive results show that our approach achieves state-of-the-art performance compared with existing paraphrase generation methods in terms of both automatic and human evaluations. Further, SAGS also significantly outperforms all previous methods in molecule generation.
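To make the search procedure concrete, below is a minimal Python sketch of a SAGS-style annealing loop as described in the abstract: a composite objective combines the property of interest with constraint terms, a generative model proposes local edits, and a Metropolis criterion with a cooling temperature decides whether each edit is accepted. This is an illustrative sketch, not the authors' released code; `property_score`, `constraint_score`, and `propose_edit` are hypothetical placeholders for the paper's learned scorers and generative editor.

```python
import math
import random

def objective(x, property_score, constraint_score, weight=1.0):
    """Composite objective: property of interest plus weighted constraint terms."""
    return property_score(x) + weight * constraint_score(x)

def simulated_annealing(x0, propose_edit, property_score, constraint_score,
                        steps=1000, t0=1.0, decay=0.995, seed=0):
    """Maximize the composite objective by a sequence of accepted local edits."""
    rng = random.Random(seed)
    x = best = x0
    f_x = f_best = objective(x, property_score, constraint_score)
    t = t0
    for _ in range(steps):
        # The generative model proposes a local edit (e.g., a word replacement
        # for sequences, or an atom/bond change for molecular graphs).
        x_new = propose_edit(x, rng)
        f_new = objective(x_new, property_score, constraint_score)
        # Metropolis acceptance: always accept improvements; accept worse
        # candidates with a probability that shrinks as the temperature cools.
        if f_new >= f_x or rng.random() < math.exp((f_new - f_x) / max(t, 1e-8)):
            x, f_x = x_new, f_new
            if f_x > f_best:
                best, f_best = x, f_x
        t *= decay  # cooling schedule
    return best
```

The role of the neural proposer here is only to restrict the candidate pool to plausible edits; the acceptance rule and cooling schedule are standard simulated annealing.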
Related papers
- Explicit and Implicit Graduated Optimization in Deep Neural Networks [0.6906005491572401]
This paper experimentally evaluates the performance of an explicit graduated optimization algorithm with an optimal noise scheduling.
In addition, it demonstrates its effectiveness through experiments on image classification tasks with ResNet architectures.
arXiv Detail & Related papers (2024-12-16T07:23:22Z) - Evolutionary Pre-Prompt Optimization for Mathematical Reasoning [45.461506988071534]
This paper explores the optimization of example selection for designing effective chain-of-thought pre-prompts.
It shows that the choice of algorithm, typically favoring comparison-based methods such as evolutionary computation, significantly enhances efficacy and feasibility.
arXiv Detail & Related papers (2024-12-05T16:12:06Z) - PGSO: Prompt-based Generative Sequence Optimization Network for Aspect-based Sentiment Analysis [9.617652261815671]
We introduce two sequence optimization strategies: the rule-based static optimization and the score-based dynamic optimization.
Based on the dynamic optimization structure, we propose a unified Prompt-based Generative Sequence Optimization network (named PGSO).
Experiments conducted on four ABSA tasks across multiple benchmarks indicate that PGSO outperforms state-of-the-art methods, with an average improvement of 3.52% in F1 score.
arXiv Detail & Related papers (2024-12-01T10:49:55Z) - Towards Explainable Evolution Strategies with Large Language Models [0.0]
This paper introduces an approach that integrates self-adaptive Evolution Strategies (ES) with Large Language Models (LLMs).
By employing a self-adaptive ES equipped with a restart mechanism, we effectively navigate the challenging landscapes of benchmark functions.
An LLM is then utilized to process these logs, generating concise, user-friendly summaries.
arXiv Detail & Related papers (2024-07-11T09:28:27Z) - Performance Embeddings: A Similarity-based Approach to Automatic Performance Optimization [71.69092462147292]
Performance embeddings enable knowledge transfer of performance tuning between applications.
We demonstrate this transfer tuning approach on case studies in deep neural networks, dense and sparse linear algebra compositions, and numerical weather prediction stencils.
arXiv Detail & Related papers (2023-03-14T15:51:35Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z) - EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results prove that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to a significantly smaller and computationally cheaper sub-solver for lower bounding.
In total, the proposed method reduces convergence time by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.