Discovering Representations for Black-box Optimization
- URL: http://arxiv.org/abs/2003.04389v2
- Date: Sun, 5 Jul 2020 13:28:03 GMT
- Title: Discovering Representations for Black-box Optimization
- Authors: Adam Gaier, Alexander Asteroth, Jean-Baptiste Mouret
- Abstract summary: We show that black-box optimization encodings can be automatically learned, rather than hand designed.
We show that learned representations make it possible to solve high-dimensional problems with orders of magnitude fewer evaluations than the standard MAP-Elites.
- Score: 73.59962178534361
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The encoding of solutions in black-box optimization is a delicate,
handcrafted balance between expressiveness and domain knowledge -- between
exploring a wide variety of solutions, and ensuring that those solutions are
useful. Our main insight is that this process can be automated by generating a
dataset of high-performing solutions with a quality diversity algorithm (here,
MAP-Elites), then learning a representation with a generative model (here, a
Variational Autoencoder) from that dataset. Our second insight is that this
representation can be used to scale quality diversity optimization to higher
dimensions -- but only if we carefully mix solutions generated with the learned
representation and those generated with traditional variation operators. We
demonstrate these capabilities by learning a low-dimensional encoding for the
inverse kinematics of a thousand-joint planar arm. The results show that
learned representations make it possible to solve high-dimensional problems
with orders of magnitude fewer evaluations than the standard MAP-Elites, and
that, once solved, the produced encoding can be used for rapid optimization of
novel, but similar, tasks. The presented techniques not only scale up quality
diversity algorithms to high dimensions, but show that black-box optimization
encodings can be automatically learned, rather than hand designed.
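To make the pipeline concrete, the following is a minimal sketch, not the authors' released code: a MAP-Elites loop on a small planar-arm inverse kinematics task in which variation alternates between a traditional Gaussian mutation and a perturbation in a low-dimensional latent space fit on the current archive. A linear encoding (PCA via SVD) stands in for the paper's Variational Autoencoder, and the arm size, archive resolution, mutation rates, and fitness definition are illustrative assumptions.
```python
# Hedged sketch: MAP-Elites with mixed variation operators (standard mutation
# vs. perturbation in a learned low-dimensional representation). PCA stands in
# for the paper's VAE; all sizes and rates below are assumptions.
import numpy as np

N_JOINTS, N_BINS, LATENT_DIM = 20, 50, 4
rng = np.random.default_rng(0)

def evaluate(angles):
    """Forward kinematics: behavior = end-effector (x, y), fitness = -variance of joint angles."""
    cum = np.cumsum(angles)
    xy = np.array([np.cos(cum).sum(), np.sin(cum).sum()]) / N_JOINTS
    return -np.var(angles), xy

def bin_of(xy):
    idx = np.clip(((xy + 1.0) / 2.0 * N_BINS).astype(int), 0, N_BINS - 1)
    return tuple(idx)

archive, fitness = {}, {}

def try_insert(angles):
    f, xy = evaluate(angles)
    b = bin_of(xy)
    if b not in fitness or f > fitness[b]:
        archive[b], fitness[b] = angles, f

# Bootstrap the archive with random solutions.
for _ in range(200):
    try_insert(rng.uniform(-np.pi, np.pi, N_JOINTS))

for it in range(5000):
    elites = list(archive.values())
    parent = elites[rng.integers(len(elites))]
    if it % 2 == 0 or len(elites) < LATENT_DIM + 1:
        # Traditional variation operator: isotropic Gaussian mutation.
        child = parent + rng.normal(0, 0.1, N_JOINTS)
    else:
        # "Learned representation" variation: fit a linear latent space (PCA)
        # on the current elites and perturb the parent in that space.
        X = np.array(elites)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = Vt[:LATENT_DIM]                                  # (LATENT_DIM, N_JOINTS)
        z = (parent - mean) @ basis.T                            # encode
        child = mean + (z + rng.normal(0, 0.1, LATENT_DIM)) @ basis  # decode
    try_insert(np.clip(child, -np.pi, np.pi))

print(f"filled {len(archive)} / {N_BINS * N_BINS} bins")
```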
Related papers
- Optimizing Attention with Mirror Descent: Generalized Max-Margin Token Selection [6.759148939470332]
We show that these algorithms converge in direction to a generalized hard-margin SVM with an $\ell_p$-norm objective.
arXiv Detail & Related papers (2024-10-18T16:32:06Z)
- An Expandable Machine Learning-Optimization Framework to Sequential Decision-Making [0.0]
We present an integrated prediction-optimization (PredOpt) framework to efficiently solve sequential decision-making problems.
We address the key issues of sequential dependence, infeasibility, and generalization in machine learning (ML) to make predictions for optimal solutions to problem instances.
arXiv Detail & Related papers (2023-11-12T21:54:53Z)
- On the Suitability of Representations for Quality Diversity Optimization of Shapes [77.34726150561087]
The representation, or encoding, utilized in evolutionary algorithms has a substantial effect on their performance.
This study compares the impact of several representations, including direct encoding, a dictionary-based representation, parametric encoding, compositional pattern producing networks, and cellular automata, on the generation of voxelized meshes.
arXiv Detail & Related papers (2023-04-07T07:34:23Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
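As a quick illustration of the ZO-signGD update, here is a hedged sketch on a toy continuous objective rather than the Guacamol molecular benchmarks; the query budget, smoothing radius, and step size are assumed values, not the paper's settings.
```python
# Illustrative sketch of zeroth-order sign-based gradient descent (ZO-signGD):
# the gradient is estimated from function queries only, and the update uses
# the sign of the estimate.
import numpy as np

def zo_sign_gd(f, x0, steps=200, n_queries=20, mu=1e-2, lr=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(steps):
        grad_est = np.zeros_like(x)
        for _ in range(n_queries):
            u = rng.normal(size=x.shape)                # random direction
            # Two-point zeroth-order gradient estimate along u.
            grad_est += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
        grad_est /= n_queries
        x -= lr * np.sign(grad_est)                     # sign-based update
    return x

# Example: minimize a shifted quadratic using only function evaluations.
target = np.linspace(-1.0, 1.0, 10)
x_opt = zo_sign_gd(lambda x: np.sum((x - target) ** 2), np.zeros(10))
print(np.round(x_opt, 2))
```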
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Linear Matrix Factorization Embeddings for Single-objective Optimization Landscapes [0.755972004983746]
In black-box optimization, features have to be derived from a set of $(x,f(x))$ samples.
We show that a linear dimensionality reduction via matrix factorization significantly contributes towards a better detection of correlation between different problem instances.
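The sketch below illustrates the general idea under stated assumptions: per-instance feature vectors are computed from $(x,f(x))$ samples (simple summary statistics stand in for proper exploratory landscape analysis features), and a truncated SVD provides the linear matrix factorization embedding. It is not the paper's pipeline.
```python
# Hedged sketch: derive features from (x, f(x)) samples per problem instance,
# then embed instances with a linear matrix factorization (truncated SVD).
import numpy as np

rng = np.random.default_rng(1)

def sample_features(f, dim=5, n_samples=256):
    X = rng.uniform(-5, 5, size=(n_samples, dim))
    y = np.array([f(x) for x in X])
    # Illustrative stand-in features: distribution statistics of f values
    # and simple |correlation| between each coordinate and f(x).
    return np.concatenate([
        [y.mean(), y.std(), np.median(y), y.min(), y.max()],
        np.abs(np.corrcoef(X.T, y)[-1, :-1]),
    ])

# A few toy problem instances (shifted spheres and a Rastrigin-like function).
instances = [lambda x, s=s: np.sum((x - s) ** 2) for s in (0.0, 1.0, 2.0)]
instances.append(lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10))

F = np.array([sample_features(f) for f in instances])       # (instances, features)
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-12)           # standardize columns
U, S, Vt = np.linalg.svd(F, full_matrices=False)
embedding = U[:, :2] * S[:2]                                 # 2-D instance embedding
print(np.round(embedding, 2))
```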
arXiv Detail & Related papers (2020-09-30T08:46:54Z)
- A Flexible Framework for Designing Trainable Priors with Adaptive Smoothing and Game Encoding [57.1077544780653]
We introduce a general framework for designing and training neural network layers whose forward passes can be interpreted as solving non-smooth convex optimization problems.
We focus on convex games, solved by local agents represented by the nodes of a graph and interacting through regularization functions.
This approach is appealing for solving imaging problems, as it allows the use of classical image priors within deep models that are trainable end to end.
arXiv Detail & Related papers (2020-06-26T08:34:54Z)
- Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model for those parameters and then solving the problem using the predicted values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions that lead to better decisions on unobserved instances of the decision-making problem.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
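A hedged sketch of this formulation, not the authors' MetaSDF code: a tiny MLP is meta-trained so that its initialization adapts to the signed distance function of a new "shape" (here, 1-D intervals) in a few gradient steps. Reptile stands in as the gradient-based meta-learning algorithm, and the network size, step counts, and learning rates are assumptions.
```python
# Hedged sketch: meta-learn an initialization for fitting signed distance
# functions of simple 1-D shapes (intervals), using Reptile-style updates.
import numpy as np

rng = np.random.default_rng(0)
H = 32  # hidden units

def init_params():
    return [rng.normal(0, 0.5, (1, H)), np.zeros(H),
            rng.normal(0, 0.5, (H, 1)), np.zeros(1)]

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

def sgd_step(params, x, y, lr=0.05):
    """One MSE gradient step via manual backprop through the tiny MLP."""
    W1, b1, W2, b2 = params
    pred, h = forward(params, x)
    d_out = 2 * (pred - y) / len(x)            # dL/dpred
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = d_out @ W2.T * (1 - h ** 2)          # backprop through tanh
    dW1, db1 = x.T @ d_h, d_h.sum(0)
    return [W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2]

def sample_task():
    """A random 'shape': an interval with centre c and radius r; SDF(x) = |x - c| - r."""
    c, r = rng.uniform(-1, 1), rng.uniform(0.2, 0.8)
    x = rng.uniform(-2, 2, (64, 1))
    return x, np.abs(x - c) - r

params = init_params()
for _ in range(2000):                          # outer (meta) loop, Reptile-style
    x, y = sample_task()
    adapted = params
    for _ in range(5):                         # inner loop: adapt to one shape
        adapted = sgd_step(adapted, x, y)
    params = [p + 0.1 * (a - p) for p, a in zip(params, adapted)]  # meta-update

# Adapt from the meta-learned initialization to a new shape in a few steps.
x, y = sample_task()
fast = params
for _ in range(5):
    fast = sgd_step(fast, x, y)
print("post-adaptation MSE:", float(np.mean((forward(fast, x)[0] - y) ** 2)))
```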