Continuous Optimization Benchmarks by Simulation
- URL: http://arxiv.org/abs/2008.06249v1
- Date: Fri, 14 Aug 2020 08:50:57 GMT
- Title: Continuous Optimization Benchmarks by Simulation
- Authors: Martin Zaefferer and Frederik Rehbach
- Abstract summary: Benchmark experiments are required to test, compare, tune, and understand optimization algorithms.
Data from previous evaluations can be used to train surrogate models which are then used for benchmarking.
We show that the spectral simulation method enables simulation for continuous optimization problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Benchmark experiments are required to test, compare, tune, and understand
optimization algorithms. Ideally, benchmark problems closely reflect real-world
problem behavior. Yet, real-world problems are not always readily available for
benchmarking. For example, evaluation costs may be too high, or resources are
unavailable (e.g., software or equipment). As a solution, data from previous
evaluations can be used to train surrogate models which are then used for
benchmarking. The goal is to generate test functions on which the performance
of an algorithm is similar to that on the real-world objective function.
However, predictions from data-driven models tend to be smoother than the
ground-truth from which the training data is derived. This is especially
problematic when the training data becomes sparse. The resulting benchmarks may
not reflect the landscape features of the ground-truth, are too easy, and may
lead to biased conclusions. To resolve this, we use simulation of Gaussian
processes instead of estimation (or prediction). This retains the covariance
properties estimated during model training. While previous research suggested a
decomposition-based approach for a small-scale, discrete problem, we show that
the spectral simulation method enables simulation for continuous optimization
problems. In a set of experiments with an artificial ground-truth, we
demonstrate that this yields more accurate benchmarks than simply predicting
with the Gaussian process model.
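Below is a minimal sketch of the spectral (random Fourier feature) construction that underlies such simulations, assuming a Gaussian (squared-exponential) kernel whose lengthscale and variance have already been estimated from the training data. All names and parameter values are illustrative, not taken from the paper, and the sketch shows only an unconditional realization rather than the conditional simulation used for benchmarking.

```python
import numpy as np

def simulate_gp_spectral(lengthscale=0.2, variance=1.0, n_features=500,
                         dim=2, seed=None):
    """Draw one realization of a stationary GP with a squared-exponential
    kernel via spectral (random Fourier feature) simulation.

    The kernel hyperparameters would normally be those estimated when
    fitting a Gaussian process to the real-world evaluation data.
    """
    rng = np.random.default_rng(seed)
    # Bochner's theorem: frequencies of the SE kernel follow a Gaussian
    # spectral density with standard deviation 1 / lengthscale.
    omega = rng.normal(scale=1.0 / lengthscale, size=(n_features, dim))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n_features)

    def test_function(x):
        x = np.atleast_2d(np.asarray(x, dtype=float))   # shape (n, dim)
        proj = x @ omega.T + phase                      # shape (n, n_features)
        return np.sqrt(2.0 * variance / n_features) * np.cos(proj).sum(axis=1)

    return test_function

# Each seed yields a different test function that shares the estimated
# covariance structure; it can then be handed to an optimizer as a benchmark.
f = simulate_gp_spectral(lengthscale=0.2, dim=2, seed=1)
print(f([[0.1, 0.3], [0.5, 0.5]]))
```

Each call to the returned function evaluates one fixed realization, so it can serve as a repeatable test objective that retains the estimated covariance structure, in contrast to the overly smooth surface produced by GP prediction alone.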
Related papers
- Re-Evaluating LiDAR Scene Flow for Autonomous Driving [80.37947791534985]
Popular benchmarks for self-supervised LiDAR scene flow have unrealistic rates of dynamic motion, unrealistic correspondences, and unrealistic sampling patterns.
We evaluate a suite of top methods on a suite of real-world datasets.
We show that despite the emphasis placed on learning, most performance gains are caused by pre- and post-processing steps.
arXiv Detail & Related papers (2023-04-04T22:45:50Z) - Efficient Learning of Accurate Surrogates for Simulations of Complex Systems [0.0]
We introduce an online learning method empowered by adaptive sampling.
It ensures that all turning points on the model response surface are included in the training data.
We apply our method to simulations of nuclear matter to demonstrate that highly accurate surrogates can be reliably auto-generated.
arXiv Detail & Related papers (2022-07-11T20:51:11Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Model-based Offline Imitation Learning with Non-expert Data [7.615595533111191]
We propose a scalable model-based offline imitation learning algorithmic framework that leverages datasets collected by both suboptimal and optimal policies.
We show that the proposed method always outperforms Behavioral Cloning in the low data regime on simulated continuous control domains.
arXiv Detail & Related papers (2022-06-11T13:08:08Z) - Evaluating State-of-the-Art Classification Models Against Bayes Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
arXiv Detail & Related papers (2021-06-07T06:21:20Z) - Model-based Policy Optimization with Unsupervised Model Adaptation [37.09948645461043]
We investigate how to bridge the gap between real and simulated data due to inaccurate model estimation for better policy optimization.
We propose a novel model-based reinforcement learning framework AMPO, which introduces unsupervised model adaptation.
Our approach achieves state-of-the-art performance in terms of sample efficiency on a range of continuous control benchmark tasks.
arXiv Detail & Related papers (2020-10-19T14:19:42Z) - Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - AutoSimulate: (Quickly) Learning Synthetic Data Generation [70.82315853981838]
We propose an efficient alternative for optimal synthetic data generation based on a novel differentiable approximation of the objective.
We demonstrate that the proposed method finds the optimal data distribution faster (up to $50\times$), with significantly reduced training data generation (up to $30\times$) and better accuracy ($+8.7\%$) on real-world test datasets than previous methods.
arXiv Detail & Related papers (2020-08-16T11:36:11Z) - Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms [69.45237691598774]
We study the problem of least squares linear regression where the data-points are dependent and are sampled from a Markov chain.
We establish sharp information theoretic minimax lower bounds for this problem in terms of $\tau_{\mathsf{mix}}$.
We propose an algorithm based on experience replay, a popular reinforcement learning technique, that achieves a significantly better error rate.
arXiv Detail & Related papers (2020-06-16T04:26:50Z) - Coping With Simulators That Don't Always Return [15.980496707498535]
We investigate inefficiencies that arise from adding process noise to deterministic simulators that fail to return for certain inputs.
We show how to train a conditional normalizing flow to propose perturbations such that the simulator succeeds with high probability.
arXiv Detail & Related papers (2020-03-28T23:05:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.