Machine learning for improving performance in an evolutionary algorithm
for minimum path with uncertain costs given by massively simulated scenarios
- URL: http://arxiv.org/abs/2102.01830v1
- Date: Wed, 3 Feb 2021 01:38:35 GMT
- Title: Machine learning for improving performance in an evolutionary algorithm
for minimum path with uncertain costs given by massively simulated scenarios
- Authors: Ricardo Di Pasquale and Javier Marenco
- Abstract summary: We introduce an implementation for which machine learning techniques helped improve the overall performance of an evolutionary algorithm for an optimization problem.
In this big data optimization problem, a path achieving a good cost in most scenarios from an available set of scenarios (generated by a simulation process) must be obtained.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this work we introduce an implementation for which machine learning
techniques helped improve the overall performance of an evolutionary algorithm
for an optimization problem, namely a variation of robust minimum-cost path in
graphs. In this big data optimization problem, a path achieving a good cost in
most scenarios from an available set of scenarios (generated by a simulation
process) must be obtained. The most expensive task of our evolutionary
algorithm, in terms of computational resources, is the evaluation of candidate
paths: the fitness function must calculate the cost of the candidate path in
every generated scenario. Given the large number of scenarios, this task must
be implemented in a distributed environment. We implemented gradient boosting
decision trees to classify candidate paths in order to identify good
candidates. The cost of the not-so-good candidates is simply forecasted. We
We studied the training process, performance gains, accuracy, and other variables.
Our computational experiments show that the computational performance was
significantly improved at the expense of a limited loss of accuracy.
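The surrogate-assisted evaluation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature encoding, the simulated scenario costs, and the use of scikit-learn's gradient boosting models are all assumptions made for the example. A classifier flags promising candidate paths, which are evaluated exactly over all scenarios; the cost of the remaining candidates is simply forecasted by a regressor.

```python
# Hypothetical sketch of surrogate-assisted fitness evaluation:
# evaluate promising candidates exactly, forecast the rest.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)

def exact_cost(features, n_scenarios=200):
    # Stand-in for the expensive (distributed) evaluation: average the
    # path cost over many simulated scenarios (here, random perturbations).
    noise = rng.normal(0.0, 0.1, size=n_scenarios)
    return float(features.sum() + noise.mean())

# Training data: feature vectors of previously evaluated candidate paths
# and their exactly computed costs (synthetic in this sketch).
X_train = rng.random((300, 5))
y_cost = X_train.sum(axis=1) + rng.normal(0.0, 0.05, 300)
y_good = (y_cost <= np.median(y_cost)).astype(int)  # 1 = promising candidate

clf = GradientBoostingClassifier().fit(X_train, y_good)  # identifies good candidates
reg = GradientBoostingRegressor().fit(X_train, y_cost)   # forecasts cost of the rest

def fitness(features):
    x = features.reshape(1, -1)
    if clf.predict(x)[0] == 1:
        return exact_cost(features)   # expensive, exact evaluation over all scenarios
    return float(reg.predict(x)[0])   # cheap forecast for not-so-good candidates

candidate = rng.random(5)
print(fitness(candidate))
```

In an evolutionary loop, only the candidates the classifier marks as promising would trigger the distributed scenario evaluation, which is where the reported performance gain (at a limited loss of accuracy) would come from.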
Related papers
- Cost-Aware Query Policies in Active Learning for Efficient Autonomous Robotic Exploration [0.0]
This paper analyzes an AL algorithm for Gaussian Process regression while incorporating action cost.
A traditional uncertainty metric with a distance constraint best minimizes root-mean-square error over trajectory distance.
arXiv Detail & Related papers (2024-10-31T18:35:03Z)
- Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision to accelerate inference by dynamically assigning resources for each data instance.
Our method benefits from less cost during inference while keeping the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z)
- Practical Layout-Aware Analog/Mixed-Signal Design Automation with Bayesian Neural Networks [5.877728608070716]
Many learning-based algorithms require thousands of simulated data points, which is impractical for expensive-to-simulate circuits.
We propose a learning-based algorithm that can be trained using a small amount of data and, therefore, scalable to tasks with expensive simulations.
arXiv Detail & Related papers (2023-11-27T19:02:43Z)
- Score-Based Methods for Discrete Optimization in Deep Learning [30.446056972242616]
We investigate a score-based approximation framework to solve such problems.
We experimentally demonstrate, in adversarial set classification tasks, that our method achieves a superior trade-off in terms of speed and solution quality compared to existing methods.
arXiv Detail & Related papers (2023-10-15T17:14:17Z)
- Speeding-up Evolutionary Algorithms to solve Black-Box Optimization Problems [0.0]
Population-based evolutionary algorithms are often considered when approaching computationally expensive black-box optimization problems.
We propose a technique capable of choosing an appropriate approximate function cost during the execution of the optimization algorithm.
The proposal finds the minimum evaluation cost at which the solutions are still properly ranked, and consequently, more evaluations can be computed in the same amount of time with minimal accuracy loss.
arXiv Detail & Related papers (2023-09-23T11:54:46Z)
- Towards Understanding and Improving GFlowNet Training [71.85707593318297]
We introduce an efficient evaluation strategy to compare the learned sampling distribution to the target reward distribution.
We propose prioritized replay training of high-reward $x$, relative edge flow policy parametrization, and a novel guided trajectory balance objective.
arXiv Detail & Related papers (2023-05-11T22:50:41Z)
- Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
arXiv Detail & Related papers (2022-10-04T17:13:45Z)
- Trajectory-based Algorithm Selection with Warm-starting [2.3823600586675724]
We study the quality and accuracy of performance regression and algorithm selection models in the scenario of predicting different algorithm performances.
We show promising performance of the trajectory-based per-run algorithm selection with warm-starting.
arXiv Detail & Related papers (2022-04-13T14:00:55Z)
- Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (having no limit cycles) and to find an appropriate solution within a bounded number of gradient ascent steps.
arXiv Detail & Related papers (2021-06-02T22:03:36Z)
- Online Model Selection for Reinforcement Learning with Function Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z)
- Stochastic Optimization Forests [60.523606291705214]
We show how to train forest decision policies by growing trees that choose splits to directly optimize the downstream decision quality, rather than splitting to improve prediction accuracy as in the standard random forest algorithm.
We show that our approximate splitting criteria can reduce running time hundredfold, while achieving performance close to forest algorithms that exactly re-optimize for every candidate split.
arXiv Detail & Related papers (2020-08-17T16:56:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.