MISO-wiLDCosts: Multi Information Source Optimization with Location Dependent Costs
- URL: http://arxiv.org/abs/2102.04951v1
- Date: Tue, 9 Feb 2021 17:04:17 GMT
- Title: MISO-wiLDCosts: Multi Information Source Optimization with Location Dependent Costs
- Authors: Antonio Candelieri, Francesco Archetti
- Abstract summary: This paper addresses black-box optimization over multiple information sources whose fidelity and query cost both change over the search space, that is, they are location dependent.
The approach uses: (i) an Augmented Gaussian Process, recently proposed in multi-information source optimization as a single model of the objective function over the search space and sources, and (ii) a Gaussian Process to model the location-dependent cost of each source.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses black-box optimization over multiple information
sources whose fidelity and query cost both change over the search space, that
is, they are location dependent. The approach uses: (i) an Augmented Gaussian
Process, recently proposed in multi-information source optimization as a single
model of the objective function over the search space and sources, and (ii) a
Gaussian Process to model the location-dependent cost of each source. The
former is used within a Confidence Bound based acquisition function to select
the next source and location to query, while the latter penalizes the value of
the acquisition according to the expected query cost of each source-location
pair. The proposed approach is evaluated on a set of Hyperparameter
Optimization tasks, consisting of two Machine Learning classifiers and three
datasets of different sizes.
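As a concrete illustration, here is a minimal sketch of one cost-penalized Confidence Bound step, assuming a plain GP surrogate over (location, source) pairs as a stand-in for the Augmented Gaussian Process, one fitted cost GP per source, and a ratio penalization (bound divided by predicted cost); the abstract does not spell out the exact penalization, so the ratio form is an assumption.

    import numpy as np

    def next_query(surrogate, cost_gps, candidates, beta=2.0):
        """Pick the (source, location) pair maximizing a cost-penalized
        Confidence Bound.

        surrogate : fitted sklearn-style GP over [x, source_id] -> objective
                    (a stand-in for the Augmented Gaussian Process)
        cost_gps  : dict mapping source_id -> GP fitted on x -> query cost
        candidates: array of shape (n, d) of candidate locations
        """
        best_pair, best_score = None, -np.inf
        for s, cost_gp in cost_gps.items():
            # Evaluate the surrogate on every candidate paired with source s.
            X = np.hstack([candidates, np.full((len(candidates), 1), s)])
            mu, sigma = surrogate.predict(X, return_std=True)
            ucb = mu + beta * sigma  # optimistic estimate of the objective
            # Location-dependent expected cost of querying source s at x.
            cost = np.maximum(cost_gp.predict(candidates), 1e-6)
            score = ucb / cost  # assumed penalization: value per unit cost
            i = int(np.argmax(score))
            if score[i] > best_score:
                best_pair, best_score = (s, candidates[i]), score[i]
        return best_pair

Dividing the bound by the predicted cost makes cheap sources preferable unless an expensive source promises a substantially higher optimistic value, which matches the abstract's intent of penalizing the acquisition by expected query cost.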
Related papers
- SourceSplice: Source Selection for Machine Learning Tasks [3.3916160303055563]
Data quality plays a pivotal role in the predictive performance of machine learning (ML) tasks. This paper addresses the problem of determining the best subset of data sources that must be combined to construct the underlying training dataset. We propose SourceGrasp and SourceSplice, frameworks designed to efficiently select a suitable subset of sources.
arXiv Detail & Related papers (2025-07-29T19:29:52Z)
- MUSS: Multilevel Subset Selection for Relevance and Diversity [4.8254343133177295]
In recommender systems, one is interested in selecting relevant items while providing a diversified recommendation. We present a novel theoretical approach for analyzing this type of problem, and show that our method achieves a constant-factor approximation of the optimal objective.
arXiv Detail & Related papers (2025-03-14T06:37:17Z)
- Self-Steering Optimization: Autonomous Preference Optimization for Large Language Models [79.84205827056907]
We present Self-Steering Optimization ($SSO$), an algorithm that autonomously generates high-quality preference data. $SSO$ employs a specialized optimization objective to build a data generator from the policy model itself, which is used to produce accurate and on-policy data. Our evaluation shows that $SSO$ consistently outperforms baselines in human preference alignment and reward optimization.
arXiv Detail & Related papers (2024-10-22T16:04:03Z)
- Training Greedy Policy for Proposal Batch Selection in Expensive Multi-Objective Combinatorial Optimization [52.80408805368928]
We introduce a novel greedy-style subset selection algorithm for batch acquisition.
Our experiments on the red fluorescent proteins show that our proposed method matches the baseline performance with 1.69x fewer queries.
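For context, here is a minimal sketch of the generic greedy pattern such batch-selection methods follow; the score function is a hypothetical stand-in for the learned batch acquisition, not the paper's actual objective.

    def greedy_batch(candidates, score, batch_size):
        # Add one item at a time, each maximizing the marginal gain of the
        # batch score; a generic stand-in for greedy-style subset selection.
        batch, pool = [], list(candidates)
        for _ in range(batch_size):
            best = max(pool, key=lambda c: score(batch + [c]) - score(batch))
            batch.append(best)
            pool.remove(best)
        return batch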
arXiv Detail & Related papers (2024-06-21T05:57:08Z)
- An adaptive approach to Bayesian Optimization with switching costs [0.8246494848934447]
We investigate modifications to Bayesian Optimization for a resource-constrained setting of sequential experimental design.
We adapt two process-constrained batch algorithms to this sequential problem formulation, and propose two new methods: one cost-aware and one cost-ignorant.
arXiv Detail & Related papers (2024-05-14T21:55:02Z)
- Estimating Barycenters of Distributions with Neural Optimal Transport [93.28746685008093]
We propose a new scalable approach for solving the Wasserstein barycenter problem.
Our methodology is based on the recent Neural OT solver.
We also establish theoretical error bounds for our proposed approach.
arXiv Detail & Related papers (2024-02-06T09:17:07Z)
- Optimal Data Selection: An Online Distributed View [61.31708750038692]
We develop algorithms for the online and distributed version of the problem.
In learning tasks on ImageNet and MNIST, we show that our selection methods outperform random selection by $5-20%$.
arXiv Detail & Related papers (2022-01-25T18:56:16Z)
- Learning a Large Neighborhood Search Algorithm for Mixed Integer Programs [6.084888301899142]
We consider a learning-based LNS approach for mixed integer programs (MIPs).
We train a Neural Diving model to represent a probability distribution over assignments, which, together with an off-the-shelf MIP solver, generates an initial assignment.
We train a Neural Neighborhood Selection policy to select a search neighborhood at each step, which is searched using a MIP solver to find the next assignment.
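A hypothetical sketch of this two-stage loop; diving_model, selection_policy, and the solver interface are assumed placeholders rather than the paper's actual components.

    def learned_lns(mip, diving_model, selection_policy, solver, steps=50):
        # Neural Diving proposes values for integer variables; the MIP solver
        # completes them into a feasible initial assignment.
        assignment = solver.complete(mip, diving_model.propose(mip))
        for _ in range(steps):
            # The learned policy picks which variables to unfix, defining
            # the search neighborhood for this step.
            free_vars = selection_policy.select(mip, assignment)
            # The solver re-optimizes only over the chosen neighborhood.
            assignment = solver.solve_subproblem(mip, assignment, free_vars)
        return assignment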
arXiv Detail & Related papers (2021-07-21T16:43:46Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Multi-Fidelity Multi-Objective Bayesian Optimization: An Output Space Entropy Search Approach [44.25245545568633]
We study the novel problem of black-box optimization of multiple objectives via multi-fidelity function evaluations.
Our experiments on several synthetic and real-world benchmark problems show that MF-OSEMO, with both approximations, significantly improves over the state-of-the-art single-fidelity algorithms.
arXiv Detail & Related papers (2020-11-02T06:59:04Z)
- Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations [44.25245545568633]
We propose Information-Theoretic Multi-Objective Bayesian Optimization with Continuous Approximations (iMOCA) to solve this problem.
Our experiments on diverse synthetic and real-world benchmarks show that iMOCA significantly improves over existing single-fidelity methods.
arXiv Detail & Related papers (2020-09-12T01:46:03Z)
- Resource Allocation via Model-Free Deep Learning in Free Space Optical Communications [119.81868223344173]
The paper investigates the general problem of resource allocation for mitigating channel fading effects in Free Space Optical (FSO) communications.
Under this framework, we propose two algorithms that solve FSO resource allocation problems.
arXiv Detail & Related papers (2020-07-27T17:38:51Z)
- Green Machine Learning via Augmented Gaussian Processes and Multi-Information Source Optimization [0.19116784879310028]
A strategy to drastically reduce computational time and energy consumption is to exploit the availability of different information sources.
An Augmented Gaussian Process method exploiting multiple information sources (namely, AGP-MISO) is proposed.
A novel acquisition function is defined according to the Augmented Gaussian Process.
arXiv Detail & Related papers (2020-06-25T08:04:48Z)
- Incorporating Expert Prior in Bayesian Optimisation via Space Warping [54.412024556499254]
In large search spaces, the algorithm passes through several low-function-value regions before reaching the optimum of the function.
One approach to shorten this cold-start phase is to use prior knowledge that can accelerate the optimisation.
In this paper, we represent the prior knowledge about the function optimum through a prior distribution.
The prior distribution is then used to warp the search space in such a way that the space expands around the high-probability region of the function optimum and shrinks around the low-probability regions.
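A minimal sketch of the warping idea, assuming a Gaussian prior over the optimum's location and an independent inverse-CDF map per dimension; both choices are illustrative assumptions, not the paper's exact construction.

    import numpy as np
    from scipy.stats import norm

    def warp(u, prior_mean, prior_std, low, high):
        # Map uniform samples u in (0, 1)^d into the box [low, high]^d,
        # densified around prior_mean via the prior's inverse CDF.
        x = norm.ppf(u, loc=prior_mean, scale=prior_std)
        return np.clip(x, low, high)

    # Usage: uniformly drawn candidates become concentrated near the
    # believed optimum at (0.7, 0.3).
    u = np.random.rand(1000, 2)
    candidates = warp(u, prior_mean=np.array([0.7, 0.3]), prior_std=0.1,
                      low=0.0, high=1.0)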
arXiv Detail & Related papers (2020-03-27T06:18:49Z)