Collaborative and Distributed Bayesian Optimization via Consensus:
Showcasing the Power of Collaboration for Optimal Design
- URL: http://arxiv.org/abs/2306.14348v2
- Date: Sat, 9 Mar 2024 23:37:40 GMT
- Title: Collaborative and Distributed Bayesian Optimization via Consensus:
Showcasing the Power of Collaboration for Optimal Design
- Authors: Xubo Yue, Raed Al Kontar, Albert S. Berahas, Yang Liu, Blake N.
Johnson
- Abstract summary: We propose a new collaborative paradigm for Bayesian optimization.
Our approach provides a generic and flexible framework that can incorporate different collaboration mechanisms.
We show that our framework can effectively accelerate and improve the optimal design process and benefit all participants.
- Score: 7.066023612489374
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optimal design is a critical yet challenging task within many applications.
This challenge arises from the need for extensive trial and error, often done
through simulations or running field experiments. Fortunately, sequential
optimal design, also referred to as Bayesian optimization when using surrogates
with a Bayesian flavor, has played a key role in accelerating the design
process through efficient sequential sampling strategies. However, a key
opportunity exists today: the increased connectivity of edge devices sets forth
a new collaborative paradigm for Bayesian optimization, one whereby different
clients borrow strength from each other by effectively distributing their
experimentation efforts to improve and fast-track their optimal design
processes. To this end, we bring the notion of
consensus to Bayesian optimization, where clients agree (i.e., reach a
consensus) on their next-to-sample designs. Our approach provides a generic and
flexible framework that can incorporate different collaboration mechanisms. In
light of this, we propose transitional collaboration mechanisms where clients
initially rely more on each other to maneuver through the early stages with
scant data, then, at the late stages, focus on their own objectives to get
client-specific solutions. Theoretically, we show the sub-linear growth in
regret for our proposed framework. Empirically, through simulated datasets and
a real-world collaborative sensor design experiment, we show that our framework
can effectively accelerate and improve the optimal design process and benefit
all participants.
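The transitional consensus idea described in the abstract can be illustrated with a toy sketch. This is not the authors' algorithm: in place of a Gaussian-process surrogate and acquisition function, it scores candidate designs directly with each client's objective, and the exponential weight schedule in `transitional_weight` is a hypothetical choice. Only the consensus blend itself (lean on the group's agreement in the scant-data early stages, then shift to each client's own proposal for client-specific solutions) mirrors the mechanism the abstract describes.

```python
import math
import random

def transitional_weight(t, T, alpha=3.0):
    """Collaboration weight w_t: near 1 early (rely on the consensus design),
    decaying toward 0 late (focus on the client's own objective).
    The exponential decay is an illustrative choice, not the paper's schedule."""
    return math.exp(-alpha * t / T)

def consensus_bo(objectives, T=30, n_cand=50, seed=0):
    """Toy collaborative optimization of several 1D objectives on [0, 1].

    Each round, every client proposes a design; the clients then "agree" on a
    blend of the average proposal (consensus) and their own proposal.
    """
    rng = random.Random(seed)
    m = len(objectives)
    best = [(None, -math.inf)] * m  # incumbent (x, f(x)) per client
    for t in range(T):
        # Each client proposes a design. As a stand-in for a GP-based
        # acquisition function, we take the best of a few random candidates
        # near the incumbent, scored with the true objective (a real
        # implementation would score with a surrogate instead).
        proposals = []
        for i, f in enumerate(objectives):
            x0 = best[i][0] if best[i][0] is not None else rng.random()
            cands = [min(1.0, max(0.0, x0 + rng.gauss(0, 0.2)))
                     for _ in range(n_cand)]
            proposals.append(max(cands, key=f))
        # Consensus step: blend the group average with the client's own
        # proposal, weighted by the transitional schedule.
        w = transitional_weight(t, T)
        avg = sum(proposals) / m
        for i, f in enumerate(objectives):
            x = w * avg + (1 - w) * proposals[i]
            y = f(x)
            if y > best[i][1]:
                best[i] = (x, y)
    return best
```

With two clients whose (hypothetical) objectives peak at 0.3 and 0.7, both clients sample near the group average early on and then drift toward their own optima as the collaboration weight decays.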
Related papers
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z)
- Sample-Efficient Co-Design of Robotic Agents Using Multi-fidelity Training on Universal Policy Network [12.283890343327233]
We propose a multi-fidelity-based design exploration strategy based on Hyperband.
We tie the controllers learnt across the design spaces through a universal learner policy for warm-starting the subsequent controller learning problems.
Experiments performed on a wide range of agent design problems demonstrate the superiority of our method compared to the baselines.
arXiv Detail & Related papers (2023-09-08T02:54:31Z)
- CO-BED: Information-Theoretic Contextual Optimization via Bayesian Experimental Design [31.247108087199095]
CO-BED is a model-agnostic framework for designing contextual experiments using information-theoretic principles.
As a result, CO-BED provides a general and automated solution to a wide range of contextual optimization problems.
arXiv Detail & Related papers (2023-02-27T18:14:13Z)
- Differentiable Multi-Target Causal Bayesian Experimental Design [43.76697029708785]
We introduce a gradient-based approach for the problem of Bayesian optimal experimental design to learn causal models in a batch setting.
Existing methods rely on greedy approximations to construct a batch of experiments.
We propose a conceptually simple end-to-end gradient-based optimization procedure to acquire a set of optimal intervention target-state pairs.
arXiv Detail & Related papers (2023-02-21T11:32:59Z)
- Accelerated Federated Learning with Decoupled Adaptive Optimization [53.230515878096426]
The federated learning (FL) framework enables clients to collaboratively learn a shared model while keeping their training data private on-device.
Recently, many efforts have been made to generalize centralized adaptive optimization methods, such as SGDM, Adam, and AdaGrad, to federated settings.
This work aims to develop novel adaptive optimization methods for FL from the perspective of the dynamics of ordinary differential equations (ODEs).
arXiv Detail & Related papers (2022-07-14T22:46:43Z)
- Investigating Positive and Negative Qualities of Human-in-the-Loop Optimization for Designing Interaction Techniques [55.492211642128446]
Designers reportedly struggle with design optimization tasks where they are asked to find a combination of design parameters that maximizes a given set of objectives.
Model-based computational design algorithms assist designers by generating design examples during design.
Black box methods for assistance, on the other hand, can work with any design problem.
arXiv Detail & Related papers (2022-04-15T20:40:43Z)
- Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named optimizer amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer with stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
To reduce variance, we also explore methods to stabilize the amalgamation process by perturbing the target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z)
- Incorporating Expert Prior Knowledge into Experimental Design via Posterior Sampling [58.56638141701966]
Experimenters often have prior knowledge about the location of the global optimum.
However, it has been unclear how to incorporate such expert prior knowledge into Bayesian optimization.
An efficient Bayesian optimization approach is proposed via sampling from the posterior distribution of the global optimum.
arXiv Detail & Related papers (2020-02-26T01:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.