Bayesian Optimization Augmented with Actively Elicited Expert Knowledge
- URL: http://arxiv.org/abs/2208.08742v1
- Date: Thu, 18 Aug 2022 09:49:21 GMT
- Title: Bayesian Optimization Augmented with Actively Elicited Expert Knowledge
- Authors: Daolang Huang, Louis Filstroff, Petrus Mikkola, Runkai Zheng, Samuel Kaski
- Abstract summary: We tackle the problem of incorporating expert knowledge into BO, with the goal of further accelerating the optimization.
We design a multi-task learning architecture for this task, with the goal of jointly eliciting the expert knowledge and minimizing the objective function.
Experiments on various benchmark functions with both simulated and actual human experts show that the proposed method significantly speeds up BO even when the expert knowledge is biased.
- Score: 13.551210295284733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimization (BO) is a well-established method to optimize black-box
functions whose direct evaluations are costly. In this paper, we tackle the
problem of incorporating expert knowledge into BO, with the goal of further
accelerating the optimization, which has received very little attention so far.
We design a multi-task learning architecture for this task, with the goal of
jointly eliciting the expert knowledge and minimizing the objective function.
In particular, this allows for the expert knowledge to be transferred into the
BO task. We introduce a specific architecture based on Siamese neural networks
to handle the knowledge elicitation from pairwise queries. Experiments on
various benchmark functions with both simulated and actual human experts show
that the proposed method significantly speeds up BO even when the expert
knowledge is biased compared to the objective function.
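The abstract's core idea, eliciting expert knowledge from pairwise queries with a Siamese architecture (shared weights score both candidates), can be illustrated with a minimal, stdlib-only sketch. The tiny quadratic feature model, the Bradley-Terry/logistic likelihood, and the toy objective below are illustrative assumptions, not the paper's actual neural architecture or training setup.

```python
import math
import random

def score(w, x):
    """Shared 'tower': the same weights score both items of a pair."""
    return w[0] * x + w[1] * x * x

def pref_prob(w, x_a, x_b):
    """P(expert prefers x_a over x_b); lower score means 'better'."""
    z = score(w, x_a) - score(w, x_b)
    z = max(-30.0, min(30.0, z))          # guard against exp overflow
    return 1.0 / (1.0 + math.exp(z))

def fit(queries, lr=0.01, epochs=500):
    """Maximize the pairwise log-likelihood by stochastic gradient ascent."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x_a, x_b, a_preferred in queries:
            p = pref_prob(w, x_a, x_b)
            g = (1.0 if a_preferred else 0.0) - p   # d log-lik / d logit
            # logit = score(x_b) - score(x_a); chain rule per weight:
            w[0] += lr * g * (x_b - x_a)
            w[1] += lr * g * (x_b * x_b - x_a * x_a)
    return w

# Simulate an expert who (noiselessly) prefers points with lower f(x).
random.seed(0)
f = lambda x: (x - 2.0) ** 2
pairs = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(200)]
queries = [(a, b, f(a) < f(b)) for a, b in pairs]
w = fit(queries)
# The learned score should rank points near the optimum x = 2 as best.
best = min((i * 0.1 for i in range(-50, 51)), key=lambda x: score(w, x))
```

In a full BO loop, the learned preference model would then be transferred into the surrogate (the paper's multi-task setup); here it only demonstrates that the expert's ranking is recoverable from binary comparisons alone.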
Related papers
- KBAlign: Efficient Self Adaptation on Specific Knowledge Bases [75.78948575957081]
Large language models (LLMs) usually rely on retrieval-augmented generation to exploit knowledge materials on the fly.
We propose KBAlign, an approach designed for efficient adaptation to downstream tasks involving knowledge bases.
Our method utilizes iterative training with self-annotated data such as Q&A pairs and revision suggestions, enabling the model to grasp the knowledge content efficiently.
arXiv Detail & Related papers (2024-11-22T08:21:03Z)
- TeamLoRA: Boosting Low-Rank Adaptation with Expert Collaboration and Competition [61.91764883512776]
We introduce an innovative PEFT method, TeamLoRA, consisting of a collaboration and competition module for experts.
By doing so, TeamLoRA connects the experts as a "Team" with internal collaboration and competition, enabling a faster and more accurate PEFT paradigm for multi-task learning.
arXiv Detail & Related papers (2024-08-19T09:58:53Z)
- Diversifying the Expert Knowledge for Task-Agnostic Pruning in Sparse Mixture-of-Experts [75.85448576746373]
We propose a method of grouping and pruning similar experts to improve the model's parameter efficiency.
We validate the effectiveness of our method by pruning three state-of-the-art MoE architectures.
The evaluation shows that our method outperforms other model pruning methods on a range of natural language tasks.
arXiv Detail & Related papers (2024-07-12T17:25:02Z)
- Human-Algorithm Collaborative Bayesian Optimization for Engineering Systems [0.0]
We re-introduce the human back into the data-driven decision making loop by outlining an approach for collaborative Bayesian optimization.
Our methodology exploits the hypothesis that humans are more efficient at making discrete choices than continuous ones.
We demonstrate our approach across a number of applied and numerical case studies including bioprocess optimization and reactor geometry design.
arXiv Detail & Related papers (2024-04-16T23:17:04Z)
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z)
- Domain Knowledge Injection in Bayesian Search for New Materials [0.0]
We propose DKIBO, a Bayesian optimization (BO) algorithm that accommodates domain knowledge to tune exploration in the search space.
We empirically demonstrate the practical utility of the proposed method by successfully injecting domain knowledge in a materials design task.
arXiv Detail & Related papers (2023-11-26T01:55:55Z)
- PromptAgent: Strategic Planning with Language Models Enables Expert-level Prompt Optimization [60.00631098364391]
PromptAgent is an optimization method that crafts expert-level prompts equivalent in quality to those handcrafted by experts.
Inspired by human-like trial-and-error exploration, PromptAgent induces precise expert-level insights and in-depth instructions.
We apply PromptAgent to 12 tasks spanning three practical domains.
arXiv Detail & Related papers (2023-10-25T07:47:01Z)
- Incorporating Expert Prior Knowledge into Experimental Design via Posterior Sampling [58.56638141701966]
Experimenters can often acquire knowledge about the location of the global optimum.
However, it has been unclear how to incorporate such expert prior knowledge about the global optimum into Bayesian optimization.
An efficient Bayesian optimization approach is proposed via posterior sampling on the posterior distribution of the global optimum.
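The posterior-sampling idea in the entry above can be sketched in miniature: seed a belief about each candidate's value with the expert's (possibly biased) guess, then repeatedly sample values from the posterior and evaluate the sampled argmin. This stdlib-only Thompson-sampling toy with independent Gaussian "arms" is an illustrative stand-in, not the cited paper's actual GP-based algorithm; all names and parameters below are assumptions.

```python
import math
import random

def thompson_bo(f, candidates, prior_means, prior_var=4.0,
                noise_var=0.25, budget=30, rng=None):
    """Minimize f over a discrete set by posterior (Thompson) sampling."""
    rng = rng or random.Random(0)
    # Per-candidate Gaussian posterior, initialized from the expert prior.
    mean = dict(zip(candidates, prior_means))
    var = {x: prior_var for x in candidates}
    best_x, best_y = None, float("inf")
    for _ in range(budget):
        # Sample a value for every arm from its posterior; query the argmin.
        draw = {x: rng.gauss(mean[x], math.sqrt(var[x])) for x in candidates}
        x = min(draw, key=draw.get)
        y = f(x) + rng.gauss(0.0, math.sqrt(noise_var))
        # Conjugate normal-normal update of this arm's posterior.
        prec = 1.0 / var[x] + 1.0 / noise_var
        mean[x] = (mean[x] / var[x] + y / noise_var) / prec
        var[x] = 1.0 / prec
        if y < best_y:
            best_x, best_y = x, y
    return best_x

# Toy run: minimize (x - 2)^2 over a grid; the biased "expert" believes
# the minimum lies near x = 1, which still steers search to the right region.
grid = [i * 0.5 for i in range(-10, 11)]
prior = [(x - 1.0) ** 2 for x in grid]   # expert's guessed value landscape
found = thompson_bo(lambda x: (x - 2.0) ** 2, grid, prior)
```

The informative prior concentrates early queries near the expert's guess, while the posterior updates let the data override a biased belief, the same qualitative behavior the papers above report.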
arXiv Detail & Related papers (2020-02-26T01:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.