Fair and Green Hyperparameter Optimization via Multi-objective and
Multiple Information Source Bayesian Optimization
- URL: http://arxiv.org/abs/2205.08835v1
- Date: Wed, 18 May 2022 10:07:21 GMT
- Title: Fair and Green Hyperparameter Optimization via Multi-objective and
Multiple Information Source Bayesian Optimization
- Authors: Antonio Candelieri, Andrea Ponti, Francesco Archetti
- Abstract summary: FanG-HPO uses subsets of the large dataset (aka information sources) to obtain cheap approximations of both accuracy and fairness.
Experiments consider two benchmark (fairness) datasets and two machine learning algorithms.
- Score: 0.19116784879310028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a consensus that focusing only on accuracy in searching for optimal
machine learning models amplifies biases contained in the data, leading to
unfair predictions and decision supports. Recently, multi-objective
hyperparameter optimization has been proposed to search for machine learning
models which offer equally Pareto-efficient trade-offs between accuracy and
fairness. Although these approaches proved to be more versatile than
fairness-aware machine learning algorithms -- which optimize accuracy
constrained to some threshold on fairness -- they could drastically increase
the energy consumption in the case of large datasets. In this paper we propose
FanG-HPO, a Fair and Green Hyperparameter Optimization (HPO) approach based on
both multi-objective and multiple information source Bayesian optimization.
FanG-HPO uses subsets of the large dataset (aka information sources) to obtain
cheap approximations of both accuracy and fairness, and multi-objective
Bayesian Optimization to efficiently identify Pareto-efficient machine learning
models. Experiments consider two benchmark (fairness) datasets and two machine
learning algorithms (XGBoost and Multi-Layer Perceptron), and provide an
assessment of FanG-HPO against both fairness-aware machine learning algorithms
and hyperparameter optimization via a multi-objective single-source
optimization algorithm in BoTorch, a state-of-the-art platform for Bayesian
Optimization.
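The Pareto-efficiency filtering at the heart of multi-objective HPO approaches like FanG-HPO can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation (which builds Gaussian-process models over multiple information sources); it only shows how a set of (error, unfairness) evaluations is reduced to its Pareto-efficient subset, assuming both objectives are minimized:

```python
import numpy as np

def pareto_efficient(points):
    """Return a boolean mask of Pareto-efficient rows.

    Each row is an objective vector to be minimized, e.g.
    (misclassification error, unfairness measure).
    A point is efficient if no other point is at least as good
    in every objective and strictly better in at least one.
    """
    points = np.asarray(points, dtype=float)
    n = points.shape[0]
    efficient = np.ones(n, dtype=bool)
    for i in range(n):
        # A point dominates i if it is <= everywhere and < somewhere.
        dominates_i = (np.all(points <= points[i], axis=1)
                       & np.any(points < points[i], axis=1))
        if np.any(dominates_i):
            efficient[i] = False
    return efficient

# Example: five hyperparameter configurations evaluated as
# (error, unfairness); the fourth is dominated by the second.
mask = pareto_efficient([[0.1, 0.5], [0.2, 0.4], [0.3, 0.3],
                         [0.2, 0.6], [0.1, 0.5]])
```

In a full optimization loop, an acquisition function (e.g. hypervolume improvement) would then decide which configuration to evaluate next, and which information source (full dataset or a cheap subset) to query.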
Related papers
- End-to-End Learning for Fair Multiobjective Optimization Under
Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
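The OWA objective mentioned above is simple to state, even though its nondifferentiability is what makes end-to-end integration hard: sort the outcome vector, then take a weighted sum. A minimal sketch follows; the decreasing-weights convention for emphasizing the worst (largest) outcomes is a common fairness choice, not necessarily the paper's exact setup:

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: sort values in decreasing order,
    then take the weighted sum with position-dependent weights.
    With decreasing weights, the worst (largest) outcomes receive
    the most weight, yielding a fairness-oriented aggregation.
    The sorting step is what makes the function nondifferentiable."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    w = np.asarray(weights, dtype=float)
    return float(v @ w)

# Example: three group-wise costs aggregated with decreasing weights.
score = owa([1.0, 3.0, 2.0], [0.5, 0.3, 0.2])  # 0.5*3 + 0.3*2 + 0.2*1
```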
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Interactive Hyperparameter Optimization in Multi-Objective Problems via
Preference Learning [65.51668094117802]
We propose a human-centered interactive HPO approach tailored towards multi-objective machine learning (ML).
Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator.
arXiv Detail & Related papers (2023-09-07T09:22:05Z)
- HyperTuner: A Cross-Layer Multi-Objective Hyperparameter Auto-Tuning
Framework for Data Analytic Services [25.889791254011794]
We propose HyperTuner to execute cross-layer multi-objective hyperparameter auto-tuning.
We show that HyperTuner is superior in both convergence and diversity compared with the other four baseline algorithms.
Experiments with different training datasets, optimization objectives, and machine learning platforms verify that HyperTuner adapts well to various data analytic service scenarios.
arXiv Detail & Related papers (2023-04-20T02:19:10Z)
- Multi-objective hyperparameter optimization with performance uncertainty [62.997667081978825]
This paper presents results on multi-objective hyperparameter optimization with uncertainty on the evaluation of Machine Learning algorithms.
We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained after training a Gaussian Process Regression (GPR) with heterogeneous noise.
Experimental results on three analytical test functions and three ML problems show the improvement over multi-objective TPE and GPR.
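A GP metamodel with heterogeneous (per-observation) noise, as combined with TPE above, amounts to adding a point-specific noise variance to the kernel diagonal before conditioning. A minimal sketch with an assumed RBF kernel; the fixed lengthscale and signal variance are illustrative defaults, not values from the paper:

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    """Squared-exponential kernel between row-vector inputs."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return var * np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, noise_var, Xs, ls=1.0, var=1.0):
    """GP posterior mean and covariance at test points Xs, with a
    per-observation (heteroscedastic) noise variance vector added
    to the diagonal of the training kernel matrix."""
    K = rbf(X, X, ls, var) + np.diag(noise_var)
    Ks = rbf(X, Xs, ls, var)
    Kss = rbf(Xs, Xs, ls, var)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha                  # posterior mean
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v               # posterior covariance
    return mu, cov
```

With near-zero noise the posterior mean interpolates the observations; larger per-point noise lets the model discount noisier hyperparameter evaluations individually.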
arXiv Detail & Related papers (2022-09-09T14:58:43Z)
- Enhancing Explainability of Hyperparameter Optimization via Bayesian
Algorithm Execution [13.037647287689438]
We study the combination of HPO with interpretable machine learning (IML) methods such as partial dependence plots.
We propose a modified HPO method which efficiently searches for optimum global predictive performance.
Our method returns more reliable explanations of the underlying black-box without a loss of optimization performance.
arXiv Detail & Related papers (2022-06-11T07:12:04Z)
- Towards Learning Universal Hyperparameter Optimizers with Transformers [57.35920571605559]
We introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction.
Our experiments demonstrate that the OptFormer can imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates.
arXiv Detail & Related papers (2022-05-26T12:51:32Z)
- A survey on multi-objective hyperparameter optimization algorithms for
Machine Learning [62.997667081978825]
This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms.
We distinguish between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both.
We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
arXiv Detail & Related papers (2021-11-23T10:22:30Z)
- Multi-Fidelity Multi-Objective Bayesian Optimization: An Output Space
Entropy Search Approach [44.25245545568633]
We study the novel problem of blackbox optimization of multiple objectives via multi-fidelity function evaluations.
Our experiments on several synthetic and real-world benchmark problems show that MF-OSEMO, with both approximations, significantly improves over the state-of-the-art single-fidelity algorithms.
arXiv Detail & Related papers (2020-11-02T06:59:04Z)
- Bayesian Optimization for Selecting Efficient Machine Learning Models [53.202224677485525]
We present a unified Bayesian Optimization framework for jointly optimizing models for both prediction effectiveness and training efficiency.
Experiments on model selection for recommendation tasks indicate models selected this way significantly improves model training efficiency.
arXiv Detail & Related papers (2020-08-02T02:56:30Z)
- Resource Aware Multifidelity Active Learning for Efficient Optimization [0.8717253904965373]
This paper introduces the Resource Aware Active Learning (RAAL) strategy to accelerate the optimization of black box functions.
The RAAL strategy optimally seeds multiple points at each iteration, allowing for a major speed-up of the optimization task.
arXiv Detail & Related papers (2020-07-09T10:01:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.