Towards Unsupervised HPO for Outlier Detection
- URL: http://arxiv.org/abs/2208.11727v1
- Date: Wed, 24 Aug 2022 18:11:22 GMT
- Title: Towards Unsupervised HPO for Outlier Detection
- Authors: Yue Zhao, Leman Akoglu
- Abstract summary: We propose the first systematic approach called HPOD that is based on meta-learning.
HPOD capitalizes on the prior performance of a large collection of HPs on existing OD benchmark datasets.
It adapts (originally supervised) sequential model-based optimization to identify promising HPs efficiently.
- Score: 23.77292404327994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given an unsupervised outlier detection (OD) algorithm, how can we optimize
its hyperparameter(s) (HP) on a new dataset, without any labels? In this work,
we address this challenging hyperparameter optimization for unsupervised OD
problem, and propose the first systematic approach called HPOD that is based on
meta-learning. HPOD capitalizes on the prior performance of a large collection
of HPs on existing OD benchmark datasets, and transfers this information to
enable HP evaluation on a new dataset without labels. Moreover, HPOD adapts
(originally supervised) sequential model-based optimization to identify
promising HPs efficiently. Extensive experiments show that HPOD works with both
deep (e.g., Robust AutoEncoder) and shallow (e.g., Local Outlier Factor (LOF)
and Isolation Forest (iForest)) OD algorithms on both discrete and continuous
HP spaces, and outperforms a wide range of baselines, with average performance
improvements of 58% and 66% over the default HPs of LOF and iForest, respectively.
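To make the meta-learning idea concrete, the sketch below (Python, scikit-learn/NumPy) shows one plausible way to rank candidate HPs on a new, unlabeled dataset using a proxy regressor fit on historical (dataset meta-features, HP, performance) records from labeled benchmarks. All names and the choice of regressor are illustrative assumptions; this is not the authors' implementation of HPOD.

```python
# Illustrative sketch only: a meta-learned proxy evaluator for HPs, NOT the HPOD code.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rank_hps_by_meta_proxy(history, new_meta_features, candidate_hps):
    """history: list of (dataset_meta_features, hp_vector, measured_performance) tuples
    collected on labeled benchmark datasets."""
    X = np.array([np.concatenate([m, h]) for m, h, _ in history])
    y = np.array([score for _, _, score in history])
    proxy = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # Score every candidate HP for the new dataset via the proxy (no labels needed).
    Xq = np.array([np.concatenate([new_meta_features, h]) for h in candidate_hps])
    predicted = proxy.predict(Xq)
    order = np.argsort(predicted)[::-1]  # best predicted HPs first
    return [candidate_hps[i] for i in order], predicted[order]
```

On top of such label-free HP evaluations, the abstract notes that HPOD adapts sequential model-based optimization, so the HP space is explored iteratively rather than by scoring a fixed candidate set as in this sketch.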
Related papers
- Hierarchical Preference Optimization: Learning to achieve goals via feasible subgoals prediction [71.81851971324187]
This work introduces Hierarchical Preference Optimization (HPO), a novel approach to hierarchical reinforcement learning (HRL).
HPO addresses non-stationarity and infeasible subgoal generation issues when solving complex robotic control tasks.
Experiments on challenging robotic navigation and manipulation tasks demonstrate the strong performance of HPO, which improves over the baselines by up to 35%.
arXiv Detail & Related papers (2024-11-01T04:58:40Z)
- Fast Unsupervised Deep Outlier Model Selection with Hypernetworks [32.15262629879272]
We introduce HYPER for tuning DOD models, tackling two fundamental challenges: validation without supervision, and efficient search of the HP/model space.
A key idea is to design and train a novel hypernetwork (HN) that maps HPs onto optimal weights of the main DOD model.
In turn, HYPER capitalizes on a single HN that can dynamically generate weights for many DOD models.
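As a rough illustration of the hypernetwork idea, the PyTorch snippet below maps an HP vector onto the weights of a tiny reconstruction-based detector, so many HP settings can be scored from one trained hypernetwork. The shapes, layer sizes, and scoring rule are assumptions made for illustration, not the HYPER architecture.

```python
# Illustrative sketch of a hypernetwork generating detector weights from an HP vector.
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    def __init__(self, hp_dim, in_dim, hidden_dim):
        super().__init__()
        # shapes of the detector weights the hypernetwork must generate (encoder, decoder)
        self.out_shapes = [(hidden_dim, in_dim), (in_dim, hidden_dim)]
        n_out = sum(r * c for r, c in self.out_shapes)
        self.net = nn.Sequential(nn.Linear(hp_dim, 64), nn.ReLU(), nn.Linear(64, n_out))

    def forward(self, hp_vec):
        # hp_vec: 1-D tensor of size hp_dim encoding one candidate HP setting
        flat = self.net(hp_vec)
        weights, i = [], 0
        for r, c in self.out_shapes:
            weights.append(flat[i:i + r * c].view(r, c))
            i += r * c
        return weights  # generated weights for this HP setting

def reconstruction_scores(x, weights):
    w_enc, w_dec = weights
    recon = torch.relu(x @ w_enc.t()) @ w_dec.t()  # encode then decode with generated weights
    return ((x - recon) ** 2).mean(dim=1)          # higher error = more outlying
```

In this sketch, changing hp_vec changes the generated detector weights without retraining anything, which is the efficiency argument behind sharing a single hypernetwork across many candidate models.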
arXiv Detail & Related papers (2023-07-20T02:07:20Z)
- PriorBand: Practical Hyperparameter Optimization in the Age of Deep Learning [49.92394599459274]
We propose PriorBand, an HPO algorithm tailored to Deep Learning (DL) pipelines.
We show its robustness across a range of DL benchmarks, its gains under informative expert input, and its resilience against poor expert beliefs.
arXiv Detail & Related papers (2023-06-21T16:26:14Z)
- Does Deep Active Learning Work in the Wild? [9.722499619824442]
Deep active learning (DAL) methods have shown significant improvements in sample efficiency compared to simple random sampling.
Here, we argue that in real-world settings, or in the wild, there is significant uncertainty regarding good HPs.
We evaluate the performance of eleven modern DAL methods on eight benchmark problems.
arXiv Detail & Related papers (2023-01-31T20:58:08Z)
- Hyperparameter Sensitivity in Deep Outlier Detection: Analysis and a Scalable Hyper-Ensemble Solution [21.130842136324528]
We conduct the first large-scale analysis on the HP sensitivity of deep OD methods.
We design an HP-robust and scalable deep hyper-ensemble model called ROBOD that assembles models with varying HP configurations.
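The hyper-ensemble idea can be illustrated with a short scikit-learn sketch that averages normalized outlier scores from Isolation Forests trained under different HP settings; the HP grid and normalization below are illustrative choices, not the ROBOD design.

```python
# Illustrative hyper-ensemble sketch: average scores over several HP configurations.
import numpy as np
from sklearn.ensemble import IsolationForest

def hyper_ensemble_scores(X, hp_grid=None):
    if hp_grid is None:
        # arbitrary illustrative grid of HP configurations
        hp_grid = [{"n_estimators": n, "max_samples": s}
                   for n in (50, 100, 200) for s in (0.5, 0.8)]
    all_scores = []
    for hp in hp_grid:
        model = IsolationForest(random_state=0, **hp).fit(X)
        s = -model.score_samples(X)                      # higher = more outlying
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalize before combining
        all_scores.append(s)
    return np.mean(all_scores, axis=0)
```

Averaging over HP configurations trades the cost of tuning a single detector for robustness against any one poorly chosen setting.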
arXiv Detail & Related papers (2022-06-15T16:46:00Z)
- FedHPO-B: A Benchmark Suite for Federated Hyperparameter Optimization [50.12374973760274]
We propose and implement a benchmark suite FedHPO-B that incorporates comprehensive FL tasks, enables efficient function evaluations, and eases continuing extensions.
We also conduct extensive experiments based on FedHPO-B to benchmark a few HPO methods.
arXiv Detail & Related papers (2022-06-08T15:29:10Z)
- RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching for an input that maximizes a black-box objective function, given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
arXiv Detail & Related papers (2021-10-27T05:37:12Z)
- Hyperparameter Optimization: Foundations, Algorithms, Best Practices and Open Challenges [5.139260825952818]
This paper reviews important HPO methods such as grid or random search, evolutionary algorithms, Bayesian optimization, Hyperband and racing.
It gives practical recommendations regarding important choices to be made when conducting HPO, including the HPO algorithms themselves, performance evaluation, how to combine HPO with ML pipelines, runtime improvements, and parallelization.
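For reference, the methods the survey covers share the same evaluate-and-keep-best core; the sketch below shows it as plain random search with a labeled validation set (the supervised setting the survey assumes). The model and hyperparameter ranges are arbitrary illustrative choices; grid search, Bayesian optimization, and Hyperband differ in how candidates are proposed and how budgets are allocated.

```python
# Illustrative random-search loop: the simplest HPO baseline in the supervised setting.
import random
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def random_search(X_tr, y_tr, X_val, y_val, n_trials=20, seed=0):
    rng = random.Random(seed)
    best_hp, best_score = None, -1.0
    for _ in range(n_trials):
        hp = {"n_estimators": rng.choice([50, 100, 200]),
              "max_depth": rng.choice([3, 5, 10, None])}
        model = RandomForestClassifier(random_state=0, **hp).fit(X_tr, y_tr)
        score = accuracy_score(y_val, model.predict(X_val))
        if score > best_score:           # keep the best configuration seen so far
            best_hp, best_score = hp, score
    return best_hp, best_score
```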
arXiv Detail & Related papers (2021-07-13T04:55:47Z)
- Cost-Efficient Online Hyperparameter Optimization [94.60924644778558]
We propose an online HPO algorithm that reaches human expert-level performance within a single run of the experiment, while incurring only modest computational overhead compared to regular training.
arXiv Detail & Related papers (2021-01-17T04:55:30Z)
- Practical and sample efficient zero-shot HPO [8.41866793161234]
We provide an overview of available approaches and introduce two novel techniques to handle the problem.
The first is based on a surrogate model and adaptively chooses (dataset, configuration) pairs to query.
The second, intended for settings where finding, tuning, and testing a surrogate model is problematic, is a multi-fidelity technique combining HyperBand with submodular optimization.
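One common way to realize the zero-shot portfolio idea, sketched below in NumPy, is to greedily select a small set of configurations from a matrix of historical performances so that the per-dataset best score keeps improving. The greedy step reflects the submodular-coverage flavor mentioned above, but it is an assumption-laden illustration, not the paper's exact procedure.

```python
# Illustrative greedy selection of a zero-shot configuration portfolio.
import numpy as np

def greedy_zero_shot_portfolio(perf, k=5):
    """perf: (n_datasets, n_configs) array of past scores; returns k config indices."""
    chosen, covered = [], np.zeros(perf.shape[0])
    for _ in range(k):
        # marginal gain of adding each config: improvement in the per-dataset best score
        gains = np.maximum(perf, covered[:, None]).sum(axis=0) - covered.sum()
        best = int(np.argmax(gains))
        chosen.append(best)
        covered = np.maximum(covered, perf[:, best])
    return chosen
```

The returned configurations can then be run as-is on a new dataset, with no tuning at test time.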
arXiv Detail & Related papers (2020-07-27T08:56:55Z)
- HyperSTAR: Task-Aware Hyperparameters for Deep Networks [52.50861379908611]
HyperSTAR is a task-aware method to warm-start HPO for deep neural networks.
It learns a dataset (task) representation along with the performance predictor directly from raw images.
It evaluates 50% fewer configurations than existing methods to achieve the best performance.
arXiv Detail & Related papers (2020-05-21T08:56:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.