Optimised one-class classification performance
- URL: http://arxiv.org/abs/2102.02618v1
- Date: Thu, 4 Feb 2021 14:08:20 GMT
- Title: Optimised one-class classification performance
- Authors: Oliver Urs Lenz, Daniel Peralta, Chris Cornelis
- Abstract summary: We treat the hyperparameter optimisation of three data descriptors: Support Vector Machine (SVM), Nearest Neighbour Distance (NND) and Average Localised Proximity (ALP).
We experimentally evaluate the effect of hyperparameter optimisation with 246 classification problems drawn from 50 datasets.
- Score: 4.894976692426517
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We provide a thorough treatment of hyperparameter optimisation for three data
descriptors with a good track record in the literature: Support Vector Machine
(SVM), Nearest Neighbour Distance (NND) and Average Localised Proximity (ALP).
The hyperparameters of SVM have to be optimised through cross-validation, while
NND and ALP allow the reuse of a single nearest-neighbour query and an
efficient form of leave-one-out validation. We experimentally evaluate the
effect of hyperparameter optimisation with 246 classification problems drawn
from 50 datasets. From a selection of optimisation algorithms, the recent
Malherbe-Powell proposal optimises the hyperparameters of all three data
descriptors most efficiently. We calculate the increase in test AUROC and the
amount of overfitting as a function of the number of hyperparameter
evaluations. After 50 evaluations, ALP and SVM both significantly outperform
NND. The performance of ALP and SVM is comparable, but ALP can be optimised
more efficiently, while a choice between ALP and SVM based on validation AUROC
gives the best overall result. This distils the many variables of one-class
classification with hyperparameter optimisation down to a clear choice with a
known trade-off, allowing practitioners to make informed decisions.
Related papers
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z) - BO4IO: A Bayesian optimization approach to inverse optimization with uncertainty quantification [5.031974232392534]
This work addresses data-driven inverse optimization (IO).
The goal is to estimate unknown parameters in an optimization model from observed decisions that can be assumed to be optimal or near-optimal.
arXiv Detail & Related papers (2024-05-28T06:52:17Z) - Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z) - Interactive Hyperparameter Optimization in Multi-Objective Problems via
Preference Learning [65.51668094117802]
We propose a human-centered interactive HPO approach tailored towards multi-objective machine learning (ML).
Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator.
arXiv Detail & Related papers (2023-09-07T09:22:05Z) - Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediary distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
arXiv Detail & Related papers (2022-09-27T07:58:25Z) - Multi-objective hyperparameter optimization with performance uncertainty [62.997667081978825]
This paper presents results on multi-objective hyperparameter optimization with uncertainty on the evaluation of Machine Learning algorithms.
We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained after training a Gaussian Process Regression (GPR) with heterogeneous noise.
Experimental results on three analytical test functions and three ML problems show the improvement over multi-objective TPE and GPR.
arXiv Detail & Related papers (2022-09-09T14:58:43Z) - AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient
Hyper-parameter Tuning [72.54359545547904]
We propose a gradient-based subset selection framework for hyperparameter tuning.
We show that using gradient-based data subsets for hyperparameter tuning achieves significantly faster turnaround times and speedups of 3×-30×.
arXiv Detail & Related papers (2022-03-15T19:25:01Z) - Robust Multi-Objective Bayesian Optimization Under Input Noise [27.603887040015888]
In many manufacturing processes, the design parameters are subject to random input noise, resulting in a product that is often less performant than expected.
In this work, we propose the first multi-objective BO method that is robust to input noise.
arXiv Detail & Related papers (2022-02-15T16:33:48Z) - Average Localised Proximity: a new data descriptor with good default
one-class classification performance [4.894976692426517]
One-class classification is a challenging subfield of machine learning.
Data descriptors are used to predict membership of a class based solely on positive examples of that class.
arXiv Detail & Related papers (2021-01-26T19:14:14Z) - Multi-Objective Hyperparameter Tuning and Feature Selection using Filter
Ensembles [0.8029049649310213]
We treat feature selection as a multi-objective optimization task.
The first approach uses multi-objective model-based optimization.
The second is an evolutionary wrapper approach to feature selection based on NSGA-II.
arXiv Detail & Related papers (2019-12-30T13:04:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.