New Metrics for Assessing Projection Pursuit Indexes, and Guiding Optimisation Choices
- URL: http://arxiv.org/abs/2407.13663v2
- Date: Mon, 14 Oct 2024 02:22:51 GMT
- Title: New Metrics for Assessing Projection Pursuit Indexes, and Guiding Optimisation Choices
- Authors: H. Sherry Zhang, Dianne Cook, Nicolas Langrené, Jessica Wai Yin Leung
- Abstract summary: The projection pursuit (PP) guided tour interactively optimises a criterion function known as the PP index, to explore high-dimensional data by revealing interesting projections.
Optimisation of some PP indexes can be non-trivial if they are non-smooth functions, or if the optimum has a small "squint angle", detectable only from close proximity.
This study investigates the performance of a recently introduced swarm-based algorithm, Jellyfish Search Optimiser (JSO), for optimising PP indexes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The projection pursuit (PP) guided tour interactively optimises a criterion function known as the PP index, to explore high-dimensional data by revealing interesting projections. Optimisation of some PP indexes can be non-trivial, if they are non-smooth functions, or the optimum has a small "squint angle", detectable only from close proximity. To address these challenges, this study investigates the performance of a recently introduced swarm-based algorithm, Jellyfish Search Optimiser (JSO), for optimising PP indexes. The performance of JSO for visualising data is evaluated across various hyper-parameter settings and compared with existing optimisers. Additionally, methods for calculating the smoothness and squintability properties of the PP index are proposed. They are used to assess the optimiser performance in the presence of PP index complexities. A simulation study illustrates the use of these performance metrics to compare the JSO with existing optimisation methods available for the guided tour. The JSO algorithm has been implemented in the R package, `tourr`, and functions to calculate smoothness and squintability are available in the `ferrn` package.
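The abstract names the Jellyfish Search Optimiser (JSO) without detailing its update rules. As a rough illustration only, the sketch below implements the standard JSO scheme (a time-control function switching between ocean-current motion and passive/active in-swarm motions) on a toy objective; the constants `beta = 3` and `gamma = 0.1` and the greedy replacement step follow the original JSO formulation, and this is not the `tourr` implementation.

```python
# Minimal sketch of the Jellyfish Search Optimiser (JSO) on a toy objective.
# Illustrative only: not the tourr implementation evaluated in the paper.
import numpy as np

def jellyfish_search(f, lb, ub, n_pop=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    X = lb + rng.random((n_pop, dim)) * (ub - lb)   # initial swarm
    fit = np.array([f(x) for x in X])
    beta, gamma = 3.0, 0.1                          # standard JSO constants
    for t in range(1, n_iter + 1):
        best = X[fit.argmin()]
        # time-control function: ocean current vs in-swarm motion
        c = abs((1 - t / n_iter) * (2 * rng.random() - 1))
        for i in range(n_pop):
            if c >= 0.5:
                # follow the ocean current toward the best jellyfish
                trend = best - beta * rng.random() * X.mean(axis=0)
                x_new = X[i] + rng.random(dim) * trend
            elif rng.random() > 1 - c:
                # passive motion: small move around own position
                x_new = X[i] + gamma * rng.random(dim) * (ub - lb)
            else:
                # active motion: move relative to a random jellyfish,
                # toward it if it is fitter, away otherwise
                j = rng.integers(n_pop)
                d = X[j] - X[i] if fit[j] < fit[i] else X[i] - X[j]
                x_new = X[i] + rng.random(dim) * d
            x_new = np.clip(x_new, lb, ub)
            f_new = f(x_new)
            if f_new < fit[i]:                      # greedy replacement
                X[i], fit[i] = x_new, f_new
    return X[fit.argmin()], fit.min()

# toy objective: sphere function, global minimum 0 at the origin
x_best, f_best = jellyfish_search(lambda x: float(np.sum(x ** 2)),
                                  lb=[-5, -5], ub=[5, 5])
```

In the guided-tour setting, `f` would be (the negative of) a PP index evaluated on projected data rather than this toy sphere function.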
Related papers
- PSO and the Traveling Salesman Problem: An Intelligent Optimization Approach [0.0]
The Traveling Salesman Problem (TSP) is an optimization problem that aims to find the shortest possible route that visits each city exactly once and returns to the starting point.
This paper explores the application of Particle Swarm Optimization (PSO), a population-based optimization algorithm, to solve TSP.
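The entry above applies PSO to the discrete TSP; the underlying particle update is easiest to see in its standard continuous form. The sketch below shows that generic update (inertia, cognitive, and social terms) on a toy objective, with textbook weight values that are assumptions, not taken from the paper.

```python
# Generic continuous PSO update on a toy objective; the TSP paper adapts
# this scheme to discrete tours, which is not shown here.
import numpy as np

def pso(f, lb, ub, n_pop=30, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    X = lb + rng.random((n_pop, dim)) * (ub - lb)   # positions
    V = np.zeros((n_pop, dim))                      # velocities
    pbest, pbest_val = X.copy(), np.array([f(x) for x in X])
    g = pbest[pbest_val.argmin()].copy()            # global best
    for _ in range(n_iter):
        r1 = rng.random((n_pop, dim))
        r2 = rng.random((n_pop, dim))
        # inertia + cognitive (own best) + social (swarm best) terms
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lb, ub)
        vals = np.array([f(x) for x in X])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

g, gv = pso(lambda x: float(np.sum(x ** 2)), lb=[-5, -5], ub=[5, 5])
```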
arXiv Detail & Related papers (2025-01-25T20:21:31Z) - Optimizing Posterior Samples for Bayesian Optimization via Rootfinding [2.94944680995069]
We introduce an efficient global optimization strategy for posterior samples based on global rootfinding.
Remarkably, even with just one point from each set, the global optimum is discovered most of the time.
Our approach also improves the performance of other posterior sample-based acquisition functions, such as variants of entropy search.
arXiv Detail & Related papers (2024-10-29T17:57:16Z) - Testing the Efficacy of Hyperparameter Optimization Algorithms in Short-Term Load Forecasting [0.0]
We use the Panama Electricity dataset to evaluate the performance of HPO algorithms on a surrogate forecasting algorithm, XGBoost, in terms of accuracy (i.e., MAPE, $R^2$) and runtime.
Results reveal significant runtime advantages for HPO algorithms over Random Search.
arXiv Detail & Related papers (2024-10-19T09:08:52Z) - Efficient Learning of POMDPs with Known Observation Model in Average-Reward Setting [56.92178753201331]
We propose the Observation-Aware Spectral (OAS) estimation technique, which enables the POMDP parameters to be learned from samples collected using a belief-based policy.
We show the consistency of the OAS procedure, and we prove a regret guarantee of order $\mathcal{O}(\sqrt{T \log(T)})$ for the proposed OAS-UCRL algorithm.
arXiv Detail & Related papers (2024-10-02T08:46:34Z) - Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z) - Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z) - Bayesian multi-objective optimization for stochastic simulators: an extension of the Pareto Active Learning method [0.0]
This article focuses on the multi-objective optimization of simulators with high output variance.
We rely on Bayesian optimization algorithms to make predictions about the functions to be optimized.
arXiv Detail & Related papers (2022-07-08T11:51:48Z) - Shapley-NAS: Discovering Operation Contribution for Neural Architecture Search [96.20505710087392]
We propose a Shapley value based method to evaluate operation contribution (Shapley-NAS) for neural architecture search.
We show that our method outperforms the state-of-the-art methods by a considerable margin with light search cost.
arXiv Detail & Related papers (2022-06-20T14:41:49Z) - Probabilistic Permutation Graph Search: Black-Box Optimization for Fairness in Ranking [53.94413894017409]
We present a novel way of representing permutation distributions, based on the notion of permutation graphs.
Similar to PL, our distribution representation, called PPG, can be used for black-box optimization of fairness.
arXiv Detail & Related papers (2022-04-28T20:38:34Z) - Bayesian Optimization over Permutation Spaces [30.650753803587794]
We propose and evaluate two algorithms for BO over Permutation Spaces (BOPS).
We theoretically analyze the performance of BOPS-T to show that its regret grows sub-linearly.
Our experiments on multiple synthetic and real-world benchmarks show that both BOPS-T and BOPS-H perform better than the state-of-the-art BO algorithm for permutation spaces.
arXiv Detail & Related papers (2021-12-02T08:20:50Z) - Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval [94.73459295405507]
Smooth-AP is a plug-and-play objective function that allows for end-to-end training of deep networks.
We apply Smooth-AP to standard retrieval benchmarks: Stanford Online Products and VehicleID.
We also evaluate on larger-scale datasets: INaturalist for fine-grained category retrieval, VGGFace2 and IJB-C for face retrieval.
arXiv Detail & Related papers (2020-07-23T17:52:03Z)
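The Smooth-AP entry above describes a differentiable objective obtained by replacing the hard ranking indicator in Average Precision with a sigmoid. A minimal numpy sketch of that relaxation is given below; the temperature value `tau` is an illustrative choice, and a training loss would be `1 - smooth_ap(...)` applied to network scores.

```python
# Sigmoid-relaxed Average Precision: as tau -> 0 this approaches exact AP.
# Illustrative sketch, not the paper's released implementation.
import numpy as np

def smooth_ap(scores, labels, tau=0.01):
    # pairwise score differences: D[i, j] = s_j - s_i
    D = scores[None, :] - scores[:, None]
    sg = 1.0 / (1.0 + np.exp(-D / tau))      # smooth version of 1[s_j > s_i]
    np.fill_diagonal(sg, 0.0)                # an item does not outrank itself
    pos = labels.astype(bool)
    rank_all = 1.0 + sg.sum(axis=1)          # smooth rank among all items
    rank_pos = 1.0 + sg[:, pos].sum(axis=1)  # smooth rank among positives only
    return float(np.mean(rank_pos[pos] / rank_all[pos]))

# perfect ranking (positives scored highest) gives AP close to 1
ap = smooth_ap(np.array([3.0, 2.0, 1.0, 0.0]), np.array([1, 1, 0, 0]))
```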
This list is automatically generated from the titles and abstracts of the papers in this site.