HyperNP: Interactive Visual Exploration of Multidimensional Projection
Hyperparameters
- URL: http://arxiv.org/abs/2106.13777v1
- Date: Fri, 25 Jun 2021 17:28:14 GMT
- Title: HyperNP: Interactive Visual Exploration of Multidimensional Projection
Hyperparameters
- Authors: Gabriel Appleby, Mateus Espadoto, Rui Chen, Samuel Goree, Alexandru
Telea, Erik W Anderson, Remco Chang
- Abstract summary: HyperNP is a scalable method that allows for real-time interactive hyperparameter exploration of projection methods by training neural network approximations.
We evaluate HyperNP across three datasets in terms of accuracy and speed.
- Score: 61.354362652006834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Projection algorithms such as t-SNE or UMAP are useful for the visualization
of high dimensional data, but depend on hyperparameters which must be tuned
carefully. Unfortunately, iteratively recomputing projections to find the
optimal hyperparameter value is computationally intensive and unintuitive due
to the stochastic nature of these methods. In this paper we propose HyperNP, a
scalable method that allows for real-time interactive hyperparameter
exploration of projection methods by training neural network approximations.
HyperNP can be trained on a fraction of the total data instances and
hyperparameter configurations and can compute projections for new data and
hyperparameters at interactive speeds. HyperNP is compact in size and fast to
compute, thus allowing it to be embedded in lightweight visualization systems
such as web browsers. We evaluate the performance of the HyperNP across three
datasets in terms of performance and speed. The results suggest that HyperNP is
accurate, scalable, interactive, and appropriate for use in real-world
settings.
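To make the approach concrete, the sketch below shows the general idea from the abstract: a small regression network takes a data instance together with a hyperparameter value and predicts the 2D coordinates the projection algorithm would produce, so new hyperparameter settings can be explored without re-running t-SNE or UMAP. It is a minimal PyTorch sketch with assumed names and architecture (e.g. `ProjectionApproximator`), not the authors' implementation.

```python
# Minimal sketch of the idea described above: learn f(x, h) -> 2D coordinates,
# where h is a projection hyperparameter (e.g. t-SNE perplexity). The class
# name, architecture, and training loop are assumptions, not the paper's code.
import torch
import torch.nn as nn

class ProjectionApproximator(nn.Module):
    def __init__(self, n_features: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # 2D projection coordinates
        )

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # Condition on the hyperparameter by concatenating it to the features
        # (normalizing h to a fixed range would help in practice).
        return self.net(torch.cat([x, h.unsqueeze(-1)], dim=-1))

def train_step(model, opt, x, h, target_2d):
    """One step towards the precomputed projection target_2d (e.g. t-SNE run
    offline on a subset of instances with hyperparameter value h)."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x, h), target_2d)
    loss.backward()
    opt.step()
    return loss.item()

# Usage with random stand-in data; in practice x holds real instances and
# target_2d their t-SNE/UMAP coordinates for a handful of sampled h values.
model = ProjectionApproximator(n_features=50)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 50)
h = torch.full((128,), 30.0)      # e.g. perplexity = 30
target_2d = torch.randn(128, 2)
for _ in range(10):
    train_step(model, opt, x, h, target_2d)
# Sweeping h at inference time re-projects every point in one forward pass,
# which is what makes interactive hyperparameter exploration possible.
```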
Related papers
- Efficient Hyperparameter Importance Assessment for CNNs [1.7778609937758323] (arXiv 2024-10-11)
This paper aims to quantify the importance weights of some hyperparameters in Convolutional Neural Networks (CNNs) with an algorithm called N-RReliefF.
We conduct an extensive study by training over ten thousand CNN models across ten popular image classification datasets.
- AUTOMATA: Gradient Based Data Subset Selection for Compute-Efficient Hyper-parameter Tuning [72.54359545547904] (arXiv 2022-03-15)
We propose a gradient-based subset selection framework for hyper-parameter tuning.
We show that using gradient-based data subsets for hyper-parameter tuning achieves significantly faster turnaround times and speedups of 3×-30×.
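As a rough illustration of gradient-based subset selection (a generic sketch, not the AUTOMATA algorithm itself), one can greedily choose examples whose mean gradient tracks the full-dataset gradient and then run the expensive hyperparameter trials only on that subset; the per-example gradients below are assumed inputs.

```python
# Illustrative gradient-matching subset selection, not the AUTOMATA algorithm
# itself: greedily pick examples whose mean gradient stays close to the
# full-dataset mean gradient, then run hyperparameter trials on the subset.
import numpy as np

def select_subset(per_example_grads: np.ndarray, k: int):
    """per_example_grads: (n, d) array of per-example loss gradients."""
    full_mean = per_example_grads.mean(axis=0)
    chosen, running_sum = [], np.zeros_like(full_mean)
    for _ in range(k):
        best_i, best_err = None, np.inf
        for i in range(len(per_example_grads)):
            if i in chosen:
                continue
            cand_mean = (running_sum + per_example_grads[i]) / (len(chosen) + 1)
            err = np.linalg.norm(cand_mean - full_mean)
            if err < best_err:
                best_i, best_err = i, err
        chosen.append(best_i)
        running_sum += per_example_grads[best_i]
    return chosen

# Toy usage: 1000 examples with 20-dim gradients; keep 5% for cheap HPO trials.
rng = np.random.default_rng(0)
grads = rng.standard_normal((1000, 20))
subset = select_subset(grads, k=50)
print(len(subset), np.linalg.norm(grads[subset].mean(axis=0) - grads.mean(axis=0)))
```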
- Towards Robust and Automatic Hyper-Parameter Tunning [39.04604349338802] (arXiv 2021-11-28)
We introduce a new class of HPO methods and explore how the low-rank factorization of the intermediate layers of a convolutional network can be used to define an analytical response surface.
We quantify how this surface behaves as a surrogate for model performance and how it can be optimized using a trust-region search algorithm, which we call autoHyper.
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095] (arXiv 2021-02-17)
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem, where A denotes the black-box training algorithm.
Then, we use the average zeroth-order hyper-gradients to update hyperparameters.
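A minimal sketch of the zeroth-order idea, assuming a black-box `validation_loss` that trains and evaluates a model for a given hyperparameter vector; it shows averaged finite-difference hyper-gradient estimates in general rather than the exact HOZOG method.

```python
# Generic zeroth-order hyper-gradient estimate, not the exact HOZOG procedure:
# average two-point finite differences of the validation loss along random
# directions, then take a gradient step on the hyperparameters.
import numpy as np

def zeroth_order_hypergradient(validation_loss, lam, mu=1e-2, n_samples=8, rng=None):
    """Estimate the gradient of validation_loss at lam from function values only."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = lam.shape[0]
    grad = np.zeros(d)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        diff = validation_loss(lam + mu * u) - validation_loss(lam - mu * u)
        grad += d * (diff / (2.0 * mu)) * u
    return grad / n_samples

# Usage with a toy quadratic standing in for the expensive train-then-validate
# pipeline, which is treated as a black box.
f = lambda lam: float(np.sum((lam - 0.3) ** 2))
lam = np.array([1.0, -0.5])
for _ in range(100):
    lam = lam - 0.1 * zeroth_order_hypergradient(f, lam)
print(lam)  # approaches [0.3, 0.3]
```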
- Online hyperparameter optimization by real-time recurrent learning [57.01871583756586] (arXiv 2021-02-15)
Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs).
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance compared to standard methods, at a fraction of the wallclock time.
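The sketch below is not the RTRL-based framework of this paper; it only illustrates the simpler, related idea of adapting a hyperparameter online during training, here the learning rate via hypergradient descent.

```python
# Not the RTRL-based method from the paper above; this shows the simpler,
# related idea of tuning a hyperparameter online during training, here the
# learning rate via hypergradient descent.
import numpy as np

def train_with_online_lr(grad_fn, theta, alpha=0.01, beta=1e-4, steps=200):
    prev_grad = np.zeros_like(theta)
    for _ in range(steps):
        g = grad_fn(theta)
        # The hypergradient of the loss w.r.t. the learning rate is
        # -g . prev_grad, so descending on it increases alpha when consecutive
        # gradients point the same way and decreases it when they oscillate.
        alpha += beta * float(g @ prev_grad)
        theta = theta - alpha * g
        prev_grad = g
    return theta, alpha

# Toy quadratic loss 0.5 * ||theta - 1||^2 with gradient (theta - 1).
theta, alpha = train_with_online_lr(lambda th: th - 1.0, np.zeros(5))
print(theta.round(3), round(alpha, 4))  # theta near 1, alpha adapted upward
```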
- HyperMorph: Amortized Hyperparameter Learning for Image Registration [8.13669868327082] (arXiv 2021-01-04)
HyperMorph is a learning-based strategy for deformable image registration.
We show that it can be used to optimize multiple hyperparameters considerably faster than existing search strategies.
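A toy sketch of the amortized idea, assuming a hypernetwork that maps a regularization weight to the parameters of a small linear model; HyperMorph's actual setting is deformable image registration, so the names and loss here are purely illustrative.

```python
# Toy hypernetwork sketch, not the HyperMorph registration setup: a generator
# maps the regularization weight lambda to the parameters of a small linear
# model, so one training run amortizes over the whole range of lambda.
import torch
import torch.nn as nn

class WeightGenerator(nn.Module):
    """Maps a scalar hyperparameter to the (weight, bias) of a linear model."""
    def __init__(self, in_dim: int, out_dim: int, hidden: int = 64):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim * out_dim + out_dim),
        )

    def forward(self, lam: torch.Tensor):
        p = self.mlp(lam.view(1, 1)).squeeze(0)
        w = p[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = p[self.in_dim * self.out_dim:]
        return w, b

gen = WeightGenerator(in_dim=10, out_dim=1)
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
x, y = torch.randn(64, 10), torch.randn(64, 1)

for _ in range(100):
    lam = torch.rand(()) * 0.1  # sample a regularization weight each step
    w, b = gen(lam)
    pred = torch.nn.functional.linear(x, w, b)
    # lambda-weighted penalty on the generated weights: the optimal w depends
    # on lambda, which is exactly the mapping the generator has to learn.
    loss = ((pred - y) ** 2).mean() + lam * w.abs().sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time, sweeping lam through gen() yields the model "as trained" with
# that regularization strength, without any retraining.
```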
- How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers [86.36020260204302] (arXiv 2020-10-19)
We propose a new benchmarking protocol to evaluate both end-to-end efficiency and data-addition training efficiency.
A human study is conducted to show that our evaluation protocol matches human tuning behavior better than random search.
We then apply the proposed benchmarking framework to 7 optimizers and various tasks, including computer vision, natural language processing, reinforcement learning, and graph mining.
- HyperSTAR: Task-Aware Hyperparameters for Deep Networks [52.50861379908611] (arXiv 2020-05-21)
HyperSTAR is a task-aware method to warm-start HPO for deep neural networks.
It learns a dataset (task) representation along with the performance predictor directly from raw images.
It evaluates 50% fewer configurations to achieve the best performance compared to existing methods.
- Weighted Random Search for CNN Hyperparameter Optimization [0.0] (arXiv 2020-03-30)
We introduce the weighted Random Search (WRS) method, a combination of Random Search (RS) and a probabilistic greedy heuristic.
The criterion is the classification accuracy achieved within the same number of tested combinations of hyperparameter values.
According to our experiments, the WRS algorithm outperforms the other methods.
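A minimal sketch in the spirit of WRS, with assumed names and change probabilities rather than the authors' exact procedure:

```python
# Sketch of a weighted random search, not the authors' exact WRS procedure:
# keep the best value found so far for each hyperparameter and, with a
# per-parameter probability, re-sample it; higher probabilities go to the
# hyperparameters considered more important.
import random

def weighted_random_search(score_fn, space, change_prob, n_trials=50, seed=0):
    rng = random.Random(seed)
    best_cfg = {name: rng.choice(values) for name, values in space.items()}
    best_score = score_fn(best_cfg)
    for _ in range(n_trials):
        cfg = dict(best_cfg)
        for name, values in space.items():
            if rng.random() < change_prob[name]:  # re-sample this parameter
                cfg[name] = rng.choice(values)
        score = score_fn(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical usage: a toy score standing in for validation accuracy of a CNN.
space = {"lr": [1e-4, 1e-3, 1e-2], "batch": [32, 64, 128], "dropout": [0.0, 0.3, 0.5]}
change_prob = {"lr": 0.9, "batch": 0.4, "dropout": 0.6}  # importance-weighted
toy_score = lambda c: -abs(c["lr"] - 1e-3) - 0.001 * abs(c["batch"] - 64) - 0.1 * c["dropout"]
print(weighted_random_search(toy_score, space, change_prob))
```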