Perplexity-free Parametric t-SNE
- URL: http://arxiv.org/abs/2010.01359v1
- Date: Sat, 3 Oct 2020 13:47:01 GMT
- Title: Perplexity-free Parametric t-SNE
- Authors: Francesco Crecchi, Cyril de Bodt, Michel Verleysen, John A. Lee and Davide Bacciu
- Abstract summary: The t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm is a ubiquitously employed dimensionality reduction (DR) method.
It is, however, tied to a user-defined perplexity parameter, restricting its DR quality compared to recently developed multi-scale, perplexity-free approaches.
This paper hence proposes a multi-scale parametric t-SNE scheme, relieved from perplexity tuning and with a deep neural network implementing the mapping.
- Score: 11.970023029249083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm is a
ubiquitously employed dimensionality reduction (DR) method. Its non-parametric
nature and impressive efficacy motivated its parametric extension. It is,
however, tied to a user-defined perplexity parameter, restricting its DR
quality compared to recently developed multi-scale, perplexity-free approaches.
This paper hence proposes a multi-scale parametric t-SNE scheme, relieved from
perplexity tuning and with a deep neural network implementing the mapping.
It produces reliable embeddings with out-of-sample extensions, competitive with
the best perplexity adjustments in terms of neighborhood preservation on
multiple data sets.
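Schematically, the approach combines multi-scale affinities in the high-dimensional space with a neural network trained to minimize the t-SNE KL divergence. Below is a minimal sketch of that pipeline, assuming PyTorch; the layer widths, optimizer, full-batch training loop, and the exact multi-scale weighting are illustrative simplifications, not the authors' implementation:

```python
import numpy as np
import torch
import torch.nn as nn

def _affinities_for_perplexity(D2, perplexity, tol=1e-4, n_iter=50):
    # Row-stochastic Gaussian affinities whose entropy matches log(perplexity),
    # found by a per-point binary search on the precision beta = 1/(2 sigma^2),
    # as in standard t-SNE.
    N = D2.shape[0]
    P = np.zeros((N, N))
    target = np.log(perplexity)
    for i in range(N):
        d = np.delete(D2[i], i)
        lo, hi, beta = 0.0, np.inf, 1.0
        for _ in range(n_iter):
            p = np.exp(-d * beta)
            s = p.sum() + 1e-12
            H = np.log(s) + beta * (d * p).sum() / s  # Shannon entropy of p/s
            if abs(H - target) < tol:
                break
            if H > target:   # too flat: increase precision
                lo = beta
                beta = beta * 2.0 if hi == np.inf else (beta + hi) / 2.0
            else:            # too peaked: decrease precision
                hi = beta
                beta = (lo + beta) / 2.0
        P[i, np.arange(N) != i] = p / s
    return P

def multiscale_affinities(X):
    # Average affinities over perplexities 2, 4, ..., ~N/2: this averaging over
    # scales is what removes the single user-chosen perplexity.
    N = X.shape[0]
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # fine for small N
    n_scales = max(1, int(np.floor(np.log2(N / 2.0))))
    P = sum(_affinities_for_perplexity(D2, 2.0 ** h) for h in range(1, n_scales + 1))
    P /= n_scales
    P = (P + P.T) / (2.0 * N)  # symmetrize; entries now sum to 1
    return torch.tensor(P, dtype=torch.float32)

class TSNEMapper(nn.Module):
    # Deep feed-forward network implementing the parametric map R^D -> R^2.
    def __init__(self, d_in, d_out=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, 500), nn.ReLU(),
            nn.Linear(500, 500), nn.ReLU(),
            nn.Linear(500, d_out))
    def forward(self, x):
        return self.net(x)

def fit_parametric_tsne(X, n_epochs=500, lr=1e-3):
    P = multiscale_affinities(X)
    Xt = torch.as_tensor(X, dtype=torch.float32)
    model = TSNEMapper(Xt.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    off_diag = 1.0 - torch.eye(len(Xt))
    for _ in range(n_epochs):
        Y = model(Xt)
        Q = 1.0 / (1.0 + torch.cdist(Y, Y) ** 2) * off_diag  # Student-t kernel
        Q = Q / Q.sum()
        loss = (P * torch.log((P + 1e-12) / (Q + 1e-12))).sum()  # KL(P || Q)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

The out-of-sample extension mentioned in the abstract falls out of the parametric form: once trained, `model(new_points)` embeds unseen data without re-running the optimization.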
Related papers
- A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variants of the stochastic proximal point method (SPPM).
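For reference, the stochastic proximal point update analyzed in this line of work replaces SGD's explicit gradient step with an implicit one (a standard formulation, not specific to this paper's analysis; here \gamma is the step size and f_{\xi_k} a sampled component):

```latex
x_{k+1} \;=\; \operatorname{prox}_{\gamma f_{\xi_k}}(x_k)
        \;=\; \arg\min_{x} \Big\{ f_{\xi_k}(x) + \tfrac{1}{2\gamma}\,\|x - x_k\|^2 \Big\}
```

Solving this small subproblem at each step, rather than taking the explicit step x_{k+1} = x_k - \gamma \nabla f_{\xi_k}(x_k), is what gives the method its robustness to step-size misspecification.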
arXiv Detail & Related papers (2024-05-24T21:09:19Z)
- A Metaheuristic for Amortized Search in High-Dimensional Parameter Spaces [0.0]
We propose a new metaheuristic that drives dimensionality reductions from feature-informed transformations.
DR-FFIT implements an efficient sampling strategy that facilitates a gradient-free parameter search in high-dimensional spaces.
Our test data show that DR-FFIT boosts the performances of random-search and simulated-annealing against well-established metaheuristics.
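The specifics of DR-FFIT's feature-informed transformations are in the paper; purely to illustrate the general pattern (gradient-free search restricted to a few directions instead of all D dimensions), here is a generic sketch in which a random projection stands in for the learned one. The function name and defaults are hypothetical:

```python
import numpy as np

def subspace_random_search(loss, x0, n_dirs=5, n_samples=200, radius=0.1, seed=0):
    # Gradient-free search in a low-dimensional subspace of parameter space.
    # DR-FFIT derives its directions from feature-informed transformations;
    # here B is simply random, for illustration.
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((x0.size, n_dirs))
    B /= np.linalg.norm(B, axis=0)              # unit-norm direction columns
    best_x, best_f = x0.copy(), loss(x0)
    for _ in range(n_samples):
        z = rng.uniform(-radius, radius, n_dirs)  # sample in the reduced space
        x = best_x + B @ z                        # lift back to full dimension
        f = loss(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f
```

Each candidate costs one loss evaluation, so the search amortizes well when the loss is expensive and the useful variation lies in a few directions.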
arXiv Detail & Related papers (2023-09-28T14:25:14Z)
- Online Continuous Hyperparameter Optimization for Generalized Linear Contextual Bandits [55.03293214439741]
In contextual bandits, an agent sequentially makes actions from a time-dependent action set based on past experience.
We propose the first online continuous hyperparameter tuning framework for contextual bandits.
We show that it achieves sublinear regret in theory and performs consistently better than all existing methods on both synthetic and real datasets.
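As context for what is being tuned: below is a standard LinUCB learner (Li et al., 2010), not the paper's framework, with the exploration hyperparameter alpha that such online-tuning approaches adjust on the fly:

```python
import numpy as np

class LinUCB:
    # Classic linear contextual bandit with UCB exploration; alpha trades off
    # exploration vs. exploitation and is the kind of knob tuned online.
    def __init__(self, d, alpha=1.0):
        self.alpha = alpha
        self.A = np.eye(d)        # regularized Gram matrix
        self.b = np.zeros(d)      # reward-weighted context sum

    def choose(self, contexts):   # contexts: (n_arms, d); the set may vary over time
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        ucb = contexts @ theta + self.alpha * np.sqrt(
            np.einsum('ad,dk,ak->a', contexts, A_inv, contexts))
        return int(np.argmax(ucb))

    def update(self, x, reward):  # x: chosen arm's context vector
        self.A += np.outer(x, x)
        self.b += reward * x
```

A poorly chosen alpha either over-explores or locks onto a suboptimal arm, which is precisely why tuning it online, rather than by offline grid search, matters.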
arXiv Detail & Related papers (2023-02-18T23:31:20Z)
- Making SGD Parameter-Free [28.088227276891885]
Our algorithm is conceptually simple, has high-probability guarantees, and is partially adaptive to unknown gradient norms, smoothness, and strong convexity.
At the heart of our results is a novel parameter-free certificate for SGD step size choice, and a time-uniform concentration result that assumes no a priori bounds on SGD iterates.
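The paper's certificate is its own construction; as a flavor of what "parameter-free" means in this literature, here is the classic Krichevsky-Trofimov coin-betting update (Orabona & Pal, 2016), a different method, sketched in one dimension:

```python
def kt_parameter_free_1d(grad, T, eps=1.0):
    # 1-D parameter-free optimization via KT coin betting: no step size anywhere.
    # Assumes |grad(w)| <= 1; eps is the initial "wealth", not a learning rate.
    wealth, cum, avg = eps, 0.0, 0.0
    for t in range(1, T + 1):
        w = (cum / t) * wealth    # KT bet: signed fraction of current wealth
        g = grad(w)               # (sub)gradient at the current bet
        wealth += -g * w          # settle the bet with outcome -g
        cum += -g
        avg += (w - avg) / t      # running average of iterates
    return avg                    # online-to-batch: return the averaged iterate
```

The betting fraction adapts to the observed gradients, which is what replaces the manually tuned learning rate.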
arXiv Detail & Related papers (2022-05-04T16:29:38Z)
- Federated Linear Contextual Bandits [17.438169791449834]
Fed-PE is proposed to cope with the heterogeneity across clients without exchanging local feature vectors or raw data.
Experiments demonstrate the effectiveness of the proposed algorithms on both synthetic and real-world datasets.
arXiv Detail & Related papers (2021-10-27T05:18:58Z)
- Differentially Private Coordinate Descent for Composite Empirical Risk Minimization [13.742100810492014]
Machine learning models can leak information about the data used to train them.
Differentially Private (DP) variants of optimization algorithms like Stochastic Gradient Descent (DP-SGD) have been designed to mitigate this.
We propose a new method for composite Differentially Private Empirical Risk Minimization (DP-ERM): Differentially Private Coordinate Descent (DP-CD).
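A minimal sketch of the private coordinate update at the heart of such a method, for a smooth loss only (the composite/proximal part and the calibration of the noise to a formal (epsilon, delta) budget are omitted; the clipping threshold and noise scale below are illustrative):

```python
import numpy as np

def dp_coordinate_descent(grad_coord, d, n_iters=1000, lr=0.1,
                          clip=1.0, noise_scale=0.5, seed=0):
    # Each step updates one coordinate using a clipped, noise-perturbed
    # partial derivative, so only a scalar per step needs privatizing.
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    for _ in range(n_iters):
        j = rng.integers(d)                  # pick a coordinate at random
        g = grad_coord(w, j)                 # partial derivative dL/dw_j
        g = np.clip(g, -clip, clip)          # bound the sensitivity
        g += rng.laplace(scale=noise_scale)  # privatize with Laplace noise
        w[j] -= lr * g
    return w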
arXiv Detail & Related papers (2021-10-22T10:22:48Z)
- Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting and both compression and improved stability training in the generative adversarial training setting.
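The tensor-train construction itself is involved; the simplest instance of a low-rank parameterization with an explicitly controlled spectrum is a rank-r factorization W = U diag(s) V^T, sketched below in PyTorch. This illustrates the general idea only, not the paper's TT layers, and s equals the true singular values only when U and V have orthonormal columns:

```python
import torch
import torch.nn as nn

class LowRankSpectralLinear(nn.Module):
    # Rank-r linear layer W = U diag(s) V^T with the spectral factor s held
    # as an explicit parameter, so it can be clamped or regularized directly.
    def __init__(self, d_in, d_out, rank, s_max=1.0):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) / rank ** 0.5)
        self.V = nn.Parameter(torch.randn(d_in, rank) / rank ** 0.5)
        self.s = nn.Parameter(torch.ones(rank))
        self.s_max = s_max

    def forward(self, x):
        s = self.s.clamp(-self.s_max, self.s_max)  # embedded spectral constraint
        return (x @ self.V) * s @ self.U.T         # y = x V diag(s) U^T
```

Storing d_in*r + d_out*r + r parameters instead of d_in*d_out gives the compression effect, and clamping s is the crude analogue of the stability control the paper studies.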
arXiv Detail & Related papers (2021-03-07T00:15:44Z)
- Adaptive Subcarrier, Parameter, and Power Allocation for Partitioned Edge Learning Over Broadband Channels [69.18343801164741]
Partitioned edge learning (PARTEL) implements parameter-server training, a well-known distributed learning method, in a wireless network.
We consider the case of deep neural network (DNN) models which can be trained using PARTEL by introducing some auxiliary variables.
arXiv Detail & Related papers (2020-10-08T15:27:50Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
Explicitly enumerating such interactions quickly becomes intractable; to alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
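Schematically, storing the weight tensor in CP (canonical polyadic) form turns the multilinear model over feature maps \phi_m into a sum of R rank-one terms that can be evaluated without ever materializing the full tensor (a generic formulation of this line of work, not the paper's exact notation):

```latex
\hat{y}(x) \;=\; \big\langle \mathcal{W},\; \phi_1(x) \otimes \cdots \otimes \phi_d(x) \big\rangle,
\qquad
\mathcal{W} \;=\; \sum_{r=1}^{R} w_r^{(1)} \otimes \cdots \otimes w_r^{(d)}
\;\;\Longrightarrow\;\;
\hat{y}(x) \;=\; \sum_{r=1}^{R} \prod_{m=1}^{d} \big\langle w_r^{(m)}, \phi_m(x) \big\rangle
```

The cost thus scales with R and the dimension of each \phi_m rather than with their exponentially large product.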
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
- Support recovery and sup-norm convergence rates for sparse pivotal estimation [79.13844065776928]
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multitask square-root Lasso-type estimators.
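The canonical example of such a pivotal estimator is the square-root Lasso, whose theoretically optimal regularization parameter \lambda does not depend on the unknown noise level \sigma:

```latex
\hat{\beta} \;\in\; \arg\min_{\beta \in \mathbb{R}^p}
\; \frac{\|y - X\beta\|_2}{\sqrt{n}} \;+\; \lambda \|\beta\|_1
```

Replacing the ordinary Lasso's squared residual with its square root is what decouples the optimal \lambda from \sigma.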
arXiv Detail & Related papers (2020-01-15T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.