ParaDime: A Framework for Parametric Dimensionality Reduction
- URL: http://arxiv.org/abs/2210.04582v3
- Date: Tue, 30 May 2023 14:33:32 GMT
- Title: ParaDime: A Framework for Parametric Dimensionality Reduction
- Authors: Andreas Hinterreiter and Christina Humer and Bernhard Kainz and Marc Streit
- Abstract summary: ParaDime is a framework for parametric dimensionality reduction (DR).
It unifies parametric versions of DR techniques such as metric MDS, t-SNE, and UMAP.
It allows users to fully customize all aspects of the DR process.
- Score: 4.928716468981609
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: ParaDime is a framework for parametric dimensionality reduction (DR). In
parametric DR, neural networks are trained to embed high-dimensional data items
in a low-dimensional space while minimizing an objective function. ParaDime
builds on the idea that the objective functions of several modern DR techniques
result from transformed inter-item relationships. It provides a common
interface for specifying these relations and transformations and for defining
how they are used within the losses that govern the training process. Through
this interface, ParaDime unifies parametric versions of DR techniques such as
metric MDS, t-SNE, and UMAP. It allows users to fully customize all aspects of
the DR process. We show how this ease of customization makes ParaDime suitable
for experimenting with interesting techniques such as hybrid
classification/embedding models and supervised DR. This way, ParaDime opens up
new possibilities for visualizing high-dimensional data.
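To make this recipe concrete, the following is a minimal, self-contained PyTorch sketch of the parametric-DR idea the abstract describes: pairwise distances act as the inter-item relations, a small neural network is the trained embedding model, and a metric-MDS-style stress loss compares relations before and after embedding. This is an illustrative sketch under those assumptions, not ParaDime's actual API; all names here are hypothetical.

```python
# Minimal parametric-DR sketch (illustrative only, NOT ParaDime's API):
# relations -> parametric mapping -> loss over transformed relations.
import torch

torch.manual_seed(0)
X = torch.randn(256, 50)  # toy high-dimensional data (256 items, 50 dims)

# The parametric mapping: a small network that embeds items in 2D.
embedder = torch.nn.Sequential(
    torch.nn.Linear(50, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)
optimizer = torch.optim.Adam(embedder.parameters(), lr=1e-2)

# Inter-item relations in the input space: pairwise Euclidean distances.
hd_relations = torch.cdist(X, X)

for step in range(200):
    Y = embedder(X)                   # low-dimensional embedding
    ld_relations = torch.cdist(Y, Y)  # relations in embedding space
    # Metric-MDS-style stress: match low-dim relations to high-dim ones.
    loss = ((hd_relations - ld_relations) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final stress: {loss.item():.4f}")
```

Swapping the distance relations for neighbor-based affinities and the stress for a divergence-based loss yields t-SNE- or UMAP-style variants, which is the kind of interchangeability the abstract's common interface is meant to expose.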
Related papers
- Depth Completion as Parameter-Efficient Test-Time Adaptation [66.72360181325877]
CAPA is a parameter-efficient test-time optimization framework that adapts pre-trained 3D foundation models (FMs) for depth completion.
For videos, CAPA introduces sequence-level parameter sharing, jointly adapting all frames to exploit temporal correlations, improve robustness, and enforce multi-frame consistency.
arXiv Detail & Related papers (2026-02-16T13:53:23Z)
- MEPT: Mixture of Expert Prompt Tuning as a Manifold Mapper [75.6582687942241]
We propose Mixture of Expert Prompt Tuning (MEPT) as an effective and efficient manifold-mapping framework.
MEPT integrates multiple prompt experts to adaptively learn diverse and non-stationary data distributions.
Empirical evaluations demonstrate that MEPT outperforms several state-of-the-art parameter-efficient baselines on SuperGLUE.
arXiv Detail & Related papers (2025-08-31T21:19:25Z)
- Generalized Tensor-based Parameter-Efficient Fine-Tuning via Lie Group Transformations [50.010924231754856]
Adapting pre-trained foundation models for diverse downstream tasks is a core practice in artificial intelligence.
Because fully fine-tuning such models is costly, parameter-efficient fine-tuning (PEFT) methods like LoRA have emerged and become a growing research focus.
We propose a generalization that extends matrix-based PEFT methods to higher-dimensional parameter spaces without compromising their structural properties.
arXiv Detail & Related papers (2025-04-01T14:36:45Z)
- ALoRE: Efficient Visual Adaptation via Aggregating Low Rank Experts [71.91042186338163]
ALoRE is a novel PETL method that reuses the hypercomplex parameterized space constructed by the Kronecker product to Aggregate Low Rank Experts.
Thanks to its artful design, ALoRE adds only negligible extra parameters and can be effortlessly merged into the frozen backbone.
arXiv Detail & Related papers (2024-12-11T12:31:30Z)
- DMT-HI: MOE-based Hyperbolic Interpretable Deep Manifold Transformation for Unsupervised Dimensionality Reduction [47.4136073281818]
Dimensionality reduction (DR) plays a crucial role in various fields, including data engineering and visualization.
The challenge of balancing DR accuracy and interpretability remains crucial, particularly for users dealing with high-dimensional data.
This work introduces the MOE-based Hyperbolic Interpretable Deep Manifold Transformation (DMT-HI).
arXiv Detail & Related papers (2024-10-25T12:11:32Z)
- Hyperboloid GPLVM for Discovering Continuous Hierarchies via Nonparametric Estimation [41.13597666007784]
Dimensionality reduction (DR) offers a useful representation of complex high-dimensional data.
Recent DR methods focus on hyperbolic geometry to derive a faithful low-dimensional representation of hierarchical data.
This paper presents hGP-LVMs to embed high-dimensional hierarchical data with implicit continuity via nonparametric estimation.
arXiv Detail & Related papers (2024-10-22T05:07:30Z)
- Order-Preserving Dimension Reduction for Multimodal Semantic Embedding [0.8695396732128153]
Order-Preserving Dimension Reduction (OPDR) aims to reduce the dimensionality of embeddings while preserving the ranking of KNN in the lower-dimensional space.
We have integrated OPDR with multiple state-of-the-art dimension-reduction techniques, distance functions, and embedding models.
Experiments on a variety of multimodal datasets demonstrate that OPDR effectively retains high recall accuracy while significantly reducing computational costs.
arXiv Detail & Related papers (2024-08-15T22:30:44Z)
- Nonparametric Control Koopman Operators [3.9393118740111084]
This paper presents a novel Koopman (composition) operator representation framework for control systems in reproducing kernel Hilbert spaces (RKHSs) that is free of explicit dictionary or input parametrizations.
By establishing fundamental equivalences between different model representations, we close the gap between operator learning for control systems and infinite-dimensional regression.
arXiv Detail & Related papers (2024-05-12T15:46:52Z)
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit dimensionality reduction and clustering under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- Parameter Efficient Fine-tuning via Cross Block Orchestration for Segment Anything Model [81.55141188169621]
We equip PEFT with a cross-block orchestration mechanism to enable the adaptation of the Segment Anything Model (SAM) to various downstream scenarios.
We propose an intra-block enhancement module, which introduces a linear projection head whose weights are generated from a hyper-complex layer.
Our proposed approach consistently and significantly improves segmentation performance on novel scenarios with only around 1K additional parameters.
arXiv Detail & Related papers (2023-11-28T11:23:34Z)
- Dimensionality Reduction as Probabilistic Inference [10.714603218784175]
Dimensionality reduction (DR) algorithms compress high-dimensional data into a lower dimensional representation while preserving important features of the data.
We introduce the ProbDR variational framework, which interprets a wide range of classical DR algorithms as probabilistic inference algorithms in this framework.
arXiv Detail & Related papers (2023-04-15T23:48:59Z)
- Deep Metric Learning for Unsupervised Remote Sensing Change Detection [60.89777029184023]
Remote Sensing Change Detection (RS-CD) aims to detect relevant changes from Multi-Temporal Remote Sensing Images (MT-RSIs).
The performance of existing RS-CD methods depends on training on large annotated datasets.
This paper proposes an unsupervised CD method based on deep metric learning that avoids this reliance on annotated data.
arXiv Detail & Related papers (2023-03-16T17:52:45Z)
- Weakly But Deeply Supervised Occlusion-Reasoned Parametric Layouts [87.370534321618]
We propose an end-to-end network that takes a single perspective RGB image of a complex road scene as input and produces occlusion-reasoned layouts in perspective space.
The only human annotations required by our method are for parametric attributes that are cheaper and less ambiguous to obtain.
We validate our approach on two public datasets, KITTI and NuScenes, achieving state-of-the-art results with considerably less human supervision.
arXiv Detail & Related papers (2021-04-14T09:32:29Z)
- Invertible Manifold Learning for Dimension Reduction [44.16432765844299]
Dimension reduction (DR) aims to learn low-dimensional representations of high-dimensional data with the preservation of essential information.
We propose a novel two-stage DR method, called invertible manifold learning (inv-ML), to bridge the gap between theoretically information-lossless DR and practical DR.
Experiments are conducted on seven datasets with a neural network implementation of inv-ML, called i-ML-Enc.
arXiv Detail & Related papers (2020-10-07T14:22:51Z)
- Perplexity-free Parametric t-SNE [11.970023029249083]
The t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm is a ubiquitously employed dimensionality reduction (DR) method.
It is, however, tied to a user-defined perplexity parameter, restricting its DR quality compared to recently developed multi-scale, perplexity-free approaches.
This paper hence proposes a multi-scale parametric t-SNE scheme, relieved of perplexity tuning, with a deep neural network implementing the mapping (a minimal sketch of this parametric-mapping idea follows the list below).
arXiv Detail & Related papers (2020-10-03T13:47:01Z)
- Augmented Parallel-Pyramid Net for Attention Guided Pose-Estimation [90.28365183660438]
This paper proposes an augmented parallel-pyramid net with attention partial module and differentiable auto-data augmentation.
We define a new pose search space where the sequences of data augmentations are formulated as a trainable and operational CNN component.
Notably, our method achieves the top-1 accuracy on the challenging COCO keypoint benchmark and the state-of-the-art results on the MPII datasets.
arXiv Detail & Related papers (2020-03-17T03:52:17Z)
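As a companion to the "Perplexity-free Parametric t-SNE" entry above, here is a minimal sketch of a parametric t-SNE-style mapping: Gaussian input affinities averaged over several bandwidths (an assumption standing in for the paper's multi-scale, perplexity-free scheme), a Student-t kernel on the embedding, and a KL-divergence loss, with a small neural network implementing the mapping. Architecture, scales, and hyperparameters are illustrative choices, not the paper's exact configuration.

```python
# Parametric t-SNE-style sketch (illustrative assumptions throughout).
import torch

torch.manual_seed(0)
X = torch.randn(200, 30)  # toy data (200 items, 30 dims)

# The parametric mapping replaces t-SNE's free per-point coordinates.
net = torch.nn.Sequential(
    torch.nn.Linear(30, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Multi-scale input affinities: average Gaussian kernels over several
# bandwidths instead of calibrating a single perplexity.
d2 = torch.cdist(X, X) ** 2
P = torch.zeros_like(d2)
for sigma in (1.0, 2.0, 4.0):
    P = P + torch.exp(-d2 / (2 * sigma**2))
P.fill_diagonal_(0)
P = P / P.sum()  # joint probabilities over all pairs

for step in range(300):
    Y = net(X)
    q = 1.0 / (1.0 + torch.cdist(Y, Y) ** 2)  # Student-t kernel
    q = q - torch.diag(torch.diag(q))         # zero out self-affinities
    Q = q / q.sum()
    # KL(P || Q), the classic t-SNE objective.
    loss = (P * torch.log((P + 1e-12) / (Q + 1e-12))).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final KL divergence: {loss.item():.4f}")
```

Because the mapping is a trained network rather than per-point coordinates, new data can be embedded with a single forward pass, which is the key practical benefit of parametric DR highlighted in the main abstract.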
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.