Atlas Based Representation and Metric Learning on Manifolds
- URL: http://arxiv.org/abs/2106.07062v1
- Date: Sun, 13 Jun 2021 18:05:46 GMT
- Title: Atlas Based Representation and Metric Learning on Manifolds
- Authors: Eric O. Korman
- Abstract summary: We explore the use of a topological manifold, represented as a collection of charts, as the target space of neural network based representation learning tasks.
This is achieved by a simple adjustment to the output of an encoder's network architecture plus the addition of a maximum mean discrepancy (MMD)-based loss function for regularization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the use of a topological manifold, represented as a collection of
charts, as the target space of neural network based representation learning
tasks. This is achieved by a simple adjustment to the output of an encoder's
network architecture plus the addition of a maximum mean discrepancy (MMD)
based loss function for regularization. Most algorithms in representation and
metric learning are easily adaptable to our framework and we demonstrate its
effectiveness by adjusting SimCLR (for representation learning) and standard
triplet loss training (for metric learning) to have manifold encoding spaces.
Our experiments show that we obtain a substantial performance boost over the
baseline for low dimensional encodings. In the case of triplet training, we
also find, independent of the manifold setup, that the MMD loss alone (i.e.
keeping a flat, Euclidean target space but using an MMD loss to regularize it)
increases performance over the baseline in the typical, high-dimensional
Euclidean target spaces. Code for reproducing experiments is provided at
https://github.com/ekorman/neurve .
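The MMD regularizer described in the abstract can be illustrated with a minimal NumPy sketch: a biased estimate of the squared maximum mean discrepancy between a batch of encoder outputs and samples from a reference distribution. This is an assumption-laden illustration (the RBF kernel, bandwidth `gamma`, and uniform prior are illustrative choices), not the paper's implementation; see the linked repository for that.

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # Pairwise RBF kernel matrix between the rows of x and y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # Biased estimate of the squared maximum mean discrepancy,
    # ||mean phi(x) - mean phi(y)||^2 in the kernel's feature space.
    kxx = rbf_kernel(x, x, gamma).mean()
    kyy = rbf_kernel(y, y, gamma).mean()
    kxy = rbf_kernel(x, y, gamma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
codes = rng.normal(size=(128, 2))           # stand-in for encoder outputs
prior = rng.uniform(-1, 1, size=(128, 2))   # reference samples from a flat prior
loss = mmd2(codes, prior)                   # add to the task loss as a regularizer
```

In training, such a term would be weighted and added to the representation-learning objective (e.g. SimCLR or triplet loss) to push the encodings toward the reference distribution on each chart.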
Related papers
- Gradient Boosting Mapping for Dimensionality Reduction and Feature Extraction [2.778647101651566]
A fundamental problem in supervised learning is to find a good set of features or distance measures.
We propose a supervised dimensionality reduction method, where the outputs of weak learners define the embedding.
We show that the embedding coordinates provide better features for the supervised learning task.
arXiv Detail & Related papers (2024-05-14T10:23:57Z)
- Class Anchor Margin Loss for Content-Based Image Retrieval [97.81742911657497]
We propose a novel repeller-attractor loss that falls in the metric learning paradigm, yet directly optimizes the L2 metric without the need to generate pairs.
We evaluate the proposed objective in the context of few-shot and full-set training on the CBIR task, by using both convolutional and transformer architectures.
arXiv Detail & Related papers (2023-06-01T12:53:10Z)
- Improving Deep Representation Learning via Auxiliary Learnable Target Coding [69.79343510578877]
This paper introduces a novel learnable target coding as an auxiliary regularization of deep representation learning.
Specifically, a margin-based triplet loss and a correlation consistency loss on the proposed target codes are designed to encourage more discriminative representations.
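The margin-based triplet loss mentioned here (and used in the main paper's metric-learning experiments) can be sketched as a hinge on the gap between anchor-positive and anchor-negative distances; this is a generic NumPy illustration with an arbitrary margin of 0.2, not either paper's exact formulation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Penalize triplets where the positive is not closer to the
    # anchor than the negative by at least the margin.
    d_ap = np.linalg.norm(anchor - positive, axis=-1)
    d_an = np.linalg.norm(anchor - negative, axis=-1)
    return float(np.maximum(d_ap - d_an + margin, 0.0).mean())

a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])   # close to the anchor
n = np.array([[1.0, 0.0]])   # far from the anchor
loss = triplet_loss(a, p, n)  # satisfied triplet: loss is 0.0
```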
arXiv Detail & Related papers (2023-05-30T01:38:54Z)
- State Representation Learning Using an Unbalanced Atlas [8.938418994111716]
This paper introduces a novel learning paradigm using an unbalanced atlas (UA), capable of surpassing state-of-the-art self-supervised learning approaches.
The efficacy of DIM-UA is demonstrated through training and evaluation on the Atari Annotated RAM Interface benchmark.
arXiv Detail & Related papers (2023-05-17T14:58:58Z)
- Federated Representation Learning via Maximal Coding Rate Reduction [109.26332878050374]
We propose a methodology to learn low-dimensional representations from a dataset that is distributed among several clients.
Our proposed method, which we refer to as FLOW, uses MCR2 as its objective, resulting in representations that are both between-class discriminative and within-class compressible.
arXiv Detail & Related papers (2022-10-01T15:43:51Z)
- Contrastive Neighborhood Alignment [81.65103777329874]
We present Contrastive Neighborhood Alignment (CNA), a manifold learning approach to maintain the topology of learned features.
The target model aims to mimic the local structure of the source representation space using a contrastive loss.
CNA is illustrated in three scenarios: manifold learning, where the model maintains the local topology of the original data in a dimension-reduced space; model distillation, where a small student model is trained to mimic a larger teacher; and legacy model update, where an older model is replaced by a more powerful one.
arXiv Detail & Related papers (2022-01-06T04:58:31Z)
- Dissecting Supervised Contrastive Learning [24.984074794337157]
Minimizing cross-entropy over the softmax scores of a linear map composed with a high-capacity encoder is arguably the most popular choice for training neural networks on supervised learning tasks.
We show that one can directly optimize the encoder instead, to obtain equally (or even more) discriminative representations via a supervised variant of a contrastive objective.
arXiv Detail & Related papers (2021-02-17T15:22:38Z)
- LiteDepthwiseNet: An Extreme Lightweight Network for Hyperspectral Image Classification [9.571458051525768]
This paper proposes a new network architecture, LiteDepthwiseNet, for hyperspectral image (HSI) classification.
LiteDepthwiseNet decomposes standard convolution into depthwise convolution and pointwise convolution, which can achieve high classification performance with minimal parameters.
Experiment results on three benchmark hyperspectral datasets show that LiteDepthwiseNet achieves state-of-the-art performance with a very small number of parameters and low computational cost.
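The parameter savings from decomposing a standard convolution into depthwise and pointwise steps can be seen with a back-of-the-envelope count (bias terms omitted); the layer sizes below are illustrative, not LiteDepthwiseNet's actual architecture.

```python
def conv_params(k, c_in, c_out):
    # Weights of a standard k x k convolution: one k x k x c_in
    # filter per output channel.
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    # Depthwise k x k conv (one k x k filter per input channel)
    # followed by a 1 x 1 pointwise conv mixing channels.
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
standard = conv_params(k, c_in, c_out)        # 3*3*64*128 = 73728
separable = separable_params(k, c_in, c_out)  # 3*3*64 + 64*128 = 8768
```

For this layer the separable factorization uses roughly 8.4x fewer weights, which is the source of the "minimal parameters" claim for such architectures.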
arXiv Detail & Related papers (2020-10-15T13:12:17Z)
- MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT Prostate Segmentation via Online Sampling [66.01558025094333]
We propose a two-stage framework, with the first stage to quickly localize the prostate region and the second stage to precisely segment the prostate.
We introduce a novel online metric learning module through voxel-wise sampling in the multi-task network.
Our method can effectively learn more representative voxel-level features than conventional methods trained with cross-entropy or Dice loss.
arXiv Detail & Related papers (2020-05-15T10:37:02Z)
- Scan-based Semantic Segmentation of LiDAR Point Clouds: An Experimental Study [2.6205925938720833]
State-of-the-art methods use deep neural networks to predict semantic classes for each point in a LiDAR scan.
A powerful and efficient way to process LiDAR measurements is to use two-dimensional, image-like projections.
We demonstrate various techniques to boost the performance and to improve runtime as well as memory constraints.
arXiv Detail & Related papers (2020-04-06T11:08:12Z)
- Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational costs to infer users' preferences in continuous embedding space are tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.