Hodge-Aware Contrastive Learning
- URL: http://arxiv.org/abs/2309.07364v1
- Date: Thu, 14 Sep 2023 00:40:07 GMT
- Title: Hodge-Aware Contrastive Learning
- Authors: Alexander Möllers, Alexander Immer, Vincent Fortuin, Elvin Isufi
- Abstract summary: Simplicial complexes prove effective in modeling data with multiway dependencies.
We develop a contrastive self-supervised learning approach for processing simplicial data.
- Score: 101.56637264703058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Simplicial complexes prove effective in modeling data with multiway
dependencies, such as data defined along the edges of networks or within other
higher-order structures. Their spectrum can be decomposed into three
interpretable subspaces via the Hodge decomposition, a result that is
foundational in numerous applications. We leverage this decomposition to
develop a contrastive self-supervised learning approach for processing
simplicial data and generating embeddings that encapsulate specific spectral
information. Specifically, we
encode the pertinent data invariances through simplicial neural networks and
devise augmentations that yield positive contrastive examples with suitable
spectral properties for downstream tasks. Additionally, we reweight the
significance of negative examples in the contrastive loss, considering the
similarity of their Hodge components to the anchor. By encouraging a stronger
separation among less similar instances, we obtain an embedding space that
reflects the spectral properties of the data. Numerical results on two
standard edge flow classification tasks show superior performance, even when
compared to supervised learning techniques. Our findings underscore the
importance of adopting a spectral perspective for contrastive learning with
higher-order data.
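To make the spectral machinery concrete, below is a minimal sketch of the Hodge decomposition of an edge flow on a toy simplicial complex. The complex, its incidence matrices B1 and B2, and the example flow are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy complex: nodes 0..3, edges (0,1), (0,2), (0,3), (1,2), (2,3),
# with only the triangle (0,1,2) filled in. Edges are oriented from
# lower to higher node index.
B1 = np.array([              # node-to-edge incidence (4 nodes x 5 edges)
    [-1, -1, -1,  0,  0],
    [ 1,  0,  0, -1,  0],
    [ 0,  1,  0,  1, -1],
    [ 0,  0,  1,  0,  1],
])
B2 = np.array([[1], [-1], [0], [1], [0]])  # edge-to-triangle incidence

# Hodge Laplacian on edges; its spectrum splits edge flows into three
# interpretable subspaces: gradient (im B1^T), curl (im B2), and
# harmonic (ker L1) components.
L1 = B1.T @ B1 + B2 @ B2.T

def proj(A):
    """Orthogonal projector onto the column space of A."""
    return A @ np.linalg.pinv(A)

P_grad, P_curl = proj(B1.T), proj(B2)

def hodge_components(flow):
    grad = P_grad @ flow
    curl = P_curl @ flow
    harm = flow - grad - curl
    return grad, curl, harm

flow = np.array([1.0, -0.5, 2.0, 0.3, -1.0])  # an arbitrary edge flow
g, c, h = hodge_components(flow)
assert np.allclose(g + c + h, flow)            # the parts sum back to the flow
assert np.allclose(L1 @ h, 0, atol=1e-10)      # harmonic part lies in ker(L1)
```

The unfilled cycle (0,2,3) gives this toy complex a one-dimensional harmonic space, so the harmonic component is generally nonzero.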
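And here is a hedged reading of the negative-reweighting idea: scale each negative's contribution to an InfoNCE-style loss by how dissimilar its Hodge components are to the anchor's. The cosine-based weight below is an illustrative choice, not the paper's exact scheme; hodge_components refers to the sketch above.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity with a small epsilon to avoid division by zero."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def hodge_weighted_infonce(z_a, z_pos, z_negs, h_a, h_negs, tau=0.5):
    """InfoNCE-style loss in which negative i is weighted by the
    dissimilarity between its Hodge component h_negs[i] and the anchor's
    component h_a (e.g. the harmonic parts computed above). Weights lie
    in [0, 2], so less similar negatives are pushed away more strongly."""
    pos = np.exp(cosine(z_a, z_pos) / tau)
    total = pos
    for z_n, h_n in zip(z_negs, h_negs):
        w = 1.0 - cosine(h_a, h_n)            # Hodge dissimilarity weight
        total += w * np.exp(cosine(z_a, z_n) / tau)
    return -np.log(pos / total)

# Usage with random embeddings and flow components, for shape checking only.
rng = np.random.default_rng(0)
z_a, z_pos = rng.normal(size=8), rng.normal(size=8)
z_negs = [rng.normal(size=8) for _ in range(4)]
h_a, h_negs = rng.normal(size=5), [rng.normal(size=5) for _ in range(4)]
print(hodge_weighted_infonce(z_a, z_pos, z_negs, h_a, h_negs))
```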
Related papers
- Datacube segmentation via Deep Spectral Clustering [76.48544221010424]
Extended vision techniques often pose a challenge for interpretation.
The high dimensionality of data cube spectra makes their statistical interpretation a complex task.
In this paper, we explore the possibility of applying unsupervised clustering methods in the encoded space.
Statistical dimensionality reduction is performed by an ad hoc trained (Variational) AutoEncoder, while the clustering is performed by a (learnable) iterative K-Means algorithm.
arXiv Detail & Related papers (2024-01-31T09:31:28Z)
- DiffSpectralNet: Unveiling the Potential of Diffusion Models for Hyperspectral Image Classification [6.521187080027966]
We propose a new network called DiffSpectralNet, which combines diffusion and transformer techniques.
First, we use an unsupervised learning framework based on the diffusion model to extract both high-level and low-level spectral-spatial features.
The diffusion method extracts diverse and meaningful spectral-spatial features, leading to improved HSI classification.
arXiv Detail & Related papers (2023-10-29T15:26:37Z)
- Learning Curves for Noisy Heterogeneous Feature-Subsampled Ridge Ensembles [34.32021888691789]
We develop a theory of feature-bagging in noisy least-squares ridge ensembles.
We demonstrate that subsampling shifts the double-descent peak of a linear predictor.
We compare the performance of a feature-subsampling ensemble to a single linear predictor.
arXiv Detail & Related papers (2023-07-06T17:56:06Z)
- Linking data separation, visual separation, and classifier performance using pseudo-labeling by contrastive learning [125.99533416395765]
We argue that the performance of the final classifier depends on the data separation present in the latent space and visual separation present in the projection.
We demonstrate our results by the classification of five real-world challenging image datasets of human intestinal parasites with only 1% supervised samples.
arXiv Detail & Related papers (2023-02-06T10:01:38Z)
- Deep Semi-supervised Learning with Double-Contrast of Features and Semantics [2.2230089845369094]
This paper proposes an end-to-end deep semi-supervised learning framework with a double contrast of semantics and features.
We leverage information theory to explain the rationale behind the double contrast of semantics and features.
arXiv Detail & Related papers (2022-11-28T09:08:19Z)
- Dataset Distillation via Factorization [58.8114016318593]
We introduce a dataset factorization approach, termed HaBa, which is a plug-and-play strategy portable to any existing dataset distillation (DD) baseline.
HaBa explores decomposing a dataset into two components: data Hallucination networks and Bases.
Our method can yield significant improvement on downstream classification tasks compared with the previous state of the art, while reducing the total number of compressed parameters by up to 65%.
arXiv Detail & Related papers (2022-10-30T08:36:19Z)
- Correlation between Alignment-Uniformity and Performance of Dense Contrastive Representations [11.266613717084788]
We analyze the theoretical ideas of dense contrastive learning using a standard CNN and straightforward feature matching scheme.
We discover the core principle for constructing a positive pair of dense features and empirically prove its validity.
Also, we introduce a new scalar metric that summarizes the correlation between alignment-and-uniformity and downstream performance.
arXiv Detail & Related papers (2022-10-17T08:08:37Z)
- Spectral Analysis Network for Deep Representation Learning and Image Clustering [53.415803942270685]
This paper proposes a new network structure for unsupervised deep representation learning based on spectral analysis.
It can identify local similarities among images at the patch level and is thus more robust to occlusion.
It can learn more clustering-friendly representations and is capable of revealing the deep correlations among data samples.
arXiv Detail & Related papers (2020-09-11T05:07:15Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
- Improving Deep Hyperspectral Image Classification Performance with Spectral Unmixing [3.84448093764973]
We propose an abundance-based multi-HSI classification method.
First, we convert every HSI from the spectral domain to the abundance domain with a dataset-specific autoencoder.
Second, the abundance representations from multiple HSIs are collected to form an enlarged dataset.
arXiv Detail & Related papers (2020-04-01T17:14:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.