Feature Extraction Framework based on Contrastive Learning with Adaptive
Positive and Negative Samples
- URL: http://arxiv.org/abs/2201.03942v1
- Date: Tue, 11 Jan 2022 13:34:03 GMT
- Title: Feature Extraction Framework based on Contrastive Learning with Adaptive
Positive and Negative Samples
- Authors: Hongjie Zhang
- Abstract summary: The framework is suitable for unsupervised, supervised, and semi-supervised single-view feature extraction.
CL-FEFA adaptively constructs the positive and negative samples from the results of feature extraction.
CL-FEFA considers the mutual information between positive samples, that is, similar samples in potential structures, which provides theoretical support for its advantages in feature extraction.
- Score: 1.4467794332678539
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we propose a feature extraction framework based on contrastive
learning with adaptive positive and negative samples (CL-FEFA) that is suitable
for unsupervised, supervised, and semi-supervised single-view feature
extraction. CL-FEFA adaptively constructs the positive and negative samples
from the results of feature extraction, which makes them more appropriate and
accurate. Thereafter, the discriminative features are re-extracted according
to the InfoNCE loss based on the previous positive and negative samples, which will
make the intra-class samples more compact and the inter-class samples more
dispersed. At the same time, using the potential structure information of
subspace samples to dynamically construct positive and negative samples can
make our framework more robust to noisy data. Furthermore, CL-FEFA considers
the mutual information between positive samples, that is, similar samples in
potential structures, which provides theoretical support for its advantages in
feature extraction. The final numerical experiments prove that the proposed
framework has a strong advantage over the traditional feature extraction
methods and contrastive learning methods.
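As a minimal sketch (not the authors' implementation), the core loop can be read as: embed the data, adaptively pick each sample's positives from the current embedding (here simply its k nearest neighbours, an assumed stand-in for CL-FEFA's subspace-based construction), treat the remaining samples as negatives, and minimize an InfoNCE loss so that intra-class samples become compact and inter-class samples disperse:

```python
import torch
import torch.nn.functional as F

def adaptive_infonce(z, k=5, temperature=0.5):
    """InfoNCE over adaptively chosen positives/negatives: for each sample,
    its k nearest neighbours in the current embedding act as positives and
    all other samples act as negatives (a k-NN stand-in for the paper's
    subspace-based construction)."""
    z = F.normalize(z, dim=1)                        # (n, d) unit-length embeddings
    n = z.size(0)
    sim = z @ z.t() / temperature                    # (n, n) pairwise similarities
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude each sample from its own pairs

    pos_idx = sim.topk(k, dim=1).indices             # adaptive positives: k most similar samples
    log_prob = F.log_softmax(sim, dim=1)             # denominator ranges over all other samples
    return -log_prob.gather(1, pos_idx).mean()       # pull positives together, push the rest apart

# usage: z comes from any encoder; "re-extracting" features amounts to
# recomputing z and its neighbour sets, then minimizing this loss again
z = torch.randn(32, 16, requires_grad=True)
loss = adaptive_infonce(z)
loss.backward()
```

Because the positive and negative sets are recomputed from the extracted features at each round rather than fixed by data augmentation, the contrast adapts to the current potential structure of the data.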
Related papers
- Bayesian Estimate of Mean Proper Scores for Diversity-Enhanced Active
Learning [6.704927458661697]
Expected Loss Reduction (ELR) focuses on a Bayesian estimate of the reduction in classification error, and more general costs fit in the same framework.
We propose Bayesian Estimate of Mean Proper Scores (BEMPS) to estimate the increase in strictly proper scores.
We show that BEMPS yields robust acquisition functions and well-calibrated classifiers, and consistently outperforms the others tested.
arXiv Detail & Related papers (2023-12-15T11:02:17Z)
- Siamese Representation Learning for Unsupervised Relation Extraction [5.776369192706107]
Unsupervised relation extraction (URE) aims at discovering underlying relations between named entity pairs from open-domain plain text.
Existing URE models that use contrastive learning, attracting positive samples and repulsing negative samples to promote better separation, have achieved decent results.
We propose Siamese Representation Learning for Unsupervised Relation Extraction -- a novel framework that simply leverages positive pairs for representation learning.
arXiv Detail & Related papers (2023-10-01T02:57:43Z)
- Hodge-Aware Contrastive Learning [101.56637264703058]
Simplicial complexes prove effective in modeling data with multiway dependencies.
We develop a contrastive self-supervised learning approach for processing simplicial data.
arXiv Detail & Related papers (2023-09-14T00:40:07Z)
- Rethinking Collaborative Metric Learning: Toward an Efficient Alternative without Negative Sampling [156.7248383178991]
The Collaborative Metric Learning (CML) paradigm has aroused wide interest in the area of recommendation systems (RS).
We find that negative sampling would lead to a biased estimation of the generalization error.
Motivated by this, we propose an efficient alternative without negative sampling for CML, named Sampling-Free Collaborative Metric Learning (SFCML).
arXiv Detail & Related papers (2022-06-23T08:50:22Z)
- Self-Supervised Anomaly Detection by Self-Distillation and Negative Sampling [1.304892050913381]
We show that self-distillation of the in-distribution training set together with contrasting against negative examples strongly improves OOD detection.
We observe that by leveraging negative samples, which keep the statistics of low-level features while changing the high-level semantics, higher average detection performance is obtained.
arXiv Detail & Related papers (2022-01-17T12:33:14Z)
- Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
arXiv Detail & Related papers (2021-05-27T08:38:29Z)
- Contrastive Attraction and Contrastive Repulsion for Representation Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
arXiv Detail & Related papers (2021-05-08T17:25:08Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses successfully constrain the clustering results of mini-batch samples at both the sample and class levels.
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
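A hedged sketch of the doubly contrastive idea above, assuming the two views are given as soft class-assignment matrices `p` and `p_aug` of shape (batch, classes); rows give the sample-view pairs and columns the class-view pairs (the function and tensor names are illustrative, not from the paper):

```python
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.5):
    """Row i of `a` is positive with row i of `b`; all other rows are negatives."""
    a = F.normalize(a, dim=1)
    b = F.normalize(b, dim=1)
    logits = a @ b.t() / temperature                      # (m, m) similarities
    targets = torch.arange(a.size(0), device=a.device)    # matching pairs sit on the diagonal
    return F.cross_entropy(logits, targets)

def doubly_contrastive_loss(p, p_aug, temperature=0.5):
    """p, p_aug: soft class assignments of a batch and its augmented version.
    The sample view contrasts rows (per-sample class distributions);
    the class view contrasts columns (per-class sample distributions)."""
    sample_view = info_nce(p, p_aug, temperature)         # positive pair: same sample, two views
    class_view = info_nce(p.t(), p_aug.t(), temperature)  # positive pair: same class, two views
    return sample_view + class_view

# usage with random soft assignments standing in for a clustering head's output
p = torch.softmax(torch.randn(64, 10), dim=1)
p_aug = torch.softmax(torch.randn(64, 10), dim=1)
loss = doubly_contrastive_loss(p, p_aug)
```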
- Conditional Negative Sampling for Contrastive Learning of Visual Representations [19.136685699971864]
We show that choosing difficult negatives, or those more similar to the current instance, can yield stronger representations.
We introduce a family of mutual information estimators that sample negatives conditionally -- in a "ring" around each positive.
We prove that these estimators lower-bound mutual information, with higher bias but lower variance than NCE.
arXiv Detail & Related papers (2020-10-05T14:17:32Z)
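The "ring" idea above can be sketched as keeping only candidate negatives whose similarity to the anchor falls inside a percentile band; the bounds `lower`/`upper` and the helper name below are illustrative assumptions, not values from the paper:

```python
import torch
import torch.nn.functional as F

def ring_negatives(anchor, candidates, lower=0.5, upper=0.9, k=16):
    """Keep only candidates whose similarity to the anchor falls in a
    percentile band (the 'ring'), then sample up to k of them."""
    a = F.normalize(anchor, dim=0)                   # (d,)
    c = F.normalize(candidates, dim=1)               # (m, d)
    sim = c @ a                                      # similarity of every candidate to the anchor
    lo, hi = torch.quantile(sim, lower), torch.quantile(sim, upper)
    in_ring = (sim >= lo) & (sim <= hi)              # not too easy, not too close to a positive
    idx = in_ring.nonzero(as_tuple=True)[0]
    if idx.numel() == 0:                             # degenerate case: fall back to the full pool
        idx = torch.arange(c.size(0))
    pick = idx[torch.randperm(idx.numel())[:k]]      # random subset of in-ring candidates
    return candidates[pick]

# usage: draw ring-conditioned negatives for one anchor from a candidate pool
anchor, pool = torch.randn(128), torch.randn(1024, 128)
negatives = ring_negatives(anchor, pool)
```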
- Understanding Negative Sampling in Graph Representation Learning [87.35038268508414]
We show that negative sampling is as important as positive sampling in determining the optimization objective and the resulting variance.
We propose MCNS, which approximates the positive distribution with a self-contrast approximation and accelerates negative sampling via Metropolis-Hastings.
We evaluate our method on 5 datasets that cover extensive downstream graph learning tasks, including link prediction, node classification and personalized recommendation.
arXiv Detail & Related papers (2020-05-20T06:25:21Z)
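A rough sketch of Metropolis-Hastings negative sampling in the spirit of the entry above; the unnormalized target density exp(similarity to the anchor) stands in for the self-contrast approximation and is an assumption on my part, as is the uniform proposal:

```python
import torch
import torch.nn.functional as F

def mh_negative_sampling(anchor, embeddings, num_neg=5, burn_in=50):
    """Metropolis-Hastings chain over candidate nodes. The unnormalized
    target density exp(similarity to the anchor) is an assumed stand-in
    for the self-contrast approximation of the positive distribution;
    the proposal is uniform, so it cancels in the acceptance ratio."""
    z = F.normalize(embeddings, dim=1)               # (n, d) node embeddings
    a = F.normalize(anchor, dim=0)                   # (d,) anchor embedding
    density = torch.exp(z @ a)                       # unnormalized target per node

    n = z.size(0)
    current = int(torch.randint(n, (1,)))
    negatives = []
    for step in range(burn_in + num_neg):
        proposal = int(torch.randint(n, (1,)))       # uniform proposal over all nodes
        accept = min(1.0, float(density[proposal] / density[current]))
        if float(torch.rand(1)) < accept:
            current = proposal
        if step >= burn_in:                          # collect post-burn-in states as negatives
            negatives.append(current)
    return embeddings[negatives]

# usage: sample negatives for node 0 from a graph's node embeddings
emb = torch.randn(500, 64)
neg = mh_negative_sampling(emb[0], emb)
```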