Adversarial Lagrangian Integrated Contrastive Embedding for Limited Size Datasets
- URL: http://arxiv.org/abs/2210.03261v1
- Date: Thu, 6 Oct 2022 23:59:28 GMT
- Title: Adversarial Lagrangian Integrated Contrastive Embedding for Limited Size Datasets
- Authors: Amin Jalali and Minho Lee
- Abstract summary: This study presents a novel adversarial Lagrangian integrated contrastive embedding (ALICE) method for small-sized datasets.
The accuracy improvement and training convergence of the proposed pre-trained adversarial transfer are shown.
A novel adversarial integrated contrastive model using various augmentation techniques is investigated.
- Score: 8.926248371832852
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Certain datasets contain a limited number of samples with highly varied styles and complex structures. This study presents a novel adversarial
Lagrangian integrated contrastive embedding (ALICE) method for small-sized
datasets. First, the accuracy improvement and training convergence of the
proposed pre-trained adversarial transfer are shown on various subsets of
datasets with few samples. Second, a novel adversarial integrated contrastive
model using various augmentation techniques is investigated. The proposed
structure considers the input samples with different appearances and generates
a superior representation with adversarial transfer contrastive training.
Finally, multi-objective augmented Lagrangian multipliers encourage low rank and sparsity in the presented adversarial contrastive embedding, adaptively estimating the regularizer coefficients at their optimal weights. The sparsity constraint suppresses less representative
elements in the feature space. The low-rank constraint eliminates trivial and
redundant components and enables superior generalization. The performance of the proposed model is verified through ablation studies on benchmark datasets in scenarios with few data samples.
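The loss described in the abstract combines a contrastive term over augmented and adversarial views with sparsity and low-rank regularizers whose weights are treated as Lagrangian multipliers. A minimal PyTorch sketch of that structure follows; it assumes an NT-Xent contrastive loss, an FGSM-style adversary, and plain dual-ascent multiplier updates, and every name and hyperparameter in it (nt_xent, alice_step, rho, eps) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch, assuming NT-Xent contrast, an FGSM-style adversarial view,
# and dual-ascent multiplier updates; all names/values are illustrative.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two batches of embeddings (2N views)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # (2N, d)
    sim = z @ z.t() / temperature                                # cosine similarity
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                   # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def alice_step(encoder, x, lam_sparse, lam_rank, rho=1e-3, eps=0.05):
    """One training step: contrast an augmented view against an adversarial
    view, then add multiplier-weighted sparsity and low-rank penalties."""
    x_aug = x + 0.1 * torch.randn_like(x)        # stand-in for a real augmentation
    x_adv = x.detach().clone().requires_grad_(True)
    z1 = encoder(x_aug)
    loss_c = nt_xent(z1, encoder(x_adv))
    grad, = torch.autograd.grad(loss_c, x_adv, retain_graph=True)
    z2 = encoder(x_adv.detach() + eps * grad.sign())             # FGSM-style view
    loss_c = nt_xent(z1, z2)
    s = z2.abs().mean()                                          # L1 sparsity surrogate
    r = torch.linalg.matrix_norm(z2, ord="nuc") / z2.shape[0]    # low-rank surrogate
    loss = loss_c + lam_sparse * s + lam_rank * r
    # Dual-ascent (augmented-Lagrangian-style) update of the two multipliers,
    # so the regularizer weights adapt instead of being hand-tuned.
    lam_sparse = lam_sparse + rho * float(s)
    lam_rank = lam_rank + rho * float(r)
    return loss, lam_sparse, lam_rank
```

In this reading, the dual-ascent step is what lets the sparsity and low-rank coefficients move toward suitable weights automatically, matching the abstract's claim of adaptive regularizer estimation.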
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- RGM: A Robust Generalizable Matching Model [49.60975442871967]
We propose a deep model for sparse and dense matching, termed RGM (Robust Generalist Matching).
To narrow the gap between synthetic training samples and real-world scenarios, we build a new, large-scale dataset with sparse correspondence ground truth.
We are able to mix up various dense and sparse matching datasets, significantly improving the training diversity.
arXiv Detail & Related papers (2023-10-18T07:30:08Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes; a generic sketch of this mixing step appears after this list.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
- Enhancing Representation Learning on High-Dimensional, Small-Size Tabular Data: A Divide and Conquer Method with Ensembled VAEs [7.923088041693465]
We present an ensemble of lightweight VAEs that learn posteriors over subsets of the feature space, which are aggregated into a joint posterior in a novel divide-and-conquer approach.
We show that our approach is robust to partial features at inference, exhibiting little performance degradation even with most features missing.
arXiv Detail & Related papers (2023-06-27T17:55:31Z)
- Imbalanced Classification via a Tabular Translation GAN [4.864819846886142]
We present a model based on Generative Adversarial Networks which uses additional regularization losses to map majority samples to corresponding synthetic minority samples.
We show that the proposed method improves average precision when compared to alternative re-weighting and oversampling techniques.
arXiv Detail & Related papers (2022-04-19T06:02:53Z)
- CAFE: Learning to Condense Dataset by Aligning Features [72.99394941348757]
We propose a novel scheme to Condense dataset by Aligning FEatures (CAFE).
At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales.
We validate the proposed CAFE across various datasets, and demonstrate that it generally outperforms the state of the art.
arXiv Detail & Related papers (2022-03-03T05:58:49Z)
- Heterogeneous Contrastive Learning [45.93509060683946]
We propose a unified heterogeneous learning framework, which combines weighted unsupervised contrastive loss and weighted supervised contrastive loss.
Experimental results on real-world data sets demonstrate the effectiveness and the efficiency of the proposed method.
arXiv Detail & Related papers (2021-05-19T21:01:41Z)
- Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning [94.35586521144117]
We investigate whether applying contrastive learning to fine-tuning would bring further benefits.
We propose Contrast-regularized tuning (Core-tuning), a novel approach for fine-tuning contrastive self-supervised visual models.
arXiv Detail & Related papers (2021-02-12T16:31:24Z)
- A Multi-criteria Approach for Fast and Outlier-aware Representative Selection from Manifolds [1.5469452301122175]
MOSAIC is a novel approach for selecting representatives from high-dimensional data that may exhibit non-linear structures.
Our method advances a multi-criteria selection approach that maximizes the global representation power of the sampled subset.
MOSAIC is shown to achieve all the desired characteristics of a representative subset at once.
arXiv Detail & Related papers (2020-03-12T19:31:10Z)
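The minority/majority mixing idea summarized in "Tackling Diverse Minorities in Imbalanced Classification" above can be illustrated with a short, generic sketch. This is plain mixup-style oversampling under assumed details (Beta-distributed weights folded toward the minority point); mix_minority and its parameters are hypothetical, not the paper's actual algorithm.

```python
# Generic mixup-style oversampling sketch, assuming Beta-distributed mixing
# weights biased toward the minority class; not the paper's exact procedure.
import numpy as np

def mix_minority(x_min, x_maj, n_new, alpha=0.75, rng=None):
    """Create synthetic minority samples as convex combinations of a random
    minority sample and a random majority sample."""
    rng = rng or np.random.default_rng(0)
    i = rng.integers(0, len(x_min), size=n_new)
    j = rng.integers(0, len(x_maj), size=n_new)
    lam = rng.beta(alpha, alpha, size=(n_new, 1))
    lam = np.maximum(lam, 1.0 - lam)   # fold so the minority point dominates
    return lam * x_min[i] + (1.0 - lam) * x_maj[j]

# Example: augment a 10-sample minority class drawn from 5-D features.
x_min = np.random.randn(10, 5)
x_maj = np.random.randn(200, 5)
synthetic = mix_minority(x_min, x_maj, n_new=50)
print(synthetic.shape)  # (50, 5)
```

Folding the mixing weight toward the minority sample keeps the synthetic points on the minority side of the decision region while still borrowing variation from the majority class.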
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.