TwinTURBO: Semi-Supervised Fine-Tuning of Foundation Models via Mutual Information Decompositions for Downstream Task and Latent Spaces
- URL: http://arxiv.org/abs/2503.07851v1
- Date: Mon, 10 Mar 2025 20:56:54 GMT
- Title: TwinTURBO: Semi-Supervised Fine-Tuning of Foundation Models via Mutual Information Decompositions for Downstream Task and Latent Spaces
- Authors: Guillaume Quétant, Pavlo Molchanov, Slava Voloshynovskiy
- Abstract summary: We present a semi-supervised fine-tuning framework to address the challenges of training with a limited amount of labelled data. Experiments on several datasets demonstrate significant improvements in classification tasks under extremely low-labelled conditions.
- Score: 10.86297454943578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a semi-supervised fine-tuning framework for foundation models that utilises a mutual information decomposition to address the challenges of training with a limited amount of labelled data. Our approach derives two distinct lower bounds: i) for the downstream task space, such as classification, optimised using conditional and marginal cross-entropy alongside Kullback-Leibler divergence, and ii) for the latent space representation, regularised and aligned using a contrastive-like decomposition. This fine-tuning strategy retains the pre-trained structure of the foundation model, modifying only a specialised projector module comprising a small transformer and a token aggregation technique. Experiments on several datasets demonstrate significant improvements in classification tasks under extremely low-labelled conditions by effectively leveraging unlabelled data.
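Since the abstract describes the objective only at a high level, the PyTorch sketch below illustrates one plausible reading: a frozen backbone feeds a small transformer projector with token aggregation, trained with a conditional cross-entropy on labelled data, a marginal KL term on unlabelled data, and a contrastive-like latent alignment. The module names, the mean-pooling aggregation, the uniform class prior and the equal loss weighting are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Projector(nn.Module):
    """Small transformer over frozen backbone tokens, with mean-pooling
    token aggregation (the paper's exact aggregation may differ)."""
    def __init__(self, dim=768, n_classes=10, n_layers=2, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, tokens):                   # tokens: (B, T, dim)
        z = self.encoder(tokens).mean(dim=1)     # aggregate tokens -> (B, dim)
        return z, self.head(z)                   # latent and class logits

def twinturbo_losses(logits_lab, y, logits_unlab, z_a, z_b, tau=0.1):
    # i) task space: conditional cross-entropy on the labelled batch ...
    l_cond = F.cross_entropy(logits_lab, y)
    # ... plus a marginal term keeping the average unlabelled prediction
    # close to a uniform class prior: KL(marginal || prior)
    p_marg = F.softmax(logits_unlab, dim=1).mean(dim=0)
    prior = torch.full_like(p_marg, 1.0 / p_marg.numel())
    l_marg = (p_marg * (p_marg.log() - prior.log())).sum()
    # ii) latent space: InfoNCE-style alignment of two views of the latents
    za, zb = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    sims = za @ zb.t() / tau
    l_lat = F.cross_entropy(sims, torch.arange(len(za), device=za.device))
    return l_cond + l_marg + l_lat               # equal weights assumed
```

In this reading, only `Projector` receives gradients; the foundation model's token outputs are treated as fixed inputs, which matches the claim that the pre-trained structure is retained.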
Related papers
- Semi-Supervised Fine-Tuning of Vision Foundation Models with Content-Style Decomposition [4.192370959537781]
We present a semi-supervised fine-tuning approach designed to improve the performance of pre-trained foundation models on downstream tasks with limited labeled data.
We evaluate our approach on multiple datasets, including MNIST, its augmented variations, CIFAR-10, SVHN, and GalaxyMNIST.
arXiv Detail & Related papers (2024-10-02T22:36:12Z)
- Enabling Tensor Decomposition for Time-Series Classification via A Simple Pseudo-Laplacian Contrast [26.28414569796961]
We propose a novel Pseudo Laplacian Contrast (PLC) tensor decomposition framework.
It integrates data augmentation and a cross-view Laplacian to enable the extraction of class-aware representations.
Experiments on various datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-09-23T16:48:13Z)
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit these approaches through the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
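The Gromov-Wasserstein primitive this framework builds on can be sketched with the POT library: below, a coupling between a high-dimensional point cloud and a handful of fixed low-dimensional points is read as a soft cluster assignment. This is only the building block, not the paper's joint optimisation (which also learns the reduced points); names and sizes are illustrative.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))   # high-dimensional input points
Y = rng.normal(size=(5, 2))      # a few fixed low-dimensional points

# GW compares intra-space distance matrices, so spaces of different
# dimensionality can be coupled directly.
C1, C2 = ot.dist(X, X), ot.dist(Y, Y)
C1, C2 = C1 / C1.max(), C2 / C2.max()
p, q = ot.unif(len(X)), ot.unif(len(Y))

T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss')
clusters = T.argmax(axis=1)      # read the soft coupling as cluster labels
```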
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- Mode-wise Principal Subspace Pursuit and Matrix Spiked Covariance Model [13.082805815235975]
We introduce a novel framework called Mode-wise Principal Subspace Pursuit (MOP-UP) to extract hidden variations in both the row and column dimensions for matrix data.
The effectiveness and practical merits of the proposed framework are demonstrated through experiments on both simulated and real datasets.
arXiv Detail & Related papers (2023-07-02T13:59:47Z)
- Protein Design with Guided Discrete Diffusion [67.06148688398677]
A popular approach to protein design is to combine a generative model with a discriminative model for conditional sampling.
We propose diffusioN Optimized Sampling (NOS), a guidance method for discrete diffusion models.
NOS makes it possible to perform design directly in sequence space, circumventing significant limitations of structure-based methods.
arXiv Detail & Related papers (2023-05-31T16:31:24Z)
- Multi-View Clustering via Semi-non-negative Tensor Factorization [120.87318230985653]
We develop a novel multi-view clustering method based on semi-non-negative tensor factorization (Semi-NTF).
Our model directly considers the between-view relationship and exploits between-view complementary information.
In addition, we provide an optimization algorithm for the proposed method and prove mathematically that the algorithm always converges to the stationary KKT point.
arXiv Detail & Related papers (2023-03-29T14:54:19Z)
- Adversarial Lagrangian Integrated Contrastive Embedding for Limited Size Datasets [8.926248371832852]
This study presents a novel adversarial Lagrangian integrated contrastive embedding (ALICE) method for small-sized datasets.
A novel adversarial integrated contrastive model using various augmentation techniques is investigated.
The accuracy improvement and training convergence of the proposed pre-trained adversarial transfer are demonstrated.
arXiv Detail & Related papers (2022-10-06T23:59:28Z)
- Few Shot Generative Model Adaption via Relaxed Spatial Structural Alignment [130.84010267004803]
Training a generative adversarial network (GAN) with limited data has been a challenging task.
A feasible solution is to start with a GAN well-trained on a large-scale source domain and adapt it to the target domain with a few samples, termed few-shot generative model adaptation.
We propose a relaxed spatial structural alignment method to calibrate the target generative models during the adaption.
arXiv Detail & Related papers (2022-03-06T14:26:25Z)
- Supervised Contrastive Learning for Pre-trained Language Model Fine-tuning [23.00300794016583]
State-of-the-art natural language understanding classification models follow a two-stage pipeline of pre-training and fine-tuning.
We propose a supervised contrastive learning (SCL) objective for the fine-tuning stage.
Our proposed fine-tuning objective leads to models that are more robust to different levels of noise in the fine-tuning training data.
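As a hedged sketch of what such a supervised contrastive fine-tuning term typically looks like (following the common SupCon formulation; the paper's exact objective, masking and temperature may differ):

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(emb, labels, tau=0.3):
    """Pull together embeddings that share a label, push apart the rest
    (an illustrative sketch, not the paper's exact code)."""
    z = F.normalize(emb, dim=1)                        # (B, d) unit vectors
    sim = z @ z.t() / tau                              # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))    # drop self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    # average log-probability over each anchor's positives
    per_anchor = (log_prob.masked_fill(self_mask, 0.0) * pos.float()).sum(1)
    return -(per_anchor / pos.sum(1).clamp(min=1)).mean()
```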
arXiv Detail & Related papers (2020-11-03T01:10:39Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
- Ensemble Model with Batch Spectral Regularization and Data Blending for Cross-Domain Few-Shot Learning with Unlabeled Data [75.94147344921355]
We build a multi-branch ensemble framework by using diverse feature transformation matrices.
We propose a data blending method to exploit the unlabeled data and augment the sparse support set in the target domain.
arXiv Detail & Related papers (2020-06-08T02:27:34Z)
- Deep Semantic Matching with Foreground Detection and Cycle-Consistency [103.22976097225457]
We address weakly supervised semantic matching using a deep network.
We explicitly estimate the foreground regions to suppress the effect of background clutter.
We develop cycle-consistent losses to enforce the predicted transformations across multiple images to be geometrically plausible and consistent.
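A minimal sketch of a cycle-consistency penalty of this kind, assuming the predicted transformations are batched 3x3 homogeneous matrices (the paper's actual parameterisation of geometric transformations may differ):

```python
import torch

def cycle_consistency_loss(T_ab, T_bc, T_ca):
    """Composing predicted transformations around a cycle A->B->C->A
    should give the identity; penalise the deviation."""
    comp = T_ca @ T_bc @ T_ab                          # (B, 3, 3) composition
    eye = torch.eye(3, device=comp.device).expand_as(comp)
    return (comp - eye).pow(2).mean()
```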
arXiv Detail & Related papers (2020-03-31T22:38:09Z)