Unlocking the Transferability of Tokens in Deep Models for Tabular Data
- URL: http://arxiv.org/abs/2310.15149v1
- Date: Mon, 23 Oct 2023 17:53:09 GMT
- Title: Unlocking the Transferability of Tokens in Deep Models for Tabular Data
- Authors: Qi-Le Zhou, Han-Jia Ye, Le-Ye Wang, De-Chuan Zhan
- Abstract summary: Fine-tuning a pre-trained deep neural network has become a successful paradigm in various machine learning tasks.
In this paper, we propose TabToken, a method that aims to enhance the quality of feature tokens.
We introduce a contrastive objective that regularizes the tokens, capturing the semantics within and across features.
- Score: 67.11727608815636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fine-tuning a pre-trained deep neural network has become a successful
paradigm in various machine learning tasks. However, such a paradigm becomes
particularly challenging with tabular data when there are discrepancies between
the feature sets of pre-trained models and the target tasks. In this paper, we
propose TabToken, a method that aims to enhance the quality of feature tokens
(i.e., embeddings of tabular features). TabToken allows for the utilization of
pre-trained models when the upstream and downstream tasks share overlapping
features, facilitating model fine-tuning even with limited training examples.
Specifically, we introduce a contrastive objective that regularizes the tokens,
capturing the semantics within and across features. During the pre-training
stage, the tokens are learned jointly with top-layer deep models such as
transformers. In the downstream task, tokens of the shared features are kept
fixed while TabToken efficiently fine-tunes the remaining parts of the model.
TabToken not only enables knowledge transfer from a pre-trained model to tasks
with heterogeneous features, but also enhances the discriminative ability of
deep tabular models in standard classification and regression tasks.
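Below is a minimal sketch of the transfer recipe the abstract describes, assuming a purely categorical feature tokenizer, a small transformer backbone, and a supervised-contrastive (InfoNCE-style) regularizer on the tokens; the class names, hyperparameters, and the exact form of the contrastive objective are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureTokenizer(nn.Module):
    """Maps each (integer-coded) categorical feature value to a d-dim token."""

    def __init__(self, cardinalities, d_token):
        super().__init__()
        self.embeddings = nn.ModuleList(nn.Embedding(c, d_token) for c in cardinalities)

    def forward(self, x):                        # x: (batch, n_features)
        tokens = [emb(x[:, i]) for i, emb in enumerate(self.embeddings)]
        return torch.stack(tokens, dim=1)        # (batch, n_features, d_token)


def token_contrastive_loss(tokens, labels, temperature=0.1):
    """Illustrative supervised-contrastive regularizer on averaged feature tokens:
    same-class samples are pulled together, different classes pushed apart."""
    z = F.normalize(tokens.mean(dim=1), dim=-1)              # (batch, d_token)
    sim = z @ z.t() / temperature                            # pairwise similarities
    not_self = ~torch.eye(len(labels), dtype=torch.bool)
    positive = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    log_prob = sim - sim.masked_fill(~not_self, float("-inf")).logsumexp(1, keepdim=True)
    return -(log_prob * positive).sum(1).div(positive.sum(1).clamp(min=1)).mean()


# --- Pre-training: feature tokens are learned jointly with the transformer backbone.
upstream_tok = FeatureTokenizer(cardinalities=[10, 7, 5, 3], d_token=32)
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True), num_layers=2
)
head = nn.Linear(32, 2)

x_up = torch.randint(0, 3, (16, 4))              # toy upstream batch (4 features)
y_up = torch.randint(0, 2, (16,))
tokens = upstream_tok(x_up)
logits = head(backbone(tokens).mean(dim=1))
loss = F.cross_entropy(logits, y_up) + 0.1 * token_contrastive_loss(tokens, y_up)
loss.backward()

# --- Downstream: tokens of the two overlapping features are copied and frozen,
#     while the remaining parts of the model are fine-tuned on the small target task.
downstream_tok = FeatureTokenizer(cardinalities=[10, 7, 6], d_token=32)
for i in (0, 1):                                 # features shared with the upstream task
    downstream_tok.embeddings[i].load_state_dict(upstream_tok.embeddings[i].state_dict())
    for p in downstream_tok.embeddings[i].parameters():
        p.requires_grad_(False)                  # shared-feature tokens stay fixed
```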
Related papers
- Semformer: Transformer Language Models with Semantic Planning [18.750863564495006]
Next-token prediction serves as the dominant component in current neural language models.
We introduce Semformer, a novel method for training a Transformer language model that explicitly models the semantic planning of the response.
arXiv Detail & Related papers (2024-09-17T12:54:34Z)
- Making Pre-trained Language Models Great on Tabular Prediction [50.70574370855663]
The transferability of deep neural networks (DNNs) has made significant progress in image and language processing.
We present TP-BERTa, a specifically pre-trained LM for tabular data prediction.
A novel relative magnitude tokenization converts scalar numerical feature values to finely discrete, high-dimensional tokens, and an intra-feature attention approach integrates feature values with the corresponding feature names (a generic binning sketch appears after this list).
arXiv Detail & Related papers (2024-03-04T08:38:56Z)
- Match me if you can: Semi-Supervised Semantic Correspondence Learning with Unpaired Images [76.47980643420375]
This paper builds on the hypothesis that learning semantic correspondences is inherently data-hungry.
We demonstrate that a simple machine annotator can reliably enrich paired keypoints via machine supervision.
Our models surpass current state-of-the-art models on semantic correspondence learning benchmarks like SPair-71k, PF-PASCAL, and PF-WILLOW.
arXiv Detail & Related papers (2023-11-30T13:22:15Z)
- ReConTab: Regularized Contrastive Representation Learning for Tabular Data [8.178223284255791]
We introduce ReConTab, a deep automatic representation learning framework with regularized contrastive learning.
Agnostic to any type of modeling task, ReConTab constructs an asymmetric autoencoder based on the same raw features from model inputs.
Experiments conducted on extensive real-world datasets substantiate the framework's capacity to yield substantial and robust performance improvements.
arXiv Detail & Related papers (2023-10-28T00:05:28Z)
- Distinguishability Calibration to In-Context Learning [31.375797763897104]
We propose a method to map a PLM-encoded embedding into a new metric space to guarantee the distinguishability of the resulting embeddings.
We also take advantage of hyperbolic embeddings to capture the hierarchical relations among fine-grained class-associated token embeddings.
arXiv Detail & Related papers (2023-02-13T09:15:00Z)
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ (a rough sketch appears after this list).
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- Few-Shot Learning with Siamese Networks and Label Tuning [5.006086647446482]
We show that with proper pre-training, Siamese Networks that embed texts and labels offer a competitive alternative.
We introduce label tuning, a simple and computationally efficient approach that adapts the models in a few-shot setup by changing only the label embeddings.
arXiv Detail & Related papers (2022-03-28T11:16:46Z)
- Token Dropping for Efficient BERT Pretraining [33.63507016806947]
We develop a simple but effective "token dropping" method to accelerate the pretraining of transformer models.
We leverage the already built-in masked language modeling (MLM) loss to identify unimportant tokens with practically no computational overhead.
This simple approach reduces the pretraining cost of BERT by 25% while achieving similar overall fine-tuning performance on standard downstream tasks.
arXiv Detail & Related papers (2022-03-24T17:50:46Z)
- Few-shot Sequence Learning with Transformers [79.87875859408955]
Few-shot algorithms aim at learning new tasks provided only a handful of training examples.
In this work we investigate few-shot learning in the setting where the data points are sequences of tokens.
We propose an efficient learning algorithm based on Transformers.
arXiv Detail & Related papers (2020-12-17T12:30:38Z)
- Train No Evil: Selective Masking for Task-Guided Pre-Training [97.03615486457065]
We propose a three-stage framework by adding a task-guided pre-training stage with selective masking between general pre-training and fine-tuning.
We show that our method can achieve comparable or even better performance at less than 50% of the cost.
arXiv Detail & Related papers (2020-04-21T03:14:22Z)
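For the TP-BERTa entry above, the summary mentions converting scalar numerical values into discrete, high-dimensional tokens. The snippet below is not the paper's relative magnitude tokenization; it is a generic quantile-binning stand-in (the bin count, toy feature distribution, and function names are assumptions) that only illustrates mapping magnitudes onto a token vocabulary.

```python
import numpy as np

rng = np.random.default_rng(0)
train_values = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)   # toy numeric feature

n_bins = 256                                                     # assumed token vocabulary size
bin_edges = np.quantile(train_values, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])

def to_token_id(value: float) -> int:
    """Map a scalar to one of n_bins magnitude tokens; the id can then be
    looked up in an embedding table like any other (sub)word token."""
    return int(np.digitize(value, bin_edges))

print(to_token_id(0.05), to_token_id(1.0), to_token_id(20.0))    # small < medium < large id
```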
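For the "Transfer Learning with Deep Tabular Models" entry, here is a rough sketch of one plausible reading of the pseudo-feature idea: a predictor fitted on the downstream table imputes a feature that is missing upstream, so the two tables end up with matching feature sets before pre-training and fine-tuning. The column names, toy data, and the choice of RandomForestRegressor are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
shared = ["age", "blood_pressure"]                  # features present in both tables
upstream = pd.DataFrame(rng.normal(size=(500, 2)), columns=shared)
downstream = pd.DataFrame(rng.normal(size=(80, 3)), columns=shared + ["new_marker"])

# 1) Learn new_marker from the shared features on the (small) downstream table.
imputer = RandomForestRegressor(n_estimators=100, random_state=0)
imputer.fit(downstream[shared], downstream["new_marker"])

# 2) Add the pseudo-feature to the upstream table; the feature sets now match,
#    so one tabular model can be pre-trained upstream and fine-tuned downstream.
upstream["new_marker"] = imputer.predict(upstream[shared])
print(upstream.columns.tolist())                    # ['age', 'blood_pressure', 'new_marker']
```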
This list is automatically generated from the titles and abstracts of the papers in this site.