Training-Free Generalization on Heterogeneous Tabular Data via Meta-Representation
- URL: http://arxiv.org/abs/2311.00055v1
- Date: Tue, 31 Oct 2023 18:03:54 GMT
- Title: Training-Free Generalization on Heterogeneous Tabular Data via Meta-Representation
- Authors: Han-Jia Ye, Qi-Le Zhou, De-Chuan Zhan
- Abstract summary: We propose Tabular data Pre-Training via Meta-representation (TabPTM), which standardizes heterogeneous tabular datasets by representing each instance through its distances to a fixed number of prototypes.
A deep neural network is then trained to associate these meta-representations with dataset-specific classification confidences.
Experiments validate that TabPTM achieves promising performance on new datasets, even under few-shot scenarios.
- Score: 67.30538142519067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tabular data is prevalent across various machine learning domains. Yet, the inherent heterogeneity of attribute and class spaces across tabular datasets hinders the effective sharing of knowledge, preventing a tabular model from benefiting from other datasets. In this paper, we propose Tabular data Pre-Training via Meta-representation (TabPTM), which pre-trains one tabular model on a set of heterogeneous datasets. The pre-trained model can then be applied directly to unseen datasets with diverse attributes and classes, without additional training. Specifically, TabPTM represents an instance through its distances to a fixed number of prototypes, thereby standardizing heterogeneous tabular datasets. A deep neural network is then trained to associate these meta-representations with dataset-specific classification confidences, endowing TabPTM with the ability to generalize without training. Experiments validate that TabPTM achieves promising performance on new datasets, even under few-shot scenarios.
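The fixed-length meta-representation is what lets a single network serve many tables: an instance from any dataset, whatever its attribute count, is mapped to a vector of distances to a fixed number of prototypes. The PyTorch sketch below illustrates this idea; the distance metric, prototype construction, and network sizes are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

def meta_representation(x, prototypes, k=8):
    """Map one instance to a fixed-length vector of its sorted
    distances to a set of prototypes, standardizing datasets with
    different attribute counts. (Illustrative; the paper's exact
    distance and prototype choices may differ.)"""
    d = torch.cdist(x.unsqueeze(0), prototypes).squeeze(0)  # (num_prototypes,)
    return torch.sort(d).values[:k]  # fixed length k, regardless of table width

# A shared scorer maps any k-dim meta-representation to a confidence,
# so it can be pre-trained once and reused on unseen datasets.
scorer = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))

x = torch.randn(5)               # an instance from a 5-attribute table (toy data)
prototypes = torch.randn(10, 5)  # 10 prototypes in the same attribute space
confidence = scorer(meta_representation(x, prototypes))
```

Because the scorer only ever sees fixed-length distance vectors, it can be pre-trained on heterogeneous datasets and then applied to a new table without gradient updates.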
Related papers
- Zero-shot Meta-learning for Tabular Prediction Tasks with Adversarially Pre-trained Transformer [2.1677183904102257]
We present an Adversarially Pre-trained Transformer (APT) that is able to perform zero-shot meta-learning on tabular prediction tasks without pre-training on any real-world dataset.
APT is pre-trained with adversarial synthetic data agents, which deliberately challenge the model with different synthetic datasets.
We show that our framework matches state-of-the-art performance on small classification tasks without filtering on dataset characteristics.
arXiv Detail & Related papers (2025-02-06T23:58:11Z)
- Cross-Table Pretraining towards a Universal Function Space for Heterogeneous Tabular Data [35.61663559675556]
Cross-dataset pretraining has shown notable success in various fields.
In this study, we introduce a cross-table pretrained Transformer, XTFormer, for versatile downstream tabular prediction tasks.
We pretrain XTFormer to establish a "meta-function" space that encompasses all potential feature-target mappings.
arXiv Detail & Related papers (2024-06-01T03:24:31Z)
- Tabular Few-Shot Generalization Across Heterogeneous Feature Spaces [43.67453625260335]
We propose FLAT, a novel approach to few-shot learning that shares knowledge between datasets with heterogeneous feature spaces.
FLAT learns low-dimensional embeddings of datasets and their individual columns, which facilitate knowledge transfer and generalization to previously unseen datasets.
A decoder network parametrizes the predictive target network, implemented as a Graph Attention Network, to accommodate the heterogeneous nature of tabular datasets.
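To make the "decoder parametrizes the target network" idea concrete, here is a minimal hypernetwork sketch in PyTorch; the sizes, the linear decoder, and the linear target predictor are hypothetical stand-ins for FLAT's actual dataset encoder and Graph Attention Network.

```python
import torch
import torch.nn as nn

EMB, IN, OUT = 16, 4, 2  # hypothetical embedding / feature / class sizes

# The decoder emits the weights and biases of a small target predictor.
decoder = nn.Linear(EMB, IN * OUT + OUT)

def predict(x, dataset_emb):
    params = decoder(dataset_emb)
    W = params[: IN * OUT].view(OUT, IN)  # dataset-conditioned weights
    b = params[IN * OUT:]                 # dataset-conditioned biases
    return x @ W.t() + b

x = torch.randn(3, IN)  # a few support instances (toy data)
z = torch.randn(EMB)    # a learned dataset embedding (assumed given)
logits = predict(x, z)  # (3, OUT)
```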
arXiv Detail & Related papers (2023-11-16T17:45:59Z)
- M-Tuning: Prompt Tuning with Mitigated Label Bias in Open-Set Scenarios [58.617025733655005]
We propose a vision-language prompt tuning method with mitigated label bias (M-Tuning).
It introduces open words from WordNet to extend the prompt texts beyond the closed-set label words, so that prompts are tuned in a simulated open-set scenario.
Our method achieves the best performance on datasets of various scales, and extensive ablation studies validate its effectiveness.
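As a rough illustration of the open-word extension, the sketch below pads a closed label set with nouns sampled from WordNet via NLTK; the label words, prompt template, and sample size are hypothetical, and the authors' pipeline tunes learned prompt vectors rather than raw strings.

```python
import random
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

closed_set_labels = ["cat", "dog", "car"]  # hypothetical closed-set label words

# Sample open-vocabulary nouns from WordNet to simulate an open set.
open_words = [s.lemmas()[0].name() for s in wn.all_synsets("n")]
open_words = random.sample(open_words, 100)

# Prompts now cover closed-set labels plus open words, so tuning sees
# a simulated open-set scenario rather than only known classes.
prompts = [f"a photo of a {w}" for w in closed_set_labels + open_words]
```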
arXiv Detail & Related papers (2023-03-09T09:05:47Z)
- Is margin all you need? An extensive empirical study of active learning on tabular data [66.18464006872345]
We analyze the performance of a variety of active learning algorithms on 69 real-world datasets from the OpenML-CC18 benchmark.
Surprisingly, we find that the classical margin sampling technique matches or outperforms all others, including the current state of the art.
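Margin sampling itself is a few lines: rank unlabeled points by the gap between their two highest predicted class probabilities and query the smallest gaps first. A minimal NumPy version (the array shapes and toy batch are illustrative):

```python
import numpy as np

def margin_sampling(probs, budget):
    """Classical margin sampling: query the unlabeled points whose top
    two predicted class probabilities are closest (smallest margin).
    `probs` is an (n_samples, n_classes) array of model predictions."""
    srt = np.sort(probs, axis=1)
    margin = srt[:, -1] - srt[:, -2]   # top-1 minus top-2 probability
    return np.argsort(margin)[:budget]  # most ambiguous points first

probs = np.array([[0.5, 0.4, 0.1], [0.9, 0.05, 0.05]])
print(margin_sampling(probs, budget=1))  # -> [0]
```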
arXiv Detail & Related papers (2022-10-07T21:18:24Z)
- Learning Enhanced Representations for Tabular Data via Neighborhood Propagation [24.485479610138498]
We construct a hypergraph to model the cross-row and cross-column patterns of data instances.
We then perform message propagation to enhance the target data instance representation.
Experiments on two important data prediction tasks validate the superiority of the proposed PET model.
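A toy version of hypergraph message propagation, with an assumed incidence matrix rather than PET's learned construction: instance features are averaged into each hyperedge and then propagated back to the instances that hyperedge connects.

```python
import numpy as np

H = np.array([[1, 0],      # instance 0 belongs to hyperedge 0
              [1, 1],      # instance 1 belongs to both hyperedges
              [0, 1]])     # instance 2 belongs to hyperedge 1
X = np.random.randn(3, 4)  # instance features (toy data)

edge_msg = H.T @ X / H.sum(axis=0, keepdims=True).T  # aggregate to hyperedges
X_new = H @ edge_msg / H.sum(axis=1, keepdims=True)  # propagate back to instances
```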
arXiv Detail & Related papers (2022-06-14T04:24:52Z)
- SubTab: Subsetting Features of Tabular Data for Self-Supervised Representation Learning [5.5616364225463055]
We introduce a new framework, Subsetting features of Tabular data (SubTab).
We argue that reconstructing the data from the subset of its features rather than its corrupted version in an autoencoder setting can better capture its underlying representation.
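The core training signal can be sketched in a few lines of PyTorch: encode a random subset of columns and reconstruct the full row. The layer sizes, single random subset, and plain MSE loss are simplifying assumptions (SubTab itself divides features into multiple, possibly overlapping subsets and combines further losses).

```python
import torch
import torch.nn as nn

n_features, subset = 20, 5
encoder = nn.Sequential(nn.Linear(subset, 32), nn.ReLU())
decoder = nn.Linear(32, n_features)

x = torch.randn(64, n_features)            # a batch of tabular rows (toy data)
idx = torch.randperm(n_features)[:subset]  # a random feature subset
x_hat = decoder(encoder(x[:, idx]))        # reconstruct ALL features from the subset
loss = nn.functional.mse_loss(x_hat, x)
loss.backward()
```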
arXiv Detail & Related papers (2021-10-08T20:11:09Z)
- Simple multi-dataset detection [83.9604523643406]
We present a simple method for training a unified detector on multiple large-scale datasets.
We show how to automatically integrate dataset-specific outputs into a common semantic taxonomy.
Our approach does not require manual taxonomy reconciliation.
arXiv Detail & Related papers (2021-02-25T18:55:58Z)
- i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning [117.63815437385321]
We propose i-Mix, a simple yet effective domain-agnostic regularization strategy for improving contrastive representation learning.
In experiments, we demonstrate that i-Mix consistently improves the quality of learned representations across domains.
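The mechanism is ordinary mixup applied to a contrastive batch, where each instance's "virtual label" is its own one-hot identity. A short PyTorch sketch, with the Beta prior and toy batch as assumed defaults:

```python
import torch

def i_mix(x, alpha=1.0):
    """Mix a batch and its virtual (one-hot instance) labels with a
    single mixup coefficient drawn from Beta(alpha, alpha)."""
    n = x.size(0)
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(n)
    x_mix = lam * x + (1 - lam) * x[perm]
    y_virtual = torch.eye(n)  # each instance is its own class
    y_mix = lam * y_virtual + (1 - lam) * y_virtual[perm]
    return x_mix, y_mix

x_mix, y_mix = i_mix(torch.randn(8, 128))  # toy batch of 8 inputs
```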
arXiv Detail & Related papers (2020-10-17T23:32:26Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
Because the resulting dataset is large, we apply a dataset distillation strategy to compress it into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)