Binning as a Pretext Task: Improving Self-Supervised Learning in Tabular Domains
- URL: http://arxiv.org/abs/2405.07414v2
- Date: Tue, 14 May 2024 01:29:37 GMT
- Title: Binning as a Pretext Task: Improving Self-Supervised Learning in Tabular Domains
- Authors: Kyungeun Lee, Ye Seul Sim, Hye-Seung Cho, Moonjung Eo, Suhee Yoon, Sanghyu Yoon, Woohyung Lim
- Abstract summary: We propose a novel pretext task based on the classical binning method.
The idea is straightforward: reconstructing the bin indices (either orders or classes) rather than the original values.
Our empirical investigations ascertain several advantages of binning.
- Score: 0.565395466029518
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability of deep networks to learn superior representations hinges on leveraging the proper inductive biases, considering the inherent properties of datasets. In tabular domains, it is critical to effectively handle heterogeneous features (both categorical and numerical) in a unified manner and to grasp irregular functions like piecewise constant functions. To address the challenges in the self-supervised learning framework, we propose a novel pretext task based on the classical binning method. The idea is straightforward: reconstructing the bin indices (either orders or classes) rather than the original values. This pretext task provides the encoder with an inductive bias to capture the irregular dependencies, mapping from continuous inputs to discretized bins, and mitigates the feature heterogeneity by setting all features to have category-type targets. Our empirical investigations ascertain several advantages of binning: capturing the irregular function, compatibility with encoder architecture and additional modifications, standardizing all features into equal sets, grouping similar values within a feature, and providing ordering information. Comprehensive evaluations across diverse tabular datasets corroborate that our method consistently improves tabular representation learning performance for a wide range of downstream tasks. The code is available at https://github.com/kyungeun-lee/tabularbinning.
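The pretext task is simple enough to sketch end to end. Below is a minimal, hypothetical NumPy/PyTorch illustration (not the authors' implementation, which lives in the repository above): numerical features are discretized into quantile bins, and a per-feature classification head reconstructs the bin indices from the encoder representation. All names, the quantile-binning choice, and the averaged cross-entropy are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def quantile_bin_indices(x: np.ndarray, n_bins: int = 10) -> np.ndarray:
    """Discretize each numerical column into quantile bins; return bin indices."""
    # Interior quantile edges per feature: shape (n_bins - 1, n_features).
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1], axis=0)
    # Bin index of a value = number of interior edges at or below it.
    return np.stack([np.searchsorted(edges[:, j], x[:, j], side="right")
                     for j in range(x.shape[1])], axis=1)

class BinReconstructionHead(nn.Module):
    """One n_bins-way classifier per feature, so every feature ends up with a
    category-type reconstruction target."""
    def __init__(self, d_repr: int, n_features: int, n_bins: int):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(d_repr, n_bins) for _ in range(n_features))

    def forward(self, z):  # z: (batch, d_repr) from any tabular encoder
        return [head(z) for head in self.heads]  # per-feature bin logits

def binning_pretext_loss(logits, bin_targets):
    """Cross-entropy over bin indices, averaged across features.
    bin_targets: (batch, n_features) int64, e.g. from quantile_bin_indices."""
    return sum(nn.functional.cross_entropy(lg, bin_targets[:, j])
               for j, lg in enumerate(logits)) / len(logits)
```

A usage sketch: compute `quantile_bin_indices` once on the training split, convert via `torch.as_tensor(idx, dtype=torch.long)`, and train any encoder against `binning_pretext_loss` instead of a raw-value reconstruction loss.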
Related papers
- TabSeq: A Framework for Deep Learning on Tabular Data via Sequential Ordering [5.946579489162407]
This work introduces TabSeq, a novel framework for the sequential ordering of features.
Finding an optimal ordering of such features can improve the learning process of deep models.
arXiv Detail & Related papers (2024-10-17T04:10:36Z)
- A Fixed-Point Approach to Unified Prompt-Based Counting [51.20608895374113]
This paper aims to establish a comprehensive prompt-based counting framework capable of generating density maps for objects indicated by various prompt types, such as box, point, and text.
Our model excels on prominent class-agnostic datasets and exhibits superior performance in cross-dataset adaptation tasks.
arXiv Detail & Related papers (2024-03-15T12:05:44Z)
- Enhancing Neural Subset Selection: Integrating Background Information into Set Representations [53.15923939406772]
We show that when the target value is conditioned on both the input set and subset, it is essential to incorporate an invariant sufficient statistic of the superset into the subset of interest.
This ensures that the output value remains invariant to permutations of the subset and its corresponding superset, enabling identification of the specific superset from which the subset originated.
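The mechanism lends itself to a short sketch. Below is a minimal, hypothetical PyTorch reading using DeepSets-style pooling: a permutation-invariant statistic of the superset is concatenated with an invariant summary of the subset before prediction. All names and architectural choices are assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class SubsetScorer(nn.Module):
    """Condition a subset representation on a permutation-invariant
    statistic of its superset (DeepSets-style pooling)."""
    def __init__(self, d_in: int, d_hid: int = 64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU(), nn.Linear(d_hid, d_hid))
        self.rho = nn.Sequential(nn.Linear(2 * d_hid, d_hid), nn.ReLU(), nn.Linear(d_hid, 1))

    def forward(self, superset, subset_mask):
        # superset: (batch, n_items, d_in); subset_mask: (batch, n_items) in {0, 1}
        h = self.phi(superset)                                  # per-item embeddings
        stat = h.mean(dim=1)                                    # invariant superset statistic
        m = subset_mask.float().unsqueeze(-1)
        sub = (h * m).sum(dim=1) / m.sum(dim=1).clamp(min=1.0)  # invariant subset summary
        return self.rho(torch.cat([sub, stat], dim=-1))         # output sees both
```

Because both summaries use symmetric pooling, the output is unchanged under any reordering of the subset or its superset, which is exactly the invariance described above.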
arXiv Detail & Related papers (2024-02-05T16:09:35Z)
- SwitchTab: Switched Autoencoders Are Effective Tabular Learners [16.316153704284936]
We introduce SwitchTab, a novel self-supervised representation method for tabular data.
SwitchTab captures latent dependencies by decoupling mutual and salient features among data pairs.
Results show superior performance in end-to-end prediction tasks with fine-tuning.
We highlight the capability of SwitchTab to create explainable representations through visualization of decoupled mutual and salient features in the latent space.
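The switching idea can be sketched in a few lines. The following is a minimal, hypothetical PyTorch reading, not SwitchTab's actual architecture: each encoding is split into a "mutual" part (shared within a data pair) and a "salient" part (sample-specific), and each sample is reconstructed from its own salient part plus its partner's mutual part.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchedAutoencoderSketch(nn.Module):
    """Reconstruct each sample from its own salient features combined with
    its partner's mutual features (the 'switch')."""
    def __init__(self, d_in: int, d_mutual: int = 32, d_salient: int = 32):
        super().__init__()
        self.d_parts = [d_mutual, d_salient]
        self.encoder = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(),
                                     nn.Linear(128, d_mutual + d_salient))
        self.decoder = nn.Sequential(nn.Linear(d_mutual + d_salient, 128), nn.ReLU(),
                                     nn.Linear(128, d_in))

    def forward(self, x1, x2):
        m1, s1 = self.encoder(x1).split(self.d_parts, dim=-1)
        m2, s2 = self.encoder(x2).split(self.d_parts, dim=-1)
        # Switching forces mutual parts to be interchangeable across the pair,
        # pushing sample-specific information into the salient parts.
        rec1 = self.decoder(torch.cat([m2, s1], dim=-1))
        rec2 = self.decoder(torch.cat([m1, s2], dim=-1))
        return F.mse_loss(rec1, x1) + F.mse_loss(rec2, x2)
```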
arXiv Detail & Related papers (2024-01-04T01:05:45Z)
- A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning [131.2910403490434]
Data scientists typically collect as many features as possible into their datasets, and even engineer new features from existing ones.
Existing benchmarks for tabular feature selection consider classical downstream models, toy synthetic datasets, or do not evaluate feature selectors on the basis of downstream performance.
We construct a challenging feature selection benchmark evaluated on downstream neural networks including transformers.
We also propose an input-gradient-based analogue of Lasso for neural networks that outperforms classical feature selection methods on challenging problems.
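One plausible reading of an "input-gradient-based analogue of Lasso" is sketched below, hedged as an assumption rather than the paper's exact formulation: a group-Lasso-style penalty on the per-feature norms of the loss gradient with respect to the inputs, driving the gradients of unneeded features toward zero.

```python
import torch
import torch.nn as nn

def input_gradient_lasso_penalty(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                                 loss_fn=nn.functional.cross_entropy) -> torch.Tensor:
    """For each input feature, take the L2 norm over the batch of the loss
    gradient w.r.t. that feature, then sum the norms (a group-Lasso analogue).
    Swap loss_fn for mse_loss on regression tasks."""
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    # create_graph=True so the penalty itself is differentiable during training.
    (grad,) = torch.autograd.grad(loss, x, create_graph=True)
    return grad.norm(dim=0).sum()   # sum over features of per-feature norms

# Hypothetical training step:
# total = loss_fn(model(x), y) + lam * input_gradient_lasso_penalty(model, x, y)
```

Features whose gradient norms shrink to zero under the penalty become candidates for removal, mirroring how Lasso zeroes out coefficients.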
arXiv Detail & Related papers (2023-11-10T05:26:10Z)
- ReConTab: Regularized Contrastive Representation Learning for Tabular Data [8.178223284255791]
We introduce ReConTab, a deep automatic representation learning framework with regularized contrastive learning.
Agnostic to any type of modeling task, ReConTab constructs an asymmetric autoencoder based on the same raw features from model inputs.
Experiments conducted on extensive real-world datasets substantiate the framework's capacity to yield substantial and robust performance improvements.
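A hedged sketch of what a regularized contrastive objective for tabular autoencoding might look like follows; the NT-Xent form, the corruption scheme, and the weighting are assumptions rather than ReConTab's exact design.

```python
import torch
import torch.nn.functional as F

def ntxent(z1, z2, tau: float = 0.5):
    """Standard NT-Xent loss: rows z1[i] and z2[i] are positive pairs."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2B, d) unit vectors
    sim = z @ z.t() / tau                         # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))             # exclude self-similarity
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

# Hypothetical combined objective: reconstruct the clean features from one
# corrupted view while pulling together the embeddings of two corrupted views.
# z1, z2 = encoder(corrupt(x)), encoder(corrupt(x))
# loss = F.mse_loss(decoder(z1), x) + lam * ntxent(z1, z2)
```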
arXiv Detail & Related papers (2023-10-28T00:05:28Z)
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
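The idea can be made concrete for 2D rotations. In the minimal, hypothetical sketch below, a tiny learned function produces a rotation-equivariant reference direction for each point cloud; aligning that direction with the x-axis canonicalizes the input, so the downstream predictor is rotation-invariant by construction. The architecture is illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class CanonicalizedClassifier(nn.Module):
    """Canonicalize 2D point clouds with a learned, equivariant direction."""
    def __init__(self, n_points: int, d_out: int = 10):
        super().__init__()
        # Radially weighted sum of points is rotation-equivariant, since
        # the weights depend only on rotation-invariant norms.
        self.radial = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
        self.predictor = nn.Sequential(nn.Linear(2 * n_points, 64), nn.ReLU(),
                                       nn.Linear(64, d_out))  # fixed point order assumed

    def forward(self, pts):                                # pts: (batch, n_points, 2)
        w = self.radial(pts.norm(dim=-1, keepdim=True))    # (batch, n, 1)
        v = (w * pts).sum(dim=1)                           # equivariant direction
        theta = torch.atan2(v[:, 1], v[:, 0])              # its angle
        c, s = torch.cos(-theta), torch.sin(-theta)
        rot = torch.stack([torch.stack([c, -s], -1),
                           torch.stack([s, c], -1)], -2)   # (batch, 2, 2), R(-theta)
        canon = pts @ rot.transpose(1, 2)                  # rotate into canonical pose
        return self.predictor(canon.flatten(1))
```

Rotating the input rotates `v` identically, so the canonicalized cloud, and hence the prediction, is unchanged.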
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Exploring Category-correlated Feature for Few-shot Image Classification [27.13708881431794]
We present a simple yet effective feature rectification method by exploring the category correlation between novel and base classes as the prior knowledge.
The proposed approach consistently obtains considerable performance gains on three widely used benchmarks.
arXiv Detail & Related papers (2021-12-14T08:25:24Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic and stochastic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
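The random class elimination admits a short sketch. Below is a hypothetical per-iteration segmentation loss in which pixels of randomly chosen classes are excluded via the ignore index, so features learned for one class cannot lean on frequently co-occurring classes; the drop rate and mechanism are assumptions.

```python
import torch
import torch.nn.functional as F

def class_dropped_ce(logits, labels, drop_prob: float = 0.3, ignore_index: int = 255):
    """Randomly hide whole classes from the loss for one training iteration.
    logits: (batch, n_classes, H, W); labels: (batch, H, W) int64."""
    n_classes = logits.size(1)
    drop = torch.rand(n_classes, device=labels.device) < drop_prob  # classes to hide
    masked = labels.clone()
    for c in torch.nonzero(drop).flatten().tolist():
        masked[labels == c] = ignore_index      # excluded from this iteration's loss
    return F.cross_entropy(logits, masked, ignore_index=ignore_index)
```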
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Automated Concatenation of Embeddings for Structured Prediction [75.44925576268052]
We propose Automated Concatenation of Embeddings (ACE) to automate the process of finding better concatenations of embeddings for structured prediction tasks.
We follow strategies in reinforcement learning to optimize the parameters of the controller and compute the reward based on the accuracy of a task model.
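The controller-plus-reward loop can be sketched compactly. Below is a hypothetical REINFORCE controller with one independent Bernoulli gate per candidate embedding and a moving-average baseline; ACE's actual controller and reward shaping may differ.

```python
import torch

class ConcatController:
    """Sample which embeddings to concatenate; reward with task accuracy."""
    def __init__(self, n_embeddings: int, lr: float = 0.1):
        self.logits = torch.zeros(n_embeddings, requires_grad=True)
        self.opt = torch.optim.Adam([self.logits], lr=lr)
        self.baseline = 0.0                    # moving-average reward baseline

    def sample(self) -> torch.Tensor:
        return torch.bernoulli(torch.sigmoid(self.logits)).detach()  # 0/1 mask

    def update(self, mask: torch.Tensor, reward: float) -> None:
        self.baseline = 0.9 * self.baseline + 0.1 * reward
        log_prob = torch.distributions.Bernoulli(logits=self.logits).log_prob(mask).sum()
        loss = -(reward - self.baseline) * log_prob  # REINFORCE with baseline
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

# Hypothetical loop:
# mask = controller.sample(); acc = train_and_eval(mask); controller.update(mask, acc)
```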
arXiv Detail & Related papers (2020-10-10T14:03:20Z)
- Counterfactual Explanation Based on Gradual Construction for Deep Networks [17.79934085808291]
The patterns that deep networks have learned from a training dataset can be grasped by observing the feature variation among various classes.
Current approaches modify features to increase the classification probability of the target class, irrespective of the internal characteristics of deep networks.
We propose a counterfactual explanation method that exploits the statistics learned from a training dataset.
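One plausible reading of gradual construction is sketched below: interpolate the input toward per-class feature statistics estimated from the training set, stopping at the first step where the prediction flips. The schedule, stopping rule, and the use of class means are illustrative assumptions, not the paper's exact method.

```python
import torch

def gradual_counterfactual(model, x, target_class: int, class_means: torch.Tensor,
                           steps: int = 50):
    """Nudge input x toward the target class's training-set mean features.
    class_means: (n_classes, d) per-class means estimated from training data."""
    x = x.clone()
    direction = class_means[target_class] - x
    for t in range(1, steps + 1):
        x_t = x + (t / steps) * direction                 # gradual interpolation
        if model(x_t.unsqueeze(0)).argmax(dim=1).item() == target_class:
            return x_t                                    # earliest flip found
    return x_t                                            # fall back to final step
```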
arXiv Detail & Related papers (2020-08-05T01:18:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.