STaSy: Score-based Tabular data Synthesis
- URL: http://arxiv.org/abs/2210.04018v4
- Date: Mon, 29 May 2023 06:37:50 GMT
- Title: STaSy: Score-based Tabular data Synthesis
- Authors: Jayoung Kim, Chaejeong Lee, Noseong Park
- Abstract summary: We present a new model named Score-based Tabular data Synthesis (STaSy).
Our training strategy includes a self-paced learning technique and a fine-tuning strategy.
In our experiments, our method outperforms existing methods in terms of task-dependent evaluations and diversity.
- Score: 10.292096717484698
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tabular data synthesis is a long-standing research topic in machine learning.
Many different methods have been proposed over the past decades, ranging from
statistical methods to deep generative methods. However, it has not always been
successful due to the complicated nature of real-world tabular data. In this
paper, we present a new model named Score-based Tabular data Synthesis (STaSy)
and its training strategy based on the paradigm of score-based generative
modeling. Although score-based generative models have resolved many issues in
generative modeling, there is still room for improvement in tabular data
synthesis. Our proposed training strategy includes a self-paced
learning technique and a fine-tuning strategy, which further increases the
sampling quality and diversity by stabilizing the denoising score matching
training. Furthermore, we also conduct rigorous experimental studies in terms
of the generative task trilemma: sampling quality, diversity, and time. In our
experiments with 15 benchmark tabular datasets and 7 baselines, our method
outperforms existing methods in terms of task-dependent evaluations and
diversity. Code is available at https://github.com/JayoungKim408/STaSy.
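The abstract's core training objective, denoising score matching, can be illustrated with a minimal numpy sketch. Everything here is a toy assumption: the data, the noise level `sigma`, and the linear `score_model` (STaSy's actual model is a neural network trained under an SDE-based noise schedule).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy numeric "table": 200 rows, 4 columns (hypothetical data).
X = rng.normal(size=(200, 4))

sigma = 0.5  # single fixed noise level, for illustration only

def score_model(x_noisy, W):
    # Hypothetical linear score model s_theta(x) = x @ W;
    # a real score network would be a deep neural network.
    return x_noisy @ W

def dsm_loss(X, W, sigma, rng):
    """Denoising score matching: perturb the data with Gaussian noise
    and train the model to match the score of the perturbation kernel,
    which is -(x_noisy - x) / sigma**2."""
    noise = rng.normal(size=X.shape) * sigma
    x_noisy = X + noise
    target = -noise / sigma**2
    pred = score_model(x_noisy, W)
    return np.mean((pred - target) ** 2)

W = np.zeros((4, 4))          # untrained parameters
loss = dsm_loss(X, W, sigma, rng)
print(float(loss))
```

Minimizing this loss over many noise levels is what the paper's self-paced learning and fine-tuning strategies aim to stabilize.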
Related papers
- Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method improves consistently over existing methods.
Our method is data efficient and outperforms competitive baselines.
arXiv Detail & Related papers (2023-11-27T06:19:50Z)
- Reimagining Synthetic Tabular Data Generation through Data-Centric AI: A Comprehensive Benchmark [56.8042116967334]
Synthetic data serves as an alternative for training machine learning models.
Ensuring that synthetic data mirrors the complex nuances of real-world data is a challenging task.
This paper explores the potential of integrating data-centric AI techniques to guide the synthetic data generation process.
arXiv Detail & Related papers (2023-10-25T20:32:02Z)
- A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF), which consistently demonstrates robust performance with simple and cheap strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under the simple synthesis strategies, it outperforms existing methods by a large margin and achieves state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
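The idea of mixing minority and majority samples can be sketched in a few lines of numpy. This is a generic convex-combination (mixup-style) illustration under assumed toy data; the paper's actual mixing scheme and its iterative procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical imbalanced toy data: 5 minority rows vs. 50 majority rows.
X_min = rng.normal(loc=2.0, size=(5, 3))
X_maj = rng.normal(loc=0.0, size=(50, 3))

def mix_samples(X_min, X_maj, n_new, rng, lam_low=0.5):
    """Each synthetic sample is a convex combination of one minority row
    and one majority row, weighted toward the minority class
    (lam >= lam_low)."""
    i = rng.integers(0, len(X_min), size=n_new)
    j = rng.integers(0, len(X_maj), size=n_new)
    lam = rng.uniform(lam_low, 1.0, size=(n_new, 1))
    return lam * X_min[i] + (1.0 - lam) * X_maj[j]

X_new = mix_samples(X_min, X_maj, n_new=20, rng=rng)
print(X_new.shape)
```

Generated rows lie on segments between minority and majority points, so they populate the boundary region where classifiers on imbalanced data typically struggle.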
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
- Exploring Data Redundancy in Real-world Image Classification through Data Selection [20.389636181891515]
Deep learning models often require large amounts of data for training, leading to increased costs.
We present two data valuation metrics based on Synaptic Intelligence and gradient norms, respectively, to study redundancy in real-world image data.
Online and offline data selection algorithms are then proposed via clustering and grouping based on the examined data values.
arXiv Detail & Related papers (2023-06-25T03:31:05Z)
- Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z)
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
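The over-sampling idea that AutoSMOTE automates descends from classic SMOTE interpolation, which can be sketched as follows. The data, `k`, and `n_new` here are illustrative assumptions; AutoSMOTE itself learns these decisions with hierarchical reinforcement learning rather than fixing them.

```python
import numpy as np

rng = np.random.default_rng(2)

X_min = rng.normal(size=(10, 2))  # hypothetical minority-class rows

def smote_like(X_min, n_new, k, rng):
    """SMOTE-style interpolation: pick a minority sample, pick one of
    its k nearest minority neighbours, and generate a synthetic point
    on the segment between them."""
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]    # k nearest neighbours per row
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        lam = rng.uniform()
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_syn = smote_like(X_min, n_new=15, k=3, rng=rng)
print(X_syn.shape)
```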
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
- Efficient Classification with Counterfactual Reasoning and Active Learning [4.708737212700907]
The proposed method, CCRAL, combines causal reasoning, to learn counterfactual samples for the original training samples, with active learning, to select useful counterfactual samples based on a region of uncertainty.
Experiments show that CCRAL achieves significantly better performance than the baselines in terms of accuracy and AUC.
arXiv Detail & Related papers (2022-07-25T12:03:40Z)
- Contemporary Symbolic Regression Methods and their Relative Performance [5.285811942108162]
We assess 14 symbolic regression methods and 7 machine learning methods on a set of 252 diverse regression problems.
For the real-world datasets, we benchmark the ability of each method to learn models with low error and low complexity.
For the synthetic problems, we assess each method's ability to find exact solutions in the presence of varying levels of noise.
arXiv Detail & Related papers (2021-07-29T22:12:59Z)
- OCT-GAN: Neural ODE-based Conditional Tabular GANs [8.062118111791495]
We introduce a generator and discriminator based on neural ordinary differential equations (NODEs).
We conduct experiments with 13 datasets, including insurance fraud detection and online news article prediction.
arXiv Detail & Related papers (2021-05-31T13:58:55Z)
- Foundations of Bayesian Learning from Synthetic Data [1.6249267147413522]
We use a Bayesian paradigm to characterise the updating of model parameters when learning on synthetic data.
Recent results from general Bayesian updating support a novel and robust approach to learning from synthetic data, founded on decision theory.
arXiv Detail & Related papers (2020-11-16T21:49:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.