Concentration and excess risk bounds for imbalanced classification with synthetic oversampling
- URL: http://arxiv.org/abs/2510.20472v1
- Date: Thu, 23 Oct 2025 12:12:51 GMT
- Title: Concentration and excess risk bounds for imbalanced classification with synthetic oversampling
- Authors: Touqeer Ahmad, Mohammadreza M. Kalan, François Portier, Gilles Stupfler
- Abstract summary: We develop a theoretical framework to analyze the behavior of SMOTE and related methods when classifiers are trained on synthetic data. The results lead to practical guidelines for better parameter tuning of both SMOTE and the downstream learning algorithm.
- Score: 5.974778743092435
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Synthetic oversampling of minority examples using SMOTE and its variants is a leading strategy for addressing imbalanced classification problems. Despite the success of this approach in practice, its theoretical foundations remain underexplored. We develop a theoretical framework to analyze the behavior of SMOTE and related methods when classifiers are trained on synthetic data. We first derive a uniform concentration bound on the discrepancy between the empirical risk over synthetic minority samples and the population risk on the true minority distribution. We then provide a nonparametric excess risk guarantee for kernel-based classifiers trained using such synthetic data. These results lead to practical guidelines for better parameter tuning of both SMOTE and the downstream learning algorithm. Numerical experiments are provided to illustrate and support the theoretical findings.
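The following sketch is an illustrative rendering of the setting analyzed in the paper, not the authors' code: it draws an imbalanced two-Gaussian toy sample (an assumption), generates SMOTE-style synthetic minority points by interpolating each seed with one of its k nearest minority neighbors, trains an RBF-kernel classifier on the augmented data, and compares the empirical risk on the synthetic minority sample with the risk on fresh true minority draws, i.e. the discrepancy that the concentration bound controls. All parameter values (k, the kernel bandwidth gamma, sample sizes) are arbitrary choices.

```python
# Minimal sketch (not the paper's code): SMOTE-style oversampling, an RBF-kernel
# classifier, and the synthetic-vs-true minority risk gap that the bounds control.
# The data distribution and all parameter values (k, gamma, sample sizes) are
# illustrative assumptions.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Imbalanced two-class Gaussian data: many majority (y=0), few minority (y=1).
n_maj, n_min = 1000, 50
X_maj = rng.normal(loc=0.0, scale=1.0, size=(n_maj, 2))
X_min = rng.normal(loc=2.0, scale=1.0, size=(n_min, 2))

def smote(X, n_synthetic, k=5):
    """SMOTE-style points: interpolate each seed with one of its k nearest neighbors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1: each point is its own neighbor
    _, idx = nn.kneighbors(X)
    seeds = rng.integers(0, len(X), size=n_synthetic)                 # minority seed points
    neigh = idx[seeds, rng.integers(1, k + 1, size=n_synthetic)]      # skip column 0 (the seed itself)
    lam = rng.uniform(size=(n_synthetic, 1))                          # interpolation weights in (0, 1)
    return X[seeds] + lam * (X[neigh] - X[seeds])

X_syn = smote(X_min, n_synthetic=n_maj - n_min)        # oversample until the classes are balanced

# Train an RBF-kernel classifier on majority + real minority + synthetic minority samples.
X_train = np.vstack([X_maj, X_min, X_syn])
y_train = np.concatenate([np.zeros(n_maj), np.ones(n_min + len(X_syn))])
clf = SVC(kernel="rbf", gamma=1.0).fit(X_train, y_train)

# Empirical risk (0-1 loss) on the synthetic minority sample vs. fresh true minority draws:
# the gap between these two quantities is what the uniform concentration bound controls.
risk_synthetic = np.mean(clf.predict(X_syn) != 1)
X_min_fresh = rng.normal(loc=2.0, scale=1.0, size=(5000, 2))
risk_true = np.mean(clf.predict(X_min_fresh) != 1)
print(f"risk on synthetic minority: {risk_synthetic:.3f}, on true minority: {risk_true:.3f}")
```

The number of nearest neighbors k used by SMOTE and the bandwidth of the downstream kernel method are exactly the kind of tuning parameters that the paper's guidelines concern.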
Related papers
- Theoretical Convergence of SMOTE-Generated Samples [47.26889442476884]
We provide a rigorous theoretical analysis of SMOTE's convergence properties. We prove that the synthetic random variable Z converges in probability to the underlying random variable X. Lower values of the nearest neighbor rank lead to faster convergence.
arXiv Detail & Related papers (2026-01-05T09:19:45Z) - Large Language Models for Imbalanced Classification: Diversity makes the difference [40.03315488727788]
We propose a novel large language model (LLM)-based oversampling method designed to enhance diversity. First, we introduce a sampling strategy that conditions synthetic sample generation on both minority labels and features. Second, we develop a new permutation strategy for fine-tuning pre-trained LLMs.
arXiv Detail & Related papers (2025-10-10T18:45:29Z) - Learning Majority-to-Minority Transformations with MMD and Triplet Loss for Imbalanced Classification [0.5390869741300152]
Class imbalance in supervised classification often degrades model performance by biasing predictions toward the majority class. We introduce an oversampling framework that learns a parametric transformation to map majority samples into the minority distribution. Our approach minimizes the maximum mean discrepancy (MMD) between transformed and true minority samples for global alignment (a minimal MMD sketch appears after this list).
arXiv Detail & Related papers (2025-09-15T01:47:29Z) - Synthetic Oversampling: Theory and A Practical Approach Using LLMs to Address Data Imbalance [16.047084318753377]
Imbalanced classification and spurious correlation are common challenges in data science and machine learning. Recent advances have proposed leveraging the flexibility and generative capabilities of large language models to generate synthetic samples. This article develops novel theoretical foundations to systematically study the roles of synthetic samples in addressing imbalanced classification and spurious correlation.
arXiv Detail & Related papers (2024-06-05T21:24:26Z) - How does promoting the minority fraction affect generalization? A theoretical study of the one-hidden-layer neural network on group imbalance [64.1656365676171]
Group imbalance has been a known problem in empirical risk minimization.
This paper quantifies the impact of individual groups on the sample complexity, the convergence rate, and the average and group-level testing performance.
arXiv Detail & Related papers (2024-03-12T04:38:05Z) - Optimal Multi-Distribution Learning [88.3008613028333]
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions. We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with a sample complexity on the order of $(d+k)/\varepsilon^2$.
arXiv Detail & Related papers (2023-12-08T16:06:29Z) - A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning [129.63326990812234]
We propose a technique named data-dependent contraction to capture how modified losses handle different classes.
On top of this technique, a fine-grained generalization bound is established for imbalanced learning, which helps explain how re-weighting and logit-adjustment work (a minimal post-hoc logit-adjustment sketch appears after this list).
arXiv Detail & Related papers (2023-10-07T09:15:08Z) - Imbalanced Classification via a Tabular Translation GAN [4.864819846886142]
We present a model based on Generative Adversarial Networks which uses additional regularization losses to map majority samples to corresponding synthetic minority samples.
We show that the proposed method improves average precision when compared to alternative re-weighting and oversampling techniques.
arXiv Detail & Related papers (2022-04-19T06:02:53Z) - A Novel Adaptive Minority Oversampling Technique for Improved Classification in Data Imbalanced Scenarios [23.257891827728827]
An imbalance in the proportion of training samples belonging to different classes often degrades the performance of conventional classifiers.
We propose a novel three step technique to address imbalanced data.
arXiv Detail & Related papers (2021-03-24T09:58:02Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z) - Compressing Large Sample Data for Discriminant Analysis [78.12073412066698]
We consider the computational issues due to large sample size within the discriminant analysis framework.
We propose a new compression approach for reducing the number of training samples for linear and quadratic discriminant analysis.
arXiv Detail & Related papers (2020-05-08T05:09:08Z)
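The entry above on learning majority-to-minority transformations aligns distributions by minimizing the maximum mean discrepancy between transformed majority samples and true minority samples. As a minimal, self-contained illustration (not that paper's implementation; the Gaussian-kernel bandwidth and the toy data are assumptions), the unbiased estimator of the squared MMD can be computed as follows.

```python
# Minimal sketch: unbiased estimator of squared MMD under a Gaussian (RBF) kernel.
# Not the referenced paper's code; the bandwidth sigma and the toy data are assumptions.
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased MMD^2 between samples X (n, d) and Y (m, d) with an RBF kernel."""
    def rbf(A, B):
        sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
        return np.exp(-sq / (2.0 * sigma**2))

    Kxx, Kyy, Kxy = rbf(X, X), rbf(Y, Y), rbf(X, Y)
    n, m = len(X), len(Y)
    # Drop diagonal terms so the within-sample averages are unbiased.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
transformed = rng.normal(2.0, 1.0, size=(200, 2))   # stand-in for transformed majority samples
minority = rng.normal(2.0, 1.0, size=(150, 2))      # stand-in for true minority samples
print(f"MMD^2 estimate: {mmd2_unbiased(transformed, minority):.4f}")
```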
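The entry on re-weighting and logit-adjustment studies losses that shift a classifier's logits by class-frequency terms. The sketch below shows one standard post-hoc form of logit adjustment, subtracting a scaled log class prior from each logit before taking the argmax; it is a generic illustration rather than that paper's method, and the logits, priors, and temperature tau are made-up values.

```python
# Minimal sketch of post-hoc logit adjustment for imbalanced classification.
# Generic illustration (not the referenced paper's method); all numbers are made up.
import numpy as np

def logit_adjusted_predict(logits, class_priors, tau=1.0):
    """Predict argmax_y [ f_y(x) - tau * log(prior_y) ], boosting rare classes."""
    return np.argmax(logits - tau * np.log(class_priors), axis=1)

# Toy example: 3 samples, 2 classes, with a 95% / 5% class prior.
logits = np.array([[2.0, 1.5],
                   [0.1, 0.0],
                   [1.0, 0.9]])
priors = np.array([0.95, 0.05])

print("plain argmax:   ", np.argmax(logits, axis=1))               # biased toward the majority class
print("logit-adjusted: ", logit_adjusted_predict(logits, priors))  # minority class recovered
```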