Uncertainty-Aware Generative Oversampling Using an Entropy-Guided Conditional Variational Autoencoder
- URL: http://arxiv.org/abs/2509.25334v3
- Date: Sat, 04 Oct 2025 17:53:25 GMT
- Title: Uncertainty-Aware Generative Oversampling Using an Entropy-Guided Conditional Variational Autoencoder
- Authors: Amirhossein Zare, Amirhessam Zare, Parmida Sadat Pezeshki, Herlock Rahimi, Ali Ebrahimi, Ignacio Vázquez-García, Leo Anthony Celi
- Abstract summary: We propose a generative oversampling framework that incorporates local uncertainty into both representation learning and data generation. We show that LEO-CVAE consistently improves performance, outperforming both traditional oversampling and generative baselines. These results highlight the value of uncertainty-aware generative oversampling for imbalanced learning in domains governed by complex nonlinear structures.
- Score: 2.1629319643623655
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Class imbalance remains a major challenge in machine learning, especially for high-dimensional biomedical data where nonlinear manifold structures dominate. Traditional oversampling methods such as SMOTE rely on local linear interpolation, often producing implausible synthetic samples. Deep generative models like Conditional Variational Autoencoders (CVAEs) better capture nonlinear distributions, but standard variants treat all minority samples equally, neglecting the importance of uncertain, boundary-region examples emphasized by heuristic methods like Borderline-SMOTE and ADASYN. We propose Local Entropy-Guided Oversampling with a CVAE (LEO-CVAE), a generative oversampling framework that explicitly incorporates local uncertainty into both representation learning and data generation. To quantify uncertainty, we compute Shannon entropy over the class distribution in a sample's neighborhood: high entropy indicates greater class overlap, serving as a proxy for uncertainty. LEO-CVAE leverages this signal through two mechanisms: (i) a Local Entropy-Weighted Loss (LEWL) that emphasizes robust learning in uncertain regions, and (ii) an entropy-guided sampling strategy that concentrates generation in these informative, class-overlapping areas. Applied to clinical genomics datasets (ADNI and TCGA lung cancer), LEO-CVAE consistently improves classifier performance, outperforming both traditional oversampling and generative baselines. These results highlight the value of uncertainty-aware generative oversampling for imbalanced learning in domains governed by complex nonlinear structures, such as omics data.
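The entropy signal described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: for each sample, class-overlap uncertainty is estimated as the Shannon entropy of the labels among its k nearest neighbors; the function name, distance metric, and choice of k are assumptions.

```python
import numpy as np

def local_entropy(X, y, k=5):
    """Shannon entropy (bits) of the class distribution in each
    sample's k-nearest-neighbor neighborhood (Euclidean distance)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    classes = np.unique(y)
    # Pairwise squared distances; each point excludes itself.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    ent = np.empty(len(X))
    for i in range(len(X)):
        nbrs = np.argsort(d2[i])[:k]
        p = np.array([(y[nbrs] == c).mean() for c in classes])
        p = p[p > 0]  # drop absent classes before taking logs
        ent[i] = -(p * np.log2(p)).sum()
    return ent

# Toy 1-D data: two class clusters plus one point near the boundary.
X = np.array([[0.0], [0.1], [0.2], [0.3],
              [1.0], [1.1], [1.2], [1.3],
              [0.6]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0])
e = local_entropy(X, y, k=3)
```

Points deep inside one class get entropy near zero, while the boundary point's mixed neighborhood gives high entropy; in LEO-CVAE this score would weight the loss and steer where synthetic samples are generated.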
Related papers
- Learning Majority-to-Minority Transformations with MMD and Triplet Loss for Imbalanced Classification [0.5390869741300152]
Class imbalance in supervised classification often degrades model performance by biasing predictions toward the majority class. We introduce an oversampling framework that learns a parametric transformation to map majority samples into the minority distribution. Our approach minimizes the maximum mean discrepancy (MMD) between transformed and true minority samples for global alignment.
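The global-alignment term this summary mentions, squared MMD under an RBF kernel, can be sketched as follows. This is a generic MMD estimate, not the paper's implementation; the bandwidth `sigma` and the biased V-statistic form are illustrative choices.

```python
import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    """Biased (V-statistic) estimate of squared MMD with an RBF kernel:
    E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')]."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() - 2.0 * k(X, Y).mean() + k(Y, Y).mean()

rng = np.random.default_rng(0)
# Matched distributions -> MMD^2 near zero; shifted -> clearly positive.
same = mmd2_rbf(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff = mmd2_rbf(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
```

Minimizing such a term drives the transformed majority samples toward the true minority distribution as a whole, rather than matching individual points.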
arXiv Detail & Related papers (2025-09-15T01:47:29Z) - Generate Aligned Anomaly: Region-Guided Few-Shot Anomaly Image-Mask Pair Synthesis for Industrial Inspection [53.137651284042434]
Anomaly inspection plays a vital role in industrial manufacturing, but the scarcity of anomaly samples limits the effectiveness of existing methods. We propose Generate Aligned Anomaly (GAA), a region-guided, few-shot anomaly image-mask pair generation framework. GAA generates realistic, diverse, and semantically aligned anomalies using only a small number of samples.
arXiv Detail & Related papers (2025-07-13T12:56:59Z) - Robust Molecular Property Prediction via Densifying Scarce Labeled Data [53.24886143129006]
In drug discovery, compounds most critical for advancing research often lie beyond the training set. We propose a novel bilevel optimization approach that leverages unlabeled data to interpolate between in-distribution (ID) and out-of-distribution (OOD) data.
arXiv Detail & Related papers (2025-06-13T15:27:40Z) - SMOGAN: Synthetic Minority Oversampling with GAN Refinement for Imbalanced Regression [0.0]
Imbalanced regression refers to prediction tasks where the target variable is skewed. This skewness hinders machine learning models, especially neural networks, which concentrate on dense regions. We propose SMOGAN, a two-step oversampling framework for imbalanced regression.
arXiv Detail & Related papers (2025-04-29T20:15:25Z) - Local distribution-based adaptive oversampling for imbalanced regression [0.0]
Imbalanced regression occurs when continuous target variables have skewed distributions, creating sparse regions. We propose LDAO (Local Distribution-based Adaptive Oversampling), a novel data-level approach that avoids categorizing individual samples as rare or frequent. LDAO achieves a balanced representation across the entire target range while preserving the inherent statistical structure within each local distribution.
arXiv Detail & Related papers (2025-04-19T14:36:41Z) - INGB: Informed Nonlinear Granular Ball Oversampling Framework for Noisy Imbalanced Classification [23.9207014576848]
In classification problems, the datasets are usually imbalanced, noisy or complex.
An informed nonlinear oversampling framework with the granular ball (INGB) as a new direction of oversampling is proposed in this paper.
arXiv Detail & Related papers (2023-07-03T01:55:20Z) - Variational Classification [51.2541371924591]
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
arXiv Detail & Related papers (2023-05-17T17:47:19Z) - Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z) - GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
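The quantity such an entropy regularizer pushes up can be illustrated with a simple nonparametric proxy. This is not the paper's variational bound; it is a 1-nearest-neighbor differential-entropy proxy (Kozachenko-Leonenko style, up to constants) used here only to show that a mode-collapsed sample scores lower than a diverse one.

```python
import numpy as np

def knn_entropy_proxy(X):
    """Mean log distance to the nearest neighbor: a monotone proxy
    for differential entropy (constants and dimension terms omitted)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude self-distances
    nn = np.sqrt(d2.min(axis=1))
    return np.log(nn + 1e-12).mean()

rng = np.random.default_rng(1)
diverse = rng.normal(0, 1, (300, 2))        # spread across the space
collapsed = rng.normal(0, 0.05, (300, 2))   # "mode collapse": one tight cluster
```

A generator whose samples pile onto one mode yields small nearest-neighbor distances and hence a low entropy estimate; maximizing a lower bound on sample entropy penalizes exactly this behavior.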
arXiv Detail & Related papers (2020-09-24T19:34:37Z) - Repulsive Mixture Models of Exponential Family PCA for Clustering [127.90219303669006]
The mixture extension of exponential family principal component analysis (EPCA) was designed to encode much more structural information about the data distribution than the traditional EPCA.
The traditional mixture of local EPCAs has the problem of model redundancy, i.e., overlaps among mixing components, which may cause ambiguity for data clustering.
In this paper, a repulsiveness-encouraging prior is introduced among mixing components and a diversified EPCA mixture (DEPCAM) model is developed in the Bayesian framework.
arXiv Detail & Related papers (2020-04-07T04:07:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.