MissDDIM: Deterministic and Efficient Conditional Diffusion for Tabular Data Imputation
- URL: http://arxiv.org/abs/2508.03083v1
- Date: Tue, 05 Aug 2025 04:55:26 GMT
- Title: MissDDIM: Deterministic and Efficient Conditional Diffusion for Tabular Data Imputation
- Authors: Youran Zhou, Mohamed Reda Bouadjenek, Sunil Aryal
- Abstract summary: We present MissDDIM, a conditional diffusion framework that adapts Denoising Diffusion Implicit Models (DDIM) for tabular imputation. While stochastic sampling enables diverse completions, it also introduces output variability that complicates downstream processing.
- Score: 2.124791625488617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion models have recently emerged as powerful tools for missing data imputation by modeling the joint distribution of observed and unobserved variables. However, existing methods, typically based on stochastic denoising diffusion probabilistic models (DDPMs), suffer from high inference latency and variable outputs, limiting their applicability in real-world tabular settings. To address these deficiencies, we present MissDDIM, a conditional diffusion framework that adapts Denoising Diffusion Implicit Models (DDIM) for tabular imputation. While stochastic sampling enables diverse completions, it also introduces output variability that complicates downstream processing.
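To make the contrast with stochastic DDPM sampling concrete, the sketch below shows what a deterministic (eta = 0) DDIM imputation loop for tabular data could look like. This is a minimal illustration under stated assumptions, not the paper's implementation: the noise predictor `eps_model`, the schedule `alpha_bar`, and the mask-and-clamp conditioning are all choices made for the example.

```python
import torch

@torch.no_grad()
def ddim_impute(eps_model, x_obs, mask, alpha_bar, steps):
    """Deterministically impute missing entries (mask == 0) of a tabular batch.

    eps_model : callable (x, t) -> predicted noise, same shape as x (assumed)
    x_obs     : (batch, features) tensor; missing entries may hold any value
    mask      : (batch, features) tensor; 1 = observed, 0 = missing
    alpha_bar : (T,) cumulative-product noise schedule, decreasing in t
    steps     : timesteps in decreasing order; a strided subsequence of
                range(T) gives the few-step sampling that DDIM allows
    """
    # Start the missing entries from pure Gaussian noise.
    x = mask * x_obs + (1 - mask) * torch.randn_like(x_obs)
    for i, t in enumerate(steps):
        a_t = alpha_bar[t]
        a_prev = alpha_bar[steps[i + 1]] if i + 1 < len(steps) else torch.tensor(1.0)
        eps = eps_model(x, t)
        # DDIM's estimate of the clean sample x0 from the current state.
        x0_hat = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        # Deterministic DDIM update (eta = 0): no fresh noise is injected.
        x = a_prev.sqrt() * x0_hat + (1 - a_prev).sqrt() * eps
        # Re-clamp observed features so the trajectory stays conditioned on them.
        x = mask * x_obs + (1 - mask) * x
    return x
```

With eta = 0 the trajectory is a deterministic function of the observed values and the initial seed, which is the repeatability property the abstract motivates; and because DDIM tolerates skipped timesteps, `steps` can be far shorter than the training schedule, addressing the latency concern as well.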
Related papers
- Non-stationary Diffusion For Probabilistic Time Series Forecasting [3.7687375904925484]
We develop a diffusion-based probabilistic forecasting framework, termed Non-stationary Diffusion (NsDiff). NsDiff combines a denoising diffusion-based conditional generative model with a pre-trained conditional mean and variance estimator. Experiments conducted on nine real-world and synthetic datasets demonstrate the superior performance of NsDiff compared to existing approaches.
arXiv Detail & Related papers (2025-05-07T09:29:39Z) - Simple and Critical Iterative Denoising: A Recasting of Discrete Diffusion in Graph Generation [0.0]
Dependencies between intermediate noisy states lead to error accumulation and propagation during the reverse denoising process. We propose a novel framework called Simple Iterative Denoising, which simplifies discrete diffusion and circumvents this issue. Our empirical evaluations demonstrate that the proposed method significantly outperforms existing discrete diffusion baselines in graph generation tasks.
arXiv Detail & Related papers (2025-03-27T15:08:58Z) - Interleaved Gibbs Diffusion: Generating Discrete-Continuous Data with Implicit Constraints [30.624303845550575]
Interleaved Gibbs Diffusion (IGD) is a novel generative modeling framework for discrete-continuous data. IGD generalizes the discrete-time Gibbs-sampling Markov chain to joint discrete-continuous generation. It achieves state-of-the-art results without relying on domain-specific inductive biases.
arXiv Detail & Related papers (2025-02-19T05:51:24Z) - Generalized Diffusion Model with Adjusted Offset Noise [1.7767466724342067]
We propose a generalized diffusion model that naturally incorporates additional noise within a rigorous probabilistic framework. We derive a loss function based on the evidence lower bound, establishing its theoretical equivalence to offset noise with certain adjustments. Experiments on synthetic datasets demonstrate that our model effectively addresses brightness-related challenges and outperforms conventional methods in high-dimensional scenarios.
arXiv Detail & Related papers (2024-12-04T08:57:03Z) - Self-Supervision Improves Diffusion Models for Tabular Data Imputation [20.871219616589986]
This paper introduces an advanced diffusion model, the Self-supervised imputation Diffusion Model (SimpDM).
To mitigate sensitivity to noise, we introduce a self-supervised alignment mechanism that aims to regularize the model, ensuring consistent and stable imputation predictions.
We also introduce a carefully devised state-dependent data augmentation strategy within SimpDM, enhancing the robustness of the diffusion model when dealing with limited data.
arXiv Detail & Related papers (2024-07-25T13:06:30Z) - Your Absorbing Discrete Diffusion Secretly Models the Conditional Distributions of Clean Data [55.54827581105283]
We show that the concrete score in absorbing diffusion can be expressed as conditional probabilities of clean data. We propose a dedicated diffusion model without time conditioning that characterizes these time-independent conditional probabilities. Our models achieve state-of-the-art performance among diffusion models on five zero-shot language modeling benchmarks.
arXiv Detail & Related papers (2024-06-06T04:22:11Z) - DiffPuter: Empowering Diffusion Models for Missing Data Imputation [56.48119008663155]
This paper introduces DiffPuter, a tailored diffusion model combined with the Expectation-Maximization (EM) algorithm for missing data imputation (a schematic of this alternation is sketched after this list). Our theoretical analysis shows that DiffPuter's training step corresponds to maximum likelihood estimation of the data density. Our experiments show that DiffPuter achieves an average improvement of 6.94% in MAE and 4.78% in RMSE compared to the most competitive existing method.
arXiv Detail & Related papers (2024-05-31T08:35:56Z) - Diffusion models for probabilistic programming [56.47577824219207]
Diffusion Model Variational Inference (DMVI) is a novel method for automated approximate inference in probabilistic programming languages (PPLs).
DMVI is easy to implement, allows hassle-free inference in PPLs without the drawbacks of, e.g., variational inference using normalizing flows, and does not make any constraints on the underlying neural network model.
arXiv Detail & Related papers (2023-11-01T12:17:05Z) - MissDiff: Training Diffusion Models on Tabular Data with Missing Values [29.894691645801597]
This work presents a unified and principled diffusion-based framework for learning from data with missing values.
We first observe that the widely adopted "impute-then-generate" pipeline may lead to a biased learning objective.
We prove the proposed method is consistent in learning the score of data distributions, and the proposed training objective serves as an upper bound for the negative likelihood in certain cases.
arXiv Detail & Related papers (2023-07-02T03:49:47Z) - Semi-Implicit Denoising Diffusion Models (SIDDMs) [50.30163684539586]
Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently high number of iterative steps.
We introduce a novel approach that tackles the problem by matching implicit and explicit factors.
We demonstrate that our proposed method attains generative performance comparable to diffusion-based models and vastly superior results relative to models that use a small number of sampling steps.
arXiv Detail & Related papers (2023-06-21T18:49:22Z) - DisDiff: Unsupervised Disentanglement of Diffusion Probabilistic Models [42.58375679841317]
We propose a new task: disentanglement of Diffusion Probabilistic Models (DPMs).
The task is to automatically discover the inherent factors behind the observations and disentangle the gradient fields of a DPM into sub-gradient fields.
We devise an unsupervised approach named DisDiff, achieving disentangled representation learning in the framework of DPMs.
arXiv Detail & Related papers (2023-01-31T15:58:32Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - Discrete Denoising Flows [87.44537620217673]
We introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs).
In contrast with other discrete flow-based models, our model can be locally trained without introducing gradient bias.
We show that DDFs outperform Discrete Flows on modeling a toy example, binary MNIST and Cityscapes segmentation maps, measured in log-likelihood.
arXiv Detail & Related papers (2021-07-24T14:47:22Z)
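As referenced in the DiffPuter entry above, the following is a hypothetical sketch of the EM-style alternation that entry describes: an M-step fits a generative model to the currently completed table, and an E-step re-imputes the missing cells from that model. The callables `fit_diffusion` and `sample_missing` are placeholders for a full diffusion trainer and conditional sampler; they are not DiffPuter's actual API.

```python
import torch

def em_impute(x_obs, mask, fit_diffusion, sample_missing, n_rounds=5):
    """Illustrative EM-style imputation loop (not DiffPuter's real code).

    x_obs          : (n, d) tensor; values at missing entries are ignored
    mask           : (n, d) tensor; 1 = observed, 0 = missing
    fit_diffusion  : callable(data) -> model              (M-step, density fit)
    sample_missing : callable(model, data, mask) -> data  (E-step, conditional draw)
    """
    # Initialize missing entries with per-feature means of the observed values.
    col_mean = (mask * x_obs).sum(0) / mask.sum(0).clamp(min=1)
    x = mask * x_obs + (1 - mask) * col_mean
    for _ in range(n_rounds):
        model = fit_diffusion(x)                                        # M-step
        x = mask * x_obs + (1 - mask) * sample_missing(model, x, mask)  # E-step
    return x
```

Each round keeps the observed cells fixed and refreshes only the missing ones, so successive rounds refine the density estimate and the imputations together.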