Disjoint Generative Models
- URL: http://arxiv.org/abs/2507.19700v1
- Date: Fri, 25 Jul 2025 22:38:06 GMT
- Title: Disjoint Generative Models
- Authors: Anton Danholt Lautrup, Muhammad Rajabinasab, Tobias Hyrup, Arthur Zimek, Peter Schneider-Kamp,
- Abstract summary: We propose a new framework for generating cross-sectional synthetic datasets via disjoint generative models. In this paradigm, a dataset is partitioned into disjoint subsets that are supplied to separate instances of generative models. Results are then combined post hoc by a joining operation that works in the absence of common variables/identifiers.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new framework for generating cross-sectional synthetic datasets via disjoint generative models. In this paradigm, a dataset is partitioned into disjoint subsets that are supplied to separate instances of generative models. The results are then combined post hoc by a joining operation that works in the absence of common variables/identifiers. The success of the framework is demonstrated through several case studies and examples on tabular data that help illuminate some of the design choices one may make. The principal benefit of disjoint generative models is significantly increased privacy at only a low utility cost. Additional findings include increased effectiveness and feasibility for certain model types and the possibility of mixed-model synthesis.
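The pipeline described in the abstract (column-wise partition, independent generators, identifier-free join) can be sketched as follows. This is a minimal illustration under stated assumptions: the per-subset "generator" is a placeholder diagonal-Gaussian fit, and the join is an independent row permutation per block. The paper's actual generative models and joining operation are not specified here; both choices below are illustrative stand-ins.

```python
# Sketch of disjoint generative modelling: partition a tabular dataset
# column-wise, fit an independent generator per subset, join post hoc.
import numpy as np

rng = np.random.default_rng(0)

def fit_gaussian_generator(block):
    """Fit a diagonal Gaussian to one column block (placeholder model)."""
    mu, sigma = block.mean(axis=0), block.std(axis=0) + 1e-9
    return lambda n: rng.normal(mu, sigma, size=(n, block.shape[1]))

def disjoint_synthesize(data, partitions, n_samples):
    """Train one generator per disjoint column subset, then join the parts."""
    parts = []
    for cols in partitions:
        gen = fit_gaussian_generator(data[:, cols])
        synth = gen(n_samples)
        # Joining without shared identifiers: an independent row permutation
        # per block -- one simple choice among those the framework allows.
        parts.append(synth[rng.permutation(n_samples)])
    return np.hstack(parts)

# Toy dataset: 4 numeric columns, partitioned into two disjoint halves.
real = rng.normal(size=(500, 4))
synthetic = disjoint_synthesize(real, partitions=[[0, 1], [2, 3]], n_samples=200)
print(synthetic.shape)  # (200, 4)
```

Because each model only ever sees its own column subset, no single component holds a full record, which is the intuition behind the privacy gain the abstract reports.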
Related papers
- Synergistic Benefits of Joint Molecule Generation and Property Prediction [6.865957689890204]
Hyformer is a transformer-based joint model that blends generative and predictive functionalities. We show that Hyformer is simultaneously optimized for molecule generation and property prediction. We also demonstrate the benefits of joint learning in a drug-design use case: discovering novel antimicrobial peptides.
arXiv Detail & Related papers (2025-04-23T09:36:46Z)
- Statistical inference on black-box generative models in the data kernel perspective space [10.948308354932639]
We extend results on representations of black-box generative models to model-level statistical inference tasks. We demonstrate that the model-level representations are effective for multiple inference tasks.
arXiv Detail & Related papers (2024-10-01T22:28:39Z)
- ComboStoc: Combinatorial Stochasticity for Diffusion Generative Models [65.82630283336051]
We show that the space spanned by the combination of dimensions and attributes is insufficiently sampled by existing training schemes of diffusion generative models.
We present a simple fix to this problem by constructing processes that fully exploit the structures, hence the name ComboStoc.
arXiv Detail & Related papers (2024-05-22T15:23:10Z)
- Compositional Generative Modeling: A Single Model is Not All You Need [29.050431676226115]
We argue that we should instead construct large generative systems by composing smaller generative models together.
We show how such a compositional generative approach enables us to learn distributions in a more data-efficient manner.
arXiv Detail & Related papers (2024-02-02T02:40:51Z)
- DORE: Document Ordered Relation Extraction based on Generative Framework [56.537386636819626]
This paper investigates the root cause of the underwhelming performance of the existing generative DocRE models.
We propose to generate a symbolic and ordered sequence from the relation matrix, which is deterministic and easier for the model to learn.
Experimental results on four datasets show that our proposed method can improve the performance of the generative DocRE models.
arXiv Detail & Related papers (2022-10-28T11:18:10Z)
- De-Biasing Generative Models using Counterfactual Methods [0.0]
We propose a new decoder-based framework named the Causal Counterfactual Generative Model (CCGM).
Our proposed method combines a causal latent-space VAE model with specific modifications to emphasize causal fidelity.
We explore how better disentanglement of causal learning and encoding/decoding generates higher causal intervention quality.
arXiv Detail & Related papers (2022-07-04T16:53:20Z)
- Hybrid Feature- and Similarity-Based Models for Prediction and Interpretation using Large-Scale Observational Data [0.0]
We propose a hybrid feature- and similarity-based model for supervised learning.
The proposed hybrid model is fit by convex optimization with a sparsity-inducing penalty on the kernel portion.
We compared our models to solely feature- and similarity-based approaches using synthetic data and using EHR data to predict risk of loneliness or social isolation.
arXiv Detail & Related papers (2022-04-12T20:37:03Z)
- Hierarchical Few-Shot Generative Models [18.216729811514718]
We study a latent-variable approach that extends the Neural Statistician to a fully hierarchical approach with attention-based point-to-set-level aggregation.
Our results show that the hierarchical formulation better captures the intrinsic variability within the sets in the small data regime.
arXiv Detail & Related papers (2021-10-23T19:19:39Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose an FMR model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
- Partially Conditioned Generative Adversarial Networks [75.08725392017698]
Generative Adversarial Networks (GANs) let one synthesise artificial datasets by implicitly modelling the underlying probability distribution of a real-world training dataset.
With the introduction of Conditional GANs and their variants, these methods were extended to generating samples conditioned on ancillary information available for each sample within the dataset.
In this work, we argue that standard Conditional GANs are not suitable when the conditioning information is only partially available, and propose a new Adversarial Network architecture and training strategy.
arXiv Detail & Related papers (2020-07-06T15:59:28Z)
- Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models [86.9292779620645]
We develop a contrastive framework for generative model learning, allowing us to train the model not just by the commonality between modalities, but by the distinction between "related" and "unrelated" multimodal data.
Under our proposed framework, the generative model can accurately identify related samples from unrelated ones, making it possible to make use of the plentiful unlabeled, unpaired multimodal data.
arXiv Detail & Related papers (2020-07-02T15:08:11Z)
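The "related vs. unrelated" training signal described in the entry above can be illustrated with a generic InfoNCE-style contrastive objective over toy embeddings. This is a hedged sketch, not the paper's actual generative-model formulation: the embeddings, the temperature, and the loss form are all assumptions chosen for illustration.

```python
# Generic contrastive objective: score related (paired) multimodal samples
# above unrelated ones. Related pairs sit on the diagonal of the
# similarity matrix; everything off-diagonal acts as a negative.
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Cross-entropy that pushes each z_a[i] toward its pair z_b[i]
    and away from the other ("unrelated") rows of z_b."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (n, n) similarity scores
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # related pairs on the diagonal

rng = np.random.default_rng(1)
modality_a = rng.normal(size=(8, 16))                   # e.g. one modality's embeddings
matched = modality_a + 0.01 * rng.normal(size=(8, 16))  # "related" second modality
mispaired = np.roll(matched, 1, axis=0)                 # deliberately "unrelated" pairing
print(info_nce_loss(modality_a, matched) < info_nce_loss(modality_a, mispaired))  # True
```

A model trained under such an objective assigns low loss only when pairings are genuinely related, which is what lets the framework exploit unpaired multimodal data as negatives.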
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.