Optimal Transport-Guided Conditional Score-Based Diffusion Models
- URL: http://arxiv.org/abs/2311.01226v1
- Date: Thu, 2 Nov 2023 13:28:44 GMT
- Title: Optimal Transport-Guided Conditional Score-Based Diffusion Models
- Authors: Xiang Gu, Liwei Yang, Jian Sun, Zongben Xu
- Abstract summary: The conditional score-based diffusion model (SBDM) generates target data conditioned on paired data and has achieved great success in image translation.
To tackle applications with partially paired or even unpaired datasets, this paper proposes a novel Optimal Transport-guided Conditional Score-based diffusion model (OTCS).
- Score: 63.14903268958398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The conditional score-based diffusion model (SBDM) generates target data conditioned on paired data and has achieved great success in image translation. However, it requires paired data as the condition, and sufficient paired data are often unavailable in real-world applications. To tackle applications with partially paired or even unpaired datasets, we propose a novel Optimal Transport-guided Conditional Score-based diffusion model (OTCS) in this paper. We build the coupling relationship for the unpaired or partially paired dataset based on $L_2$-regularized unsupervised or semi-supervised optimal transport, respectively. Based on this coupling relationship, we develop the training objective for the conditional score-based model in the unpaired or partially paired settings, building on a reformulation and generalization of the conditional SBDM for the paired setting. With the estimated coupling relationship, we effectively train the conditional score-based model by designing a ``resampling-by-compatibility'' strategy that chooses sampled data with high compatibility as guidance. Extensive experiments on unpaired super-resolution and semi-paired image-to-image translation demonstrate the effectiveness of the proposed OTCS model. From the viewpoint of optimal transport, OTCS provides an approach to transporting data across distributions, which is a challenge for OT on large-scale datasets. We theoretically prove that OTCS realizes this data transport with a theoretical bound. Code is available at \url{https://github.com/XJTU-XGU/OTCS}.
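The coupling-then-guidance recipe in the abstract can be sketched on toy data. The snippet below is a minimal illustration, not the paper's implementation: it uses entropic (Sinkhorn) OT as a stand-in for the paper's $L_2$-regularized OT solver, and it picks a guidance partner for each source point with probability proportional to its row of the transport plan, loosely mimicking the ``resampling-by-compatibility'' idea. All function names are hypothetical.

```python
import numpy as np

def ot_plan(X, Y, eps=1.0, n_iters=200):
    """Discrete OT plan via Sinkhorn iterations (entropic stand-in for
    the paper's L2-regularized OT; uniform marginals assumed)."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared-L2 cost
    K = np.exp(-C / eps)
    a = np.full(len(X), 1.0 / len(X))                   # source marginal
    b = np.full(len(Y), 1.0 / len(Y))                   # target marginal
    u = np.ones_like(a)
    for _ in range(n_iters):                            # alternating scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]                  # coupling matrix

def resample_by_compatibility(plan, i, rng):
    """Sample a target index for source i with probability proportional to
    its plan row, so high-compatibility pairs guide training more often."""
    p = plan[i] / plan[i].sum()
    return rng.choice(len(p), p=p)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))            # unpaired source samples
Y = rng.normal(loc=2.0, size=(8, 2))   # unpaired target samples
plan = ot_plan(X, Y)
j = resample_by_compatibility(plan, 0, rng)  # guidance partner for X[0]
```

In the full method, such a coupling replaces the missing pairing: the sampled (source, target) pairs supply the condition for the conditional score-matching objective.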
Related papers
- Everything to the Synthetic: Diffusion-driven Test-time Adaptation via Synthetic-Domain Alignment [76.44483062571611]
Test-time adaptation (TTA) aims to enhance the performance of source-domain pretrained models when tested on unknown shifted target domains.
Traditional TTA methods primarily adapt model weights based on target data streams, making model performance sensitive to the amount and order of target data.
Recent diffusion-driven TTA methods have demonstrated strong performance by using an unconditional diffusion model.
arXiv Detail & Related papers (2024-06-06T17:39:09Z)
- CondTSF: One-line Plugin of Dataset Condensation for Time Series Forecasting [22.473436770730657]
The objective of dataset condensation is to ensure that the model trained with the synthetic dataset can perform comparably to the model trained with full datasets.
In classification, the synthetic data is considered well-distilled if the model trained with the full dataset and the model trained with the synthetic dataset yield identical labels for the same input.
In TS-forecasting, the effectiveness of synthetic data distillation is determined by the distance between predictions of the two models.
arXiv Detail & Related papers (2024-06-04T09:18:20Z)
- Efficient adjustment for complex covariates: Gaining efficiency with DOPE [56.537164957672715]
We propose a framework that accommodates adjustment for any subset of information expressed by the covariates.
Based on our theoretical results, we propose the Debiased Outcome-adapted Propensity Estimator (DOPE) for efficient estimation of the average treatment effect (ATE).
Our results show that the DOPE provides an efficient and robust methodology for ATE estimation in various observational settings.
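DOPE itself adapts the propensity to outcome-informed covariate information; as a generic point of reference only, the classical augmented IPW (doubly-robust) estimator of the ATE combines an outcome model and a propensity model in the same spirit. A minimal simulation with oracle nuisances follows; all names and the data-generating process are illustrative, not the paper's estimator.

```python
import numpy as np

def aipw_ate(y, t, e, mu1, mu0):
    """Augmented IPW estimate of the average treatment effect, given
    propensities e(x) and outcome-model predictions mu1(x), mu0(x)."""
    return np.mean(mu1 - mu0
                   + t * (y - mu1) / e
                   - (1 - t) * (y - mu0) / (1 - e))

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))          # true propensity score
t = rng.binomial(1, e)                # treatment assignment
y = x + 2.0 * t + rng.normal(size=n)  # true ATE is 2
ate = aipw_ate(y, t, e, mu1=x + 2.0, mu0=x)  # oracle nuisances
```

With consistent nuisance estimates the estimator is doubly robust; DOPE's contribution is choosing which covariate information enters the propensity for efficiency.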
arXiv Detail & Related papers (2024-02-20T13:02:51Z)
- Generative Modeling through the Semi-dual Formulation of Unbalanced Optimal Transport [9.980822222343921]
We propose a novel generative model based on the semi-dual formulation of Unbalanced Optimal Transport (UOT).
Unlike OT, UOT relaxes the hard constraint on distribution matching. This approach provides better robustness against outliers, stability during training, and faster convergence.
Our model outperforms existing OT-based generative models, achieving FID scores of 2.97 on CIFAR-10 and 6.36 on CelebA-HQ-256.
arXiv Detail & Related papers (2023-05-24T06:31:05Z)
- Learning the joint distribution of two sequences using little or no paired data [16.189575655434844]
We present a noisy channel generative model of two sequences, for example text and speech.
We show that even a tiny amount of paired data is sufficient to learn to relate the two modalities when a massive amount of unpaired data is available.
arXiv Detail & Related papers (2022-12-06T18:56:15Z)
- Comparing Probability Distributions with Conditional Transport [63.11403041984197]
We propose conditional transport (CT) as a new divergence and approximate it with the amortized CT (ACT) cost.
ACT amortizes the computation of its conditional transport plans and comes with unbiased sample gradients that are straightforward to compute.
On a wide variety of benchmark datasets for generative modeling, substituting the default statistical distance of an existing generative adversarial network with ACT is shown to consistently improve performance.
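As a rough numerical illustration of the conditional-transport idea (not ACT's learned, amortized navigator), one can form a conditional plan for each point by a softmax over negative costs in each direction and average the resulting transport costs. The softmax "navigator" and all names below are stand-in assumptions.

```python
import numpy as np

def _softmax(A, axis):
    A = A - A.max(axis=axis, keepdims=True)  # stabilize before exponentiating
    E = np.exp(A)
    return E / E.sum(axis=axis, keepdims=True)

def ct_cost(X, Y, temp=1.0):
    """Toy conditional-transport cost: each point routes its mass to the other
    sample through a softmax conditional plan; forward (x -> y) and backward
    (y -> x) costs are averaged. ACT instead learns a navigator network and
    amortizes the computation of these plans."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)           # pairwise cost
    fwd = (_softmax(-C / temp, axis=1) * C).sum(axis=1).mean()   # x -> y plans
    bwd = (_softmax(-C / temp, axis=0) * C).sum(axis=0).mean()   # y -> x plans
    return 0.5 * (fwd + bwd)

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 2))
near, far = ct_cost(X, X + 0.1), ct_cost(X, X + 5.0)  # shifted copies of X
```

The cost grows as the two samples drift apart, which is the behavior a divergence used as a GAN training signal needs.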
arXiv Detail & Related papers (2020-12-28T05:14:22Z) - Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, this divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
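The decomposition behind AR-CSM can be checked on a closed-form case: for a Gaussian AR(1) chain, the joint score is exactly the sum of the derivative contributions from the univariate log-conditionals. The small numerical check below uses an illustrative model and names, not the AR-CSM parameterization itself.

```python
import numpy as np

A, SIGMA = 0.5, 1.0  # AR(1): x_d = A*x_{d-1} + N(0, SIGMA^2), x_0 ~ N(0, SIGMA^2)

def log_joint(x):
    """log p(x) as a sum of univariate log-conditionals (up to constants)."""
    mu = np.concatenate([[0.0], A * x[:-1]])       # conditional means
    return -0.5 * np.sum((x - mu) ** 2) / SIGMA**2

def joint_score(x):
    """grad_x log p(x), assembled from the conditionals' derivatives."""
    mu = np.concatenate([[0.0], A * x[:-1]])
    s = -(x - mu) / SIGMA**2                        # x_d in its own conditional
    s[:-1] += A * (x[1:] - A * x[:-1]) / SIGMA**2   # x_d in the next one's mean
    return s

x = np.array([0.3, -1.2, 0.7, 2.0])
analytic = joint_score(x)
```

A finite-difference check on `log_joint` confirms that summing per-conditional derivatives recovers the full joint score, which is what lets AR-CSM train each univariate score without sampling or adversarial objectives.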
arXiv Detail & Related papers (2020-10-24T07:01:24Z) - Robust Optimal Transport with Applications in Generative Modeling and
Domain Adaptation [120.69747175899421]
Optimal Transport (OT) distances such as Wasserstein have been used in several areas such as GANs and domain adaptation.
We propose a computationally-efficient dual form of the robust OT optimization that is amenable to modern deep learning applications.
Our approach can train state-of-the-art GAN models on noisy datasets corrupted with outlier distributions.
arXiv Detail & Related papers (2020-10-12T17:13:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.