ResEnsemble-DDPM: Residual Denoising Diffusion Probabilistic Models for Ensemble Learning
- URL: http://arxiv.org/abs/2312.01682v1
- Date: Mon, 4 Dec 2023 07:14:20 GMT
- Title: ResEnsemble-DDPM: Residual Denoising Diffusion Probabilistic Models for Ensemble Learning
- Authors: Shi Zhenning, Dong Changsheng, Xie Xueshuo, Pan Bin, He Along, Li Tao
- Abstract summary: We propose ResEnsemble-DDPM, which seamlessly integrates the diffusion model and the end-to-end model through ensemble learning.
Experimental results demonstrate that our ResEnsemble-DDPM can further improve the capabilities of existing models.
- Score: 3.2564047163418754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, denoising diffusion probabilistic models have been adapted for many
image segmentation tasks. However, existing end-to-end models have already
demonstrated remarkable capabilities. Rather than relying on denoising
diffusion probabilistic models alone, combining their strengths with those of
existing end-to-end models can further improve image segmentation performance.
Based on this observation, we implicitly introduce a residual term into the
diffusion process and propose ResEnsemble-DDPM, which seamlessly integrates
the diffusion model and the
end-to-end model through ensemble learning. The output distributions of these
two models are strictly symmetric with respect to the ground truth
distribution, allowing us to integrate the two models by reducing the residual
term. Experimental results demonstrate that our ResEnsemble-DDPM can further
improve the capabilities of existing models. Furthermore, its ensemble learning
strategy can be generalized to other downstream tasks in image generation,
where it achieves strong competitiveness.
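The abstract's key claim is that the two models' output distributions are strictly symmetric about the ground truth, so combining them cancels the residual term. A minimal numerical sketch of that idea is below; all variable names and the simple averaging rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Toy illustration of the symmetry argument: suppose the diffusion model
# overshoots the ground truth by a residual r, while the end-to-end model
# undershoots it by the same r. Averaging the two predictions then cancels
# the residual exactly.
rng = np.random.default_rng(0)
ground_truth = rng.random((8, 8))                 # toy segmentation map
residual = 0.1 * rng.standard_normal((8, 8))      # hypothetical residual term

diffusion_pred = ground_truth + residual          # symmetric output 1
end_to_end_pred = ground_truth - residual         # symmetric output 2

# Ensemble by simple averaging (one possible way to "reduce the residual").
ensemble_pred = 0.5 * (diffusion_pred + end_to_end_pred)

print(np.allclose(ensemble_pred, ground_truth))
```

In practice the symmetry will only hold in distribution rather than sample-by-sample, so the cancellation is approximate, but the sketch shows why an ensemble of two symmetric predictors can outperform either one alone.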
Related papers
- Model Integrity when Unlearning with T2I Diffusion Models [11.321968363411145]
We propose approximate Machine Unlearning algorithms to reduce the generation of specific types of images, characterized by samples from a "forget distribution".
We then propose unlearning algorithms that demonstrate superior effectiveness in preserving model integrity compared to existing baselines.
arXiv Detail & Related papers (2024-11-04T13:15:28Z) - Energy-Based Diffusion Language Models for Text Generation [126.23425882687195]
Energy-based Diffusion Language Model (EDLM) is an energy-based model operating at the full sequence level for each diffusion step.
Our framework offers a 1.3× sampling speedup over existing diffusion models.
arXiv Detail & Related papers (2024-10-28T17:25:56Z) - Inverse design with conditional cascaded diffusion models [0.0]
Adjoint-based design optimizations are usually computationally expensive and those costs scale with resolution.
We extend the use of diffusion models over traditional generative models by proposing the conditional cascaded diffusion model (cCDM).
Our study compares cCDM against a cGAN model with transfer learning.
While both models show decreased performance with reduced high-resolution training data, the cCDM loses its superiority to the cGAN model with transfer learning when training data is limited.
arXiv Detail & Related papers (2024-08-16T04:54:09Z) - Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution [67.9215891673174]
We propose score entropy as a novel loss that naturally extends score matching to discrete spaces.
We test our Score Entropy Discrete Diffusion models on standard language modeling tasks.
arXiv Detail & Related papers (2023-10-25T17:59:12Z) - Soft Mixture Denoising: Beyond the Expressive Bottleneck of Diffusion Models [76.46246743508651]
We show that current diffusion models actually have an expressive bottleneck in backward denoising.
We introduce soft mixture denoising (SMD), an expressive and efficient model for backward denoising.
arXiv Detail & Related papers (2023-09-25T12:03:32Z) - Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Medical Image Reconstruction [75.91471250967703]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the diffusion model, concurrently with image reconstruction, based solely on the information provided by the available measurement.
We achieve substantial enhancements in out-of-distribution performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z) - Image Generation with Multimodal Priors using Denoising Diffusion Probabilistic Models [54.1843419649895]
A major challenge in using generative models to accomplish this task is the lack of paired data containing all modalities and corresponding outputs.
We propose a solution based on denoising diffusion probabilistic models to generate images under multimodal priors.
arXiv Detail & Related papers (2022-06-10T12:23:05Z) - PSD Representations for Effective Probability Models [117.35298398434628]
We show that a recently proposed class of positive semi-definite (PSD) models for non-negative functions is particularly suited to this end.
We characterize both approximation and generalization capabilities of PSD models, showing that they enjoy strong theoretical guarantees.
Our results open the way to applications of PSD models to density estimation, decision theory and inference.
arXiv Detail & Related papers (2021-06-30T15:13:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.