Constrained Diffusion with Trust Sampling
- URL: http://arxiv.org/abs/2411.10932v1
- Date: Sun, 17 Nov 2024 01:34:57 GMT
- Title: Constrained Diffusion with Trust Sampling
- Authors: William Huang, Yifeng Jiang, Tom Van Wouwe, C. Karen Liu
- Abstract summary: We rethink training-free loss-guided diffusion from an optimization perspective.
Trust sampling effectively balances following the unconditional diffusion model with adhering to the loss guidance.
We demonstrate the efficacy of our method through extensive experiments on complex tasks in the drastically different domains of image and 3D motion generation.
- Score: 11.354281911272864
- Abstract: Diffusion models have demonstrated significant promise in various generative tasks; however, they often struggle to satisfy challenging constraints. Our approach addresses this limitation by rethinking training-free loss-guided diffusion from an optimization perspective. We formulate a series of constrained optimizations throughout the inference process of a diffusion model. In each optimization, we allow the sample to take multiple steps along the gradient of the proxy constraint function until we can no longer trust the proxy, according to the variance at each diffusion level. Additionally, we estimate the state manifold of the diffusion model to allow for early termination when the sample starts to wander away from the state manifold at each diffusion step. Trust sampling effectively balances following the unconditional diffusion model with adhering to the loss guidance, enabling more flexible and accurate constrained generation. We demonstrate the efficacy of our method through extensive experiments on complex tasks in the drastically different domains of image and 3D motion generation, showing significant improvements over existing methods in terms of generation quality. Our implementation is available at https://github.com/will-s-h/trust-sampling.
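As a rough illustration of the loop the abstract describes, here is a minimal NumPy sketch. The model, the proxy constraint, the trust budget, and the manifold check are all toy stand-ins; names such as `unconditional_denoise` and `trust_sampling` are ours, not the authors' API.

```python
import numpy as np

# All functions below are illustrative stand-ins: a real run would use a
# pretrained diffusion model and a task-specific constraint.
def unconditional_denoise(x, t):
    """One reverse-diffusion step of the unconditional model (toy shrink)."""
    return 0.95 * x

def constraint_grad(x):
    """Gradient of a toy proxy constraint, (mean(x))^2, w.r.t. the sample."""
    return 2.0 * np.mean(x) * np.ones_like(x) / x.size

def trust_sampling(x, n_steps=50, lr=0.1):
    for t in reversed(range(n_steps)):
        x = unconditional_denoise(x, t)
        sigma2 = (t + 1) / n_steps       # stand-in for the variance at level t
        budget = int(1 + 10 * sigma2)    # toy trust budget tied to the variance
        for _ in range(budget):
            x_new = x - lr * constraint_grad(x)
            # Early termination: stop when the step wanders off the estimated
            # state manifold (crudely approximated here by a norm ball).
            if np.linalg.norm(x_new) > 1.5 * np.linalg.norm(x):
                break
            x = x_new
    return x

sample = trust_sampling(np.random.randn(16))
print(np.mean(sample) ** 2)   # proxy constraint value after sampling
```

The key structural idea carried over from the abstract is that the number of trusted gradient steps depends on the variance at each diffusion level, so guidance defers to the unconditional model once the proxy can no longer be trusted.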
Related papers
- Training-free Diffusion Model Alignment with Sampling Demons [15.400553977713914]
We propose an optimization approach, dubbed Demon, to guide the denoising process at inference time without backpropagation through reward functions or model retraining.
Our approach works by controlling the noise distribution at each denoising step so that, through optimization, sampling density concentrates on high-reward regions.
To the best of our knowledge, the proposed approach is the first inference-time, backpropagation-free preference alignment method for diffusion models.
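One way to realize such backpropagation-free noise control is a best-of-K search over noise candidates at each denoising step; the sketch below is our reading of the general idea, not the paper's exact algorithm.

```python
import numpy as np

def demon_style_step(x, reward_fn, step_fn, n_candidates=8, noise_scale=0.5):
    """Propose several noise candidates for one denoising step, score the
    resulting states with the reward, and keep the best; no gradients of
    the reward (and hence no backpropagation) are needed."""
    candidates = [step_fn(x) + noise_scale * np.random.randn(*x.shape)
                  for _ in range(n_candidates)]
    scores = [reward_fn(c) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Toy usage: the reward prefers samples with small norm.
x = np.random.randn(8)
for _ in range(20):
    x = demon_style_step(x, reward_fn=lambda z: -np.linalg.norm(z),
                         step_fn=lambda z: 0.9 * z)
print(np.linalg.norm(x))
```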
arXiv Detail & Related papers (2024-10-08T07:33:49Z)
- Diffusion State-Guided Projected Gradient for Inverse Problems [82.24625224110099]
We propose Diffusion State-Guided Projected Gradient (DiffStateGrad) for inverse problems.
DiffStateGrad projects the measurement gradient onto a subspace that is a low-rank approximation of an intermediate state of the diffusion process.
We highlight that DiffStateGrad improves the robustness of diffusion models in terms of the choice of measurement guidance step size and noise.
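A hedged sketch of the projection step, assuming matrix-shaped states and an SVD-based low-rank basis (both assumptions on our part):

```python
import numpy as np

def project_to_state_subspace(grad, state, rank=4):
    """Build a low-rank basis from an intermediate diffusion state via SVD
    and project the measurement gradient onto it, discarding directions
    outside the subspace."""
    u, s, vt = np.linalg.svd(state, full_matrices=False)
    v = vt[:rank].T                      # top-r right singular vectors
    return grad @ v @ v.T                # project each row onto the subspace

state = np.random.randn(32, 32)          # intermediate diffusion state x_t
grad = np.random.randn(32, 32)           # measurement-guidance gradient
print(np.linalg.norm(project_to_state_subspace(grad, state)))
```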
arXiv Detail & Related papers (2024-10-04T14:26:54Z)
- Intention-aware Denoising Diffusion Model for Trajectory Prediction [14.524496560759555]
Trajectory prediction is an essential component in autonomous driving, particularly for collision avoidance systems.
We propose utilizing the diffusion model to generate the distribution of future trajectories.
We propose an Intention-aware denoising Diffusion Model (IDM).
Our methods achieve state-of-the-art results, with an FDE of 13.83 pixels on the SDD dataset and 0.36 meters on the ETH/UCY dataset.
arXiv Detail & Related papers (2024-03-14T09:05:25Z)
- Manifold Preserving Guided Diffusion [121.97907811212123]
Conditional image generation still faces challenges of cost, generalizability, and the need for task-specific training.
We propose Manifold Preserving Guided Diffusion (MPGD), a training-free conditional generation framework.
arXiv Detail & Related papers (2023-11-28T02:08:06Z)
- Semi-Implicit Denoising Diffusion Models (SIDDMs) [50.30163684539586]
Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slow because they inherently require a large number of iterative sampling steps.
We introduce a novel approach that tackles the problem by matching implicit and explicit factors.
We demonstrate that our proposed method obtains comparable generative performance to diffusion-based models and vastly superior results to models with a small number of sampling steps.
arXiv Detail & Related papers (2023-06-21T18:49:22Z)
- Eliminating Lipschitz Singularities in Diffusion Models [51.806899946775076]
We show that diffusion models frequently exhibit infinite Lipschitz constants near the zero point of the timestep schedule.
This poses a threat to the stability and accuracy of the diffusion process, which relies on integral operations.
We propose a novel approach, dubbed E-TSDM, which eliminates the Lipschitz singularity of the diffusion model near the zero point.
arXiv Detail & Related papers (2023-06-20T03:05:28Z)
- Conffusion: Confidence Intervals for Diffusion Models [32.36217153362305]
Current diffusion-based methods do not provide statistical guarantees regarding the generated results.
We propose Conffusion, wherein we fine-tune a pre-trained diffusion model to predict interval bounds in a single forward pass.
We show that Conffusion outperforms the baseline method while being three orders of magnitude faster.
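Predicting interval bounds in a single forward pass is commonly done by training lower and upper quantile heads; the pinball loss below is one standard choice, offered as an assumption rather than the paper's exact fine-tuning objective.

```python
import numpy as np

def pinball_loss(pred, target, q):
    """Quantile (pinball) loss; training one head with q=0.05 and another
    with q=0.95 yields lower/upper interval bounds in one forward pass."""
    diff = target - pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

target = np.random.randn(100)
print(pinball_loss(np.zeros(100), target, q=0.05),
      pinball_loss(np.zeros(100), target, q=0.95))
```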
arXiv Detail & Related papers (2022-11-17T18:58:15Z)
- DensePure: Understanding Diffusion Models towards Adversarial Robustness [110.84015494617528]
We analyze the properties of diffusion models and establish the conditions under which they can enhance certified robustness.
We propose a new method, DensePure, designed to improve the certified robustness of a pretrained model (i.e., a classifier).
We show that this robust region is a union of multiple convex sets, and is potentially much larger than the robust regions identified in previous works.
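DensePure-style certification denoises several noisy copies of an input and takes a majority vote over the classifier's predictions; the sketch below illustrates that vote with toy stand-ins for the denoiser and classifier.

```python
import numpy as np

def densepure_style_predict(x, denoise, classify, n_runs=10, sigma=0.25):
    """Denoise several independently perturbed copies of the input, classify
    each, and return the majority-vote label."""
    votes = [classify(denoise(x + sigma * np.random.randn(*x.shape)))
             for _ in range(n_runs)]
    return max(set(votes), key=votes.count)

# Toy usage: the "classifier" thresholds the mean of the denoised signal.
x = np.ones(16)
label = densepure_style_predict(x, denoise=lambda z: 0.9 * z,
                                classify=lambda z: int(np.mean(z) > 0))
print(label)
```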
arXiv Detail & Related papers (2022-11-01T08:18:07Z)
- Improving Diffusion Models for Inverse Problems using Manifold Constraints [55.91148172752894]
We show that current solvers throw the sample path off the data manifold, and hence the error accumulates.
To address this, we propose an additional correction term inspired by the manifold constraint.
We show that our method is superior to the previous methods both theoretically and empirically.
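A minimal sketch of such a correction term, assuming a linear measurement y = A x and applying a gradient step on the measurement residual after each reverse step (the paper's exact correction may differ):

```python
import numpy as np

def corrected_reverse_step(x, y, A, denoise, alpha=0.01):
    """After the ordinary reverse step (here a toy `denoise`), apply a
    correction: a gradient step on 0.5 * ||A x_hat - y||^2, which pulls the
    sample path back toward measurement-consistent states."""
    x_prime = denoise(x)                        # ordinary reverse-diffusion step
    residual = A @ x_prime - y
    return x_prime - alpha * (A.T @ residual)   # manifold-inspired correction

A = np.random.randn(8, 16)
y = A @ np.ones(16)                      # toy linear measurement of a target
x = np.random.randn(16)
for _ in range(300):
    x = corrected_reverse_step(x, y, A, denoise=lambda z: z)
print(np.linalg.norm(A @ x - y))         # residual shrinks over the steps
```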
arXiv Detail & Related papers (2022-06-02T09:06:10Z)