ScoreFusion: fusing score-based generative models via Kullback-Leibler barycenters
- URL: http://arxiv.org/abs/2406.19619v2
- Date: Fri, 18 Oct 2024 06:40:37 GMT
- Title: ScoreFusion: fusing score-based generative models via Kullback-Leibler barycenters
- Authors: Hao Liu, Junze Tony Ye, Jose Blanchet, Nian Si
- Abstract summary: We introduce ScoreFusion, a theoretically grounded method for fusing multiple pre-trained diffusion models.
Our starting point considers the family of KL barycenters of the auxiliary populations, which is proven to be an optimal parametric class in the KL sense.
By recasting the learning problem as score matching in denoising diffusion, we obtain a tractable way of computing the optimal KL barycenter weights.
- Score: 8.08976346461518
- Abstract: We introduce ScoreFusion, a theoretically grounded method for fusing multiple pre-trained diffusion models that are assumed to generate from auxiliary populations. ScoreFusion is particularly useful for enhancing the generative modeling of a target population with limited observed data. Our starting point considers the family of KL barycenters of the auxiliary populations, which is proven to be an optimal parametric class in the KL sense, but difficult to learn. Nevertheless, by recasting the learning problem as score matching in denoising diffusion, we obtain a tractable way of computing the optimal KL barycenter weights. We prove a dimension-free sample complexity bound in total variation distance, provided that the auxiliary models are well fitted for their own task and the auxiliary tasks combined capture the target well. We also explain a connection of the practice of checkpoint merging in AI art creation to an approximation of our KL-barycenter-based fusion approach. However, our fusion method differs in key aspects, allowing generation of new populations, as we illustrate in experiments.
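As a rough illustration of the idea sketched in the abstract, the following minimal PyTorch snippet treats the fused score as a convex combination of frozen auxiliary score networks and learns only the combination weights by denoising score matching on the target sample. This is a hedged sketch, not the authors' released implementation: the function names (`fused_score`, `fit_barycenter_weights`, `aux_scores`), the batch size, and the simple noise schedule are assumptions made for the example.

```python
# Minimal sketch (not the authors' code): fuse pre-trained score networks as a
# weighted combination and fit the weights by denoising score matching on a
# small target sample. Data is assumed flat, i.e. target_data has shape (N, dim).
import torch

def fused_score(x, t, aux_scores, lambdas):
    """Weighted combination of the frozen auxiliary score estimates at noise level t."""
    weights = torch.softmax(lambdas, dim=0)               # keep weights on the simplex
    scores = torch.stack([s(x, t) for s in aux_scores])   # shape (K, batch, dim)
    return (weights.view(-1, 1, 1) * scores).sum(dim=0)   # shape (batch, dim)

def fit_barycenter_weights(target_data, aux_scores, n_steps=500, lr=1e-2):
    """Learn the K fusion weights by denoising score matching on the target data,
    keeping the auxiliary score networks themselves frozen."""
    lambdas = torch.zeros(len(aux_scores), requires_grad=True)
    opt = torch.optim.Adam([lambdas], lr=lr)
    for _ in range(n_steps):
        x0 = target_data[torch.randint(len(target_data), (64,))]
        t = torch.rand(1)                    # noise scale in (0, 1)
        sigma = 0.01 + 0.99 * t              # simple illustrative noise schedule
        noise = torch.randn_like(x0)
        xt = x0 + sigma * noise
        target = -noise / sigma              # score of the Gaussian perturbation kernel
        loss = ((fused_score(xt, t, aux_scores, lambdas) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.softmax(lambdas, dim=0).detach()
```

In this sketch each element of `aux_scores` is a callable `(x, t) -> score` wrapping a pre-trained diffusion model; the learned weights can then be plugged into any standard reverse-diffusion sampler to generate from the fused model.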
Related papers
- Interaction-Aware Gaussian Weighting for Clustered Federated Learning [58.92159838586751]
Federated Learning (FL) emerged as a decentralized paradigm to train models while preserving privacy.
We propose a novel clustered FL method, FedGWC (Federated Gaussian Weighting Clustering), which groups clients based on their data distribution.
Our experiments on benchmark datasets show that FedGWC outperforms existing FL algorithms in cluster quality and classification accuracy.
arXiv Detail & Related papers (2025-02-05T16:33:36Z) - The Unreasonable Effectiveness of Gaussian Score Approximation for Diffusion Models and its Applications [1.8416014644193066]
We compare learned neural scores to the scores of two kinds of analytically tractable distributions.
We claim that the learned neural score is dominated by its linear (Gaussian) approximation for moderate to high noise scales.
We show that this allows skipping the first 15-30% of sampling steps while maintaining high sample quality.
arXiv Detail & Related papers (2024-12-12T21:31:27Z) - A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization [7.378582040635655]
Current deep learning approaches rely on generative models that yield exact sample likelihoods.
This work introduces a method that lifts this restriction and opens the possibility to employ highly expressive latent variable models.
We experimentally validate our approach in data-free Combinatorial Optimization and demonstrate that our method achieves a new state-of-the-art on a wide range of benchmark problems.
arXiv Detail & Related papers (2024-06-03T17:55:02Z) - Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z) - Improved off-policy training of diffusion samplers [93.66433483772055]
We study the problem of training diffusion models to sample from a distribution with an unnormalized density or energy function.
We benchmark several diffusion-structured inference methods, including simulation-based variational approaches and off-policy methods.
Our results shed light on the relative advantages of existing algorithms while bringing into question some claims from past work.
arXiv Detail & Related papers (2024-02-07T18:51:49Z) - Ensemble Modeling for Multimodal Visual Action Recognition [50.38638300332429]
We propose an ensemble modeling approach for multimodal action recognition.
We independently train individual modality models using a variant of focal loss tailored to handle the long-tailed distribution of the MECCANO [21] dataset.
arXiv Detail & Related papers (2023-08-10T08:43:20Z) - Diff-Instruct: A Universal Approach for Transferring Knowledge From
Pre-trained Diffusion Models [77.83923746319498]
We propose a framework called Diff-Instruct to instruct the training of arbitrary generative models.
We show that Diff-Instruct results in state-of-the-art single-step diffusion-based models.
Experiments on refining GAN models show that Diff-Instruct consistently improves the pre-trained generators of GAN models.
arXiv Detail & Related papers (2023-05-29T04:22:57Z) - Learning Data Representations with Joint Diffusion Models [20.25147743706431]
Joint machine learning models that allow synthesizing and classifying data often offer uneven performance between those tasks or are unstable to train.
We extend the vanilla diffusion model with a classifier that allows for stable joint end-to-end training with shared parameterization between those objectives.
The resulting joint diffusion model outperforms recent state-of-the-art hybrid methods in terms of both classification and generation quality on all evaluated benchmarks.
arXiv Detail & Related papers (2023-01-31T13:29:19Z) - A Decentralized Approach to Bayesian Learning [26.74338464389837]
Motivated by decentralized approaches to machine learning, we propose a collaborative learning approach that takes the form of decentralized Langevin dynamics.
Our analysis shows that the KL divergence between the Markov chain and the target posterior distribution decreases exponentially from its initial value.
Individual agents using only locally available data achieve performance on par with the centralized setting, with a considerable improvement in the rate.
arXiv Detail & Related papers (2020-07-14T03:59:17Z) - Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)