Adaptive Multimodal Protein Plug-and-Play with Diffusion-Based Priors
- URL: http://arxiv.org/abs/2507.21260v1
- Date: Mon, 28 Jul 2025 18:28:03 GMT
- Title: Adaptive Multimodal Protein Plug-and-Play with Diffusion-Based Priors
- Authors: Amartya Banerjee, Xingyu Xu, Caroline Moosmüller, Harlin Lee
- Abstract summary: In an inverse problem, the goal is to recover an unknown parameter that has typically undergone some lossy or noisy transformation during measurement. Recently, deep generative models, particularly diffusion models, have emerged as powerful priors for protein structure generation. We introduce Adam-PnP, a Plug-and-Play framework that guides a pre-trained protein diffusion model using gradients from multiple, heterogeneous experimental sources.
- Score: 5.809784853115825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In an inverse problem, the goal is to recover an unknown parameter (e.g., an image) that has typically undergone some lossy or noisy transformation during measurement. Recently, deep generative models, particularly diffusion models, have emerged as powerful priors for protein structure generation. However, integrating noisy experimental data from multiple sources to guide these models remains a significant challenge. Existing methods often require precise knowledge of experimental noise levels and manually tuned weights for each data modality. In this work, we introduce Adam-PnP, a Plug-and-Play framework that guides a pre-trained protein diffusion model using gradients from multiple, heterogeneous experimental sources. Our framework features an adaptive noise estimation scheme and a dynamic modality weighting mechanism integrated into the diffusion process, which reduce the need for manual hyperparameter tuning. Experiments on complex reconstruction tasks demonstrate significantly improved accuracy using Adam-PnP.
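To make the mechanism in the abstract concrete, here is a minimal sketch of one plug-and-play guided step: a prior denoising pass followed by an inverse-variance-weighted data-consistency gradient summed over modalities. Every name, the step size, and the assumption of linear forward operators are illustrative choices here, not the authors' actual Adam-PnP implementation.

```python
import numpy as np

def pnp_guided_step(x_t, t, denoise, ys, As, sigma2s, step=0.1):
    """One hypothetical guided diffusion step (illustrative sketch).

    x_t     : current sample, shape (d,)
    denoise : pre-trained prior denoiser, x_hat = denoise(x_t, t)
    ys      : list of measurements y_k, one per experimental modality
    As      : list of linear forward operators (matrices) A_k
    sigma2s : estimated noise variances, one per modality
    """
    x_hat = denoise(x_t, t)                 # prior: denoise with the pretrained model
    grad = np.zeros_like(x_hat)
    for y, A, s2 in zip(ys, As, sigma2s):
        r = A @ x_hat - y                   # data residual for this modality
        grad += A.T @ r / max(s2, 1e-8)     # inverse-variance weighted gradient
    return x_hat - step * grad              # pull the denoised estimate toward the data
```

With per-modality noise variances estimated adaptively, noisier measurements are automatically down-weighted, which is the intuition behind avoiding manually tuned modality weights.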
Related papers
- Spectral Regularization for Diffusion Models [14.919876123456747]
We propose a loss-level spectral regularization framework that augments standard diffusion training with differentiable Fourier- and wavelet-domain losses. Our approach is compatible with DDPM, DDIM, and EDM formulations and introduces negligible computational overhead.
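As a rough picture of what a Fourier-domain auxiliary loss can look like (a sketch under our own assumptions; `spectral_loss` and its weighting are hypothetical, not this paper's formulation):

```python
import numpy as np

def spectral_loss(pred, target, weight=0.1):
    """Hypothetical Fourier-domain auxiliary loss: penalize the
    magnitude-spectrum mismatch between prediction and target,
    added on top of the usual pixel-space MSE."""
    pixel_mse = np.mean((pred - target) ** 2)
    # compare magnitude spectra so the penalty acts on frequency content
    spec_mse = np.mean((np.abs(np.fft.fft2(pred)) - np.abs(np.fft.fft2(target))) ** 2)
    return pixel_mse + weight * spec_mse
```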
arXiv Detail & Related papers (2026-03-02T22:39:02Z)
- Automated Tuning for Diffusion Inverse Problem Solvers without Generative Prior Retraining [4.511561231517167]
Diffusion/score-based models have emerged as powerful generative priors for solving inverse problems. We propose Zero-shot Adaptive Diffusion Sampling (ZADS), a test-time optimization method that tunes fidelity weights across arbitrary noise schedules. ZADS consistently outperforms both traditional compressed sensing and recent diffusion-based methods.
arXiv Detail & Related papers (2025-09-11T22:22:32Z)
- One-Step Diffusion Model for Image Motion-Deblurring [85.76149042561507]
We propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step. To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration. Our method achieves strong performance on both full-reference and no-reference metrics.
arXiv Detail & Related papers (2025-03-09T09:39:57Z)
- Generalized Diffusion Model with Adjusted Offset Noise [1.7767466724342067]
We propose a generalized diffusion model that naturally incorporates additional noise within a rigorous probabilistic framework. We derive a loss function based on the evidence lower bound, establishing its theoretical equivalence to offset noise with certain adjustments. Experiments on synthetic datasets demonstrate that our model effectively addresses brightness-related challenges and outperforms conventional methods in high-dimensional scenarios.
arXiv Detail & Related papers (2024-12-04T08:57:03Z)
- Heuristically Adaptive Diffusion-Model Evolutionary Strategy [1.8299322342860518]
Diffusion Models represent a significant advancement in generative modeling.
Our research reveals a fundamental connection between diffusion models and evolutionary algorithms.
Our framework marks a major algorithmic transition, offering increased flexibility, precision, and control in evolutionary optimization processes.
arXiv Detail & Related papers (2024-11-20T16:06:28Z)
- One More Step: A Versatile Plug-and-Play Module for Rectifying Diffusion Schedule Flaws and Enhancing Low-Frequency Controls [77.42510898755037]
One More Step (OMS) is a compact network that incorporates an additional simple yet effective step during inference.
OMS elevates image fidelity and harmonizes the dichotomy between training and inference, while preserving original model parameters.
Once trained, various pre-trained diffusion models with the same latent domain can share the same OMS module.
arXiv Detail & Related papers (2023-11-27T12:02:42Z)
- Diffusion Reconstruction of Ultrasound Images with Informative Uncertainty [5.375425938215277]
Enhancing ultrasound image quality involves balancing concurrent factors like contrast, resolution, and speckle preservation.
We propose a hybrid approach leveraging advances in diffusion models.
We conduct comprehensive experiments on simulated, in-vitro, and in-vivo data, demonstrating the efficacy of our approach.
arXiv Detail & Related papers (2023-10-31T16:51:40Z)
- SMRD: SURE-based Robust MRI Reconstruction with Diffusion Models [76.43625653814911]
Diffusion models have gained popularity for accelerated MRI reconstruction due to their high sample quality.
They can effectively serve as rich data priors while incorporating the forward model flexibly at inference time.
We introduce SURE-based MRI Reconstruction with Diffusion models (SMRD) to enhance robustness during testing.
arXiv Detail & Related papers (2023-10-03T05:05:35Z)
- Understanding Pathologies of Deep Heteroskedastic Regression [25.509884677111344]
Heteroskedastic models predict both mean and residual noise for each data point.
At one extreme, these models fit all training data perfectly, eliminating residual noise entirely.
At the other, they overfit the residual noise while predicting a constant, uninformative mean.
We observe a lack of middle ground, suggesting a phase transition dependent on model regularization strength.
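The two extremes described above arise from the heteroskedastic Gaussian negative log-likelihood, which in minimal form looks like this (an illustrative sketch, not code from the paper):

```python
import numpy as np

def hetero_nll(y, mu, log_var):
    """Gaussian NLL where the model predicts both a mean mu and a
    per-point log-variance log_var for each data point."""
    return np.mean(0.5 * (log_var + (y - mu) ** 2 * np.exp(-log_var)))
```

Driving `log_var` to large negative values at training points makes the loss arbitrarily negative while fitting them exactly, whereas inflating `log_var` lets an uninformative constant mean go nearly unpenalized; these are the two failure modes the summary describes.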
arXiv Detail & Related papers (2023-06-29T06:31:27Z)
- Data Augmentation for Seizure Prediction with Generative Diffusion Model [34.12334834099495]
We propose a novel diffusion-based DA method called DiffEEG. It can fully explore the data distribution and generate samples with high diversity. With the contribution of DiffEEG, the Multi-scale CNN achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-06-14T05:44:53Z)
- DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly Detection [80.20339155618612]
DiffusionAD is a novel anomaly detection pipeline comprising a reconstruction sub-network and a segmentation sub-network. A rapid one-step denoising paradigm achieves hundreds of times acceleration while preserving comparable reconstruction quality. Considering the diversity in the manifestation of anomalies, we propose a norm-guided paradigm to integrate the benefits of multiple noise scales.
arXiv Detail & Related papers (2023-03-15T16:14:06Z)
- Diffusion Model Based Posterior Sampling for Noisy Linear Inverse Problems [14.809545109705256]
This paper presents a fast and effective solution by proposing a simple closed-form approximation to the likelihood score.
For both diffusion and flow-based models, extensive experiments are conducted on various noisy linear inverse problems.
Our method demonstrates highly competitive or even better reconstruction performances while being significantly faster than all the baseline methods.
arXiv Detail & Related papers (2022-11-20T01:09:49Z)
- Diffusion Posterior Sampling for General Noisy Inverse Problems [50.873313752797124]
We extend diffusion solvers to handle noisy (non)linear inverse problems via approximation of the posterior sampling.
Our method demonstrates that diffusion models can incorporate various measurement noise statistics.
arXiv Detail & Related papers (2022-09-29T11:12:27Z)
- SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection [63.253850875265115]
Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples.
We propose a modular acceleration system, called SUOD, to address it.
arXiv Detail & Related papers (2020-03-11T00:22:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.