ReQFlow: Rectified Quaternion Flow for Efficient and High-Quality Protein Backbone Generation
- URL: http://arxiv.org/abs/2502.14637v2
- Date: Sat, 29 Mar 2025 07:16:54 GMT
- Title: ReQFlow: Rectified Quaternion Flow for Efficient and High-Quality Protein Backbone Generation
- Authors: Angxiao Yue, Zichong Wang, Hongteng Xu
- Abstract summary: We propose a novel rectified quaternion flow (ReQFlow) matching method for fast and high-quality protein backbone generation. Our method generates a local translation and a 3D rotation from random noise for each residue in a protein chain. Experiments show that ReQFlow achieves state-of-the-art performance in protein backbone generation.
- Score: 24.13216117355207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Protein backbone generation plays a central role in de novo protein design and is significant for many biological and medical applications. Although diffusion- and flow-based generative models provide potential solutions to this challenging task, they often generate proteins with poor designability and suffer from computational inefficiency. In this study, we propose a novel rectified quaternion flow (ReQFlow) matching method for fast and high-quality protein backbone generation. In particular, our method generates a local translation and a 3D rotation from random noise for each residue in a protein chain, representing each 3D rotation as a unit quaternion and constructing its flow by spherical linear interpolation (SLERP) in an exponential format. We train the model by quaternion flow (QFlow) matching with guaranteed numerical stability and rectify the QFlow model to accelerate its inference and improve the designability of generated protein backbones, yielding the proposed ReQFlow model. Experiments show that ReQFlow achieves state-of-the-art performance in protein backbone generation while requiring far fewer sampling steps and significantly less inference time (e.g., being 37x faster than RFDiffusion and 62x faster than Genie2 when generating a backbone of length 300), demonstrating its effectiveness and efficiency. The code is available at https://github.com/AngxiaoYue/ReQFlow.
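The rotation flow described in the abstract admits a compact illustration. The following minimal NumPy sketch implements SLERP between unit quaternions written in the exponential form q(t) = q0 * exp(t * log(q0^-1 q1)); the w-first quaternion convention and the helper names (`quat_mul`, `quat_conj`, `slerp_exp`) are illustrative choices, not the code released in the repository above.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions given as [w, x, y, z] arrays."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def quat_conj(q):
    """Conjugate of a quaternion (equals the inverse for unit quaternions)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def slerp_exp(q0, q1, t):
    """SLERP between unit quaternions in exponential form: q0 * exp(t * log(q0^-1 q1))."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    r = quat_mul(quat_conj(q0), q1)            # relative rotation q0^-1 q1
    w, v = r[0], r[1:]
    v_norm = np.linalg.norm(v)
    if v_norm < 1e-8:                          # nearly identical rotations
        return q0
    theta = np.arctan2(v_norm, w)              # half of the relative rotation angle
    axis = v / v_norm
    # exp(t * log(r)) for r = [cos(theta), sin(theta) * axis]
    r_t = np.concatenate(([np.cos(t * theta)], np.sin(t * theta) * axis))
    return quat_mul(q0, r_t)

# Toy usage: interpolate halfway from the identity to a 90-degree rotation about z.
q_noise = np.array([1.0, 0.0, 0.0, 0.0])
q_data = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
q_half = slerp_exp(q_noise, q_data, 0.5)       # a 45-degree rotation about z
```

The translation component of such a flow is typically just a straight-line interpolation between noise and data positions; rectification, in the rectified-flow sense, then retrains the model on its own noise-sample pairs so the learned trajectories can be integrated in very few steps. The exact training recipe is in the paper and repository.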
Related papers
- ProtFlow: Fast Protein Sequence Design via Flow Matching on Compressed Protein Language Model Embeddings [8.068149785650649]
ProtFlow is a fast flow matching-based protein sequence design framework.
By compressing and smoothing the latent space, ProtFlow enhances performance while training on limited computational resources.
We evaluate ProtFlow across diverse protein design tasks, including general peptides and long-chain proteins, antimicrobial peptides, and antibodies.
arXiv Detail & Related papers (2025-04-15T08:46:53Z) - Reward-Guided Iterative Refinement in Diffusion Models at Test-Time with Applications to Protein and DNA Design [87.58981407469977]
We propose a novel framework for inference-time reward optimization with diffusion models inspired by evolutionary algorithms.
Our approach employs an iterative refinement process with two steps per iteration: noising and reward-guided denoising (a generic sketch of this loop appears after the list below).
arXiv Detail & Related papers (2025-02-20T17:48:45Z) - P2DFlow: A Protein Ensemble Generative Model with SE(3) Flow Matching [8.620021796568087]
P2DFlow is a generative model based on SE(3) flow matching to predict the structural ensembles of proteins.
When trained and evaluated on MD datasets of ATLAS, P2DFlow outperforms other baseline models.
As a potential proxy agent for protein molecular simulation, the high-quality ensembles generated by P2DFlow could significantly aid in understanding protein functions across various scenarios.
arXiv Detail & Related papers (2024-11-26T08:10:12Z) - FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity Refiner [70.90505084288057]
Flow-based models tend to produce a straighter sampling trajectory during the sampling process.
We introduce several techniques including a pseudo corrector and sample-aware compilation to further reduce inference time.
FlowTurbo reaches an FID of 2.12 on ImageNet at 100 ms/img and an FID of 3.93 at 38 ms/img.
arXiv Detail & Related papers (2024-09-26T17:59:51Z) - Improving AlphaFlow for Efficient Protein Ensembles Generation [64.10918970280603]
We propose a feature-conditioned generative model called AlphaFlow-Lit to realize efficient protein ensembles generation.
AlphaFlow-Lit performs on-par with AlphaFlow and surpasses its distilled version without pretraining, all while achieving a significant sampling acceleration of around 47 times.
arXiv Detail & Related papers (2024-07-08T13:36:43Z) - PeRFlow: Piecewise Rectified Flow as Universal Plug-and-Play Accelerator [73.80050807279461]
Piecewise Rectified Flow (PeRFlow) is a flow-based method for accelerating diffusion models.
PeRFlow achieves superior performance in a few-step generation.
arXiv Detail & Related papers (2024-05-13T07:10:53Z) - Mixed Continuous and Categorical Flow Matching for 3D De Novo Molecule Generation [0.0]
Flow matching is a recently proposed generative modeling framework that generalizes diffusion models.
We extend the flow matching framework to categorical data by constructing flows that are constrained to exist on a continuous representation of categorical data known as the probability simplex (a minimal sketch of this simplex constraint appears after the list below).
We find that, in practice, a simpler approach that makes no accommodations for the categorical nature of the data yields equivalent or superior performance.
arXiv Detail & Related papers (2024-04-30T17:37:21Z) - SE(3)-Stochastic Flow Matching for Protein Backbone Generation [54.951832422425454]
We introduce FoldFlow, a series of novel generative models of increasing modeling power based on the flow-matching paradigm over $3\mathrm{D}$ rigid motions.
Our family of FoldFlow generative models offers several advantages over previous approaches to the generative modeling of proteins.
arXiv Detail & Related papers (2023-10-03T19:24:24Z) - GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose a GMFlow framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation (a schematic skeleton of this pipeline appears after the list below).
Our new framework outperforms the 32-iteration RAFT on the challenging Sintel benchmark.
arXiv Detail & Related papers (2021-11-26T18:59:56Z)
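The two-step loop named in the Reward-Guided Iterative Refinement entry (noising, then reward-guided denoising) can be written schematically as below. The callables `noise`, `denoise`, and `reward`, the candidate count, and the keep-the-best selection rule are placeholders for illustration, not that paper's interface.

```python
import numpy as np

def iterative_reward_refinement(x, noise, denoise, reward, n_iters=10, n_candidates=8):
    """Generic noise-then-reward-guided-denoise loop with best-of-N selection."""
    best, best_r = x, reward(x)
    for _ in range(n_iters):
        x_noisy = noise(best)                  # step 1: partially re-noise the current best
        for _ in range(n_candidates):          # step 2: denoise candidates, keep the highest reward
            candidate = denoise(x_noisy)
            r = reward(candidate)
            if r > best_r:
                best, best_r = candidate, r
    return best, best_r

# Toy usage: maximize reward(x) = -||x||^2 with Gaussian "noising" and a shrinkage "denoiser".
rng = np.random.default_rng(0)
best, best_r = iterative_reward_refinement(
    rng.normal(size=3),
    noise=lambda x: x + 0.5 * rng.normal(size=x.shape),
    denoise=lambda x: 0.9 * x + 0.05 * rng.normal(size=x.shape),
    reward=lambda x: -float(np.sum(x ** 2)),
)
```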
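The simplex-constrained flows in the mixed continuous and categorical flow matching entry rest on a simple fact: convex combinations of points on the probability simplex stay on the simplex. A minimal sketch, using a uniform "noise" distribution as an illustrative assumption rather than that paper's construction:

```python
import numpy as np

def simplex_interpolate(p_noise, p_data, t):
    """Convex combination of two probability vectors; stays on the simplex for t in [0, 1]."""
    return (1.0 - t) * p_noise + t * p_data

# Toy usage: flow from a uniform distribution over 4 categories toward a one-hot target.
k = 4
p_noise = np.full(k, 1.0 / k)
p_data = np.eye(k)[2]                          # one-hot encoding of category 2
p_mid = simplex_interpolate(p_noise, p_data, 0.5)
assert np.isclose(p_mid.sum(), 1.0) and (p_mid >= 0).all()
```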
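The three GMFlow components listed above can be arranged into a minimal PyTorch skeleton. The layer choices, tensor shapes, and the residual flow head below are assumptions made for illustration, not the released GMFlow architecture.

```python
import torch
import torch.nn as nn

class GMFlowSkeleton(nn.Module):
    """Schematic three-stage pipeline: feature enhancement, global correlation plus
    softmax matching, and self-attention flow propagation (placeholder design)."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.enhance = nn.TransformerEncoder(layer, num_layers=2)                 # stage 1
        self.propagate = nn.MultiheadAttention(dim, heads, batch_first=True)      # stage 3
        self.flow_head = nn.Linear(dim, 2)                                        # residual correction

    def forward(self, feat1, feat2, coords):
        # feat1, feat2: (B, N, C) flattened per-pixel features of the two frames
        # coords:       (B, N, 2) pixel-grid coordinates shared by both frames
        f1, f2 = self.enhance(feat1), self.enhance(feat2)
        # Stage 2: dense correlation + softmax gives each frame-1 pixel a soft match over
        # all frame-2 pixels; flow = softly matched coordinates - own coordinates.
        corr = torch.einsum("bnc,bmc->bnm", f1, f2) / f1.shape[-1] ** 0.5
        matched = torch.einsum("bnm,bmd->bnd", corr.softmax(dim=-1), coords)
        flow = matched - coords
        # Stage 3: self-attention over frame-1 features propagates information, and a
        # small linear head adds a residual refinement to the matched flow.
        prop, _ = self.propagate(f1, f1, f1)
        return flow + self.flow_head(prop)
```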