AdsorbFlow: energy-conditioned flow matching enables fast and realistic adsorbate placement
- URL: http://arxiv.org/abs/2602.19289v1
- Date: Sun, 22 Feb 2026 17:53:53 GMT
- Authors: Jiangjie Qiu, Wentao Li, Honghao Chen, Leyi Zhao, Xiaonan Wang,
- Abstract summary: We introduce AdsorbFlow, a deterministic generative model that learns an energy-conditioned vector field on the rigid-body configuration space of adsorbate translation and rotation. On 50 out-of-distribution systems, AdsorbFlow retains 58.0% SR@10 with an MLFF-to-DFT gap of only 4 points.
- Score: 9.699345436140641
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Identifying low-energy adsorption geometries on catalytic surfaces is a practical bottleneck for computational heterogeneous catalysis: the difficulty lies not only in the cost of density functional theory (DFT) but in proposing initial placements that relax into the correct energy basins. Conditional denoising diffusion has improved success rates, yet requires $\sim$100 iterative steps per sample. Here we introduce AdsorbFlow, a deterministic generative model that learns an energy-conditioned vector field on the rigid-body configuration space of adsorbate translation and rotation via conditional flow matching. Energy information enters through classifier-free guidance conditioning -- not energy-gradient guidance -- and sampling reduces to integrating an ODE in as few as 5 steps. On OC20-Dense with full DFT single-point verification, AdsorbFlow with an EquiformerV2 backbone achieves 61.4% SR@10 and 34.1% SR@1 -- surpassing AdsorbDiff (31.8% SR@1, 41.0% SR@10) at every evaluation level and AdsorbML (47.7% SR@10) -- while using 20 times fewer generative steps and achieving the lowest anomaly rate among generative methods (6.8%). On 50 out-of-distribution systems, AdsorbFlow retains 58.0% SR@10 with an MLFF-to-DFT gap of only 4 percentage points. These results establish that deterministic transport is both faster and more accurate than stochastic denoising for adsorbate placement.
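The sampling procedure the abstract describes -- classifier-free guidance over an energy-conditioned vector field, integrated as an ODE in a handful of Euler steps -- can be sketched as follows. This is a minimal illustration of the general technique, not AdsorbFlow's actual API; `v_theta`, `sample_cfg_flow`, and the guidance scale are hypothetical names.

```python
def sample_cfg_flow(v_theta, x0, energy_cond, n_steps=5, guidance=2.0):
    """Integrate dx/dt = v(x, t) with Euler steps, where v mixes the
    unconditional and energy-conditioned vector fields via classifier-free
    guidance. Illustrative sketch only; not AdsorbFlow's implementation."""
    x = x0
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        v_uncond = v_theta(x, t, None)       # condition dropped
        v_cond = v_theta(x, t, energy_cond)  # energy-conditioned field
        # CFG: extrapolate from the unconditional toward the conditional field
        v = v_uncond + guidance * (v_cond - v_uncond)
        x = x + dt * v                       # deterministic Euler update
    return x
```

Because the transport is deterministic, the same `x0` always maps to the same configuration, which is what lets the step count drop to 5 without the variance a stochastic denoiser would add.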
Related papers
- Function-Space Decoupled Diffusion for Forward and Inverse Modeling in Carbon Capture and Storage [65.51149575007149]
We present Fun-DDPS, a generative framework that combines function-space diffusion models with differentiable neural operator surrogates for both forward and inverse modeling. Fun-DDPS produces physically consistent realizations free from the high-frequency artifacts observed in joint-state baselines.
arXiv Detail & Related papers (2026-02-12T18:58:12Z) - AIRE-Prune: Asymptotic Impulse-Response Energy for State Pruning in State Space Models [51.93574339176914]
AIRE-Prune is a post-training pruning method for state space models (SSMs). It reduces each layer's state dimension by directly minimizing long-run output-energy distortion. Across diverse benchmarks, AIRE-Prune reveals substantial redundancy in SISO SSMs, with average pruning of 60.8% and an average accuracy drop of 0.29% without retraining, while significantly lowering compute.
arXiv Detail & Related papers (2026-01-31T06:03:43Z) - DeFloMat: Detection with Flow Matching for Stable and Efficient Generative Object Localization [0.5872014229110213]
DeFloMat is a novel generative object detection framework. It addresses the critical latency bottleneck of diffusion-based detectors. DeFloMat achieves state-of-the-art accuracy ($43.32\%\ \text{AP}_{10:50}$) in only 3 inference steps.
arXiv Detail & Related papers (2025-12-26T23:07:40Z) - DiffusionNFT: Online Diffusion Reinforcement with Forward Process [99.94852379720153]
Diffusion Negative-aware FineTuning (DiffusionNFT) is a new online RL paradigm that optimizes diffusion models directly on the forward process via flow matching. DiffusionNFT is up to $25\times$ more efficient than FlowGRPO in head-to-head comparisons, while being CFG-free.
arXiv Detail & Related papers (2025-09-19T16:09:33Z) - SADA: Stability-guided Adaptive Diffusion Acceleration [24.250318487331228]
Diffusion models have achieved remarkable success in generative tasks but suffer from high computational costs. Existing training-free acceleration strategies reduce per-step computation cost and sampling time, but demonstrate low faithfulness. We propose Stability-guided Adaptive Diffusion Acceleration (SADA), a novel paradigm that accelerates sampling of ODE-based generative models.
arXiv Detail & Related papers (2025-07-23T02:15:45Z) - Simultaneous Optimization of Efficiency and Degradation in Tunable HTL-Free Perovskite Solar Cells with MWCNT-Integrated Back Contact Using a Machine Learning-Derived Polynomial Regressor [0.8739101659113155]
Perovskite solar cells (PSCs) without a hole transport layer (HTL) offer a cost-effective and stable alternative to conventional architectures. This study presents a machine learning (ML)-driven framework to optimize the efficiency and stability of HTL-free PSCs.
arXiv Detail & Related papers (2025-05-24T13:37:48Z) - Demonstration of an AI-driven workflow for dynamic x-ray spectroscopy [1.0046337269532102]
X-ray absorption near edge structure (XANES) spectroscopy is a powerful technique for characterizing the chemical state and symmetry of individual elements within materials. While adaptive sampling methods exist for efficiently collecting spectroscopic data, they often lack domain-specific knowledge about the structure of XANES spectra. Here we demonstrate a knowledge-injected Bayesian optimization approach for adaptive XANES data collection.
arXiv Detail & Related papers (2025-04-23T22:32:42Z) - FlowTS: Time Series Generation via Rectified Flow [67.41208519939626]
FlowTS is an ODE-based model that leverages rectified flow with straight-line transport in probability space. In the unconditional setting, FlowTS achieves state-of-the-art performance, with context FID scores of 0.019 and 0.011 on the Stock and ETTh datasets. In the conditional setting, it achieves superior performance in solar forecasting.
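The "straight-line transport" that rectified flow relies on has a compact form: the model regresses its velocity field against the constant displacement between a source sample and a data sample along a linear interpolant. A generic sketch of that training target, assuming nothing about FlowTS's actual code (`rectified_flow_pair` is a hypothetical name):

```python
import numpy as np

def rectified_flow_pair(x0, x1, t):
    """Straight-line interpolant between source x0 and data x1, and the
    constant velocity that a rectified-flow model v_theta(x_t, t) is
    trained to regress. Generic sketch, not FlowTS's training code."""
    x_t = (1.0 - t) * x0 + t * x1  # point on the straight path at time t
    target_v = x1 - x0             # constant velocity along the line
    return x_t, target_v

# toy usage: Gaussian noise as source, a fixed vector as data
rng = np.random.default_rng(0)
x0 = rng.standard_normal(3)
x1 = np.array([1.0, 2.0, 3.0])
x_t, v = rectified_flow_pair(x0, x1, 0.5)
```

Because the target velocity is constant in `t`, a well-fit model can integrate the flow accurately with very few ODE steps, which is the source of the speedups these flow-based methods report.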
arXiv Detail & Related papers (2024-11-12T03:03:23Z) - Adsorb-Agent: Autonomous Identification of Stable Adsorption Configurations via Large Language Model Agent [5.417632175667162]
Adsorb-Agent is a Large Language Model (LLM) agent designed to efficiently identify stable adsorbate-catalyst configurations. Tested on twenty diverse systems, Adsorb-Agent identifies comparable adsorbate-catalyst energies for 84% of cases and achieves lower energies for 35%.
arXiv Detail & Related papers (2024-10-22T03:19:16Z) - Flow Matching for Accelerated Simulation of Atomic Transport in Crystalline Materials [6.6716708904054896]
Atomic transport underpins the performance of materials in technologies such as energy storage and electronics. We introduce LiFlow, a generative framework to accelerate MD simulations for crystalline materials. We benchmark LiFlow on a dataset comprising 25-ps trajectories of lithium diffusion across 4,186 SSE candidates at four temperatures.
arXiv Detail & Related papers (2024-10-02T12:16:46Z) - SCott: Accelerating Diffusion Models with Stochastic Consistency Distillation [74.32186107058382]
We propose Stochastic Consistency Distillation (SCott) to enable accelerated text-to-image generation. SCott distills the ordinary differential equation solver-based sampling process of a pre-trained teacher model into a student. On the MSCOCO-2017 5K dataset with a Stable Diffusion-V1.5 teacher, SCott achieves an FID of 21.9 with 2 sampling steps, surpassing the 1-step InstaFlow (23.4) and the 4-step UFOGen (22.1).
arXiv Detail & Related papers (2024-03-03T13:08:32Z) - Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion [56.38386580040991]
Consistency Trajectory Model (CTM) is a generalization of Consistency Models (CM).
CTM enables the efficient combination of adversarial training and denoising score matching loss to enhance performance.
Unlike CM, CTM's access to the score function can streamline the adoption of established controllable/conditional generation methods.
arXiv Detail & Related papers (2023-10-01T05:07:17Z) - AdsorbML: A Leap in Efficiency for Adsorption Energy Calculations using Generalizable Machine Learning Potentials [8.636519538557001]
We show machine learning potentials can be leveraged to identify low energy adsorbate-surface configurations more accurately and efficiently.
Our algorithm provides a spectrum of trade-offs between accuracy and efficiency, with one balanced option finding the lowest energy configuration 87.36% of the time.
arXiv Detail & Related papers (2022-11-29T18:54:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.