EnfoPath: Energy-Informed Analysis of Generative Trajectories in Flow Matching
- URL: http://arxiv.org/abs/2511.19087v1
- Date: Mon, 24 Nov 2025 13:27:41 GMT
- Title: EnfoPath: Energy-Informed Analysis of Generative Trajectories in Flow Matching
- Authors: Ziyun Li, Ben Dai, Huancheng Hu, Henrik Boström, Soon Hoe Lim
- Abstract summary: Flow-based generative models synthesize data by integrating a learned velocity field from a reference distribution to the target data distribution. Motivated by classical mechanics, we introduce kinetic path energy (KPE), a simple yet powerful diagnostic that quantifies the total kinetic effort along each generation path of ODE-based samplers.
- Score: 10.646391583250729
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Flow-based generative models synthesize data by integrating a learned velocity field from a reference distribution to the target data distribution. Prior work has focused on endpoint metrics (e.g., fidelity, likelihood, perceptual quality) while overlooking a deeper question: what do the sampling trajectories reveal? Motivated by classical mechanics, we introduce kinetic path energy (KPE), a simple yet powerful diagnostic that quantifies the total kinetic effort along each generation path of ODE-based samplers. Through comprehensive experiments on CIFAR-10 and ImageNet-256, we uncover two key phenomena: (i) higher KPE predicts stronger semantic quality, indicating that semantically richer samples require greater kinetic effort, and (ii) higher KPE inversely correlates with data density, with informative samples residing in sparse, low-density regions. Together, these findings reveal that semantically informative samples naturally reside on the sparse frontier of the data distribution, demanding greater generative effort. Our results suggest that trajectory-level analysis offers a physics-inspired and interpretable framework for understanding generation difficulty and sample characteristics.
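The kinetic path energy described in the abstract can be approximated numerically as a Riemann sum of the squared velocity norm over the ODE integration steps. The sketch below is a minimal illustration under that assumption, not the authors' implementation; `velocity_fn` is a hypothetical stand-in for the learned velocity field.

```python
import numpy as np

def sample_with_kpe(velocity_fn, x0, n_steps=100):
    """Euler-integrate dx/dt = v(x, t) from t=0 to t=1 and accumulate
    the kinetic path energy  KPE ~ sum_k ||v(x_k, t_k)||^2 * dt,
    a Riemann-sum approximation of the integral of ||v||^2 over time."""
    dt = 1.0 / n_steps
    x = np.asarray(x0, dtype=float)
    kpe = 0.0
    for k in range(n_steps):
        t = k * dt
        v = velocity_fn(x, t)          # learned velocity field (assumed given)
        kpe += float(np.sum(v * v)) * dt  # accumulate ||v||^2 * dt
        x = x + v * dt                 # Euler step along the generation path
    return x, kpe

# Toy check: a constant field with ||v||^2 = 1 yields KPE = 1 exactly.
x1, energy = sample_with_kpe(
    lambda x, t: np.ones_like(x) / np.sqrt(x.size), np.zeros(4))
```

Under this diagnostic, samples whose trajectories traverse more of the velocity field (higher accumulated KPE) would be flagged as requiring greater generative effort.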
Related papers
- A Kinetic-Energy Perspective of Flow Matching [23.42786172624299]
Flow-based generative models can be viewed through a physics lens. Motivated by classical mechanics, we introduce Kinetic Path Energy (KPE). We show that extreme energies drive trajectories toward near-copies of training examples.
arXiv Detail & Related papers (2026-02-08T11:51:50Z)
- Towards Syn-to-Real IQA: A Novel Perspective on Reshaping Synthetic Data Distributions [74.00222571094437]
Blind Image Quality Assessment (BIQA) has advanced significantly through deep learning, but the scarcity of large-scale labeled datasets remains a challenge. We make a key observation that representations learned from synthetic datasets often exhibit a discrete and clustered pattern that hinders regression performance. We introduce a novel framework, SynDR-IQA, which reshapes the synthetic data distribution to enhance BIQA generalization.
arXiv Detail & Related papers (2026-01-01T06:11:16Z)
- Latent Representation Learning in Heavy-Ion Collisions with MaskPoint Transformer [2.6610943214001765]
We introduce a Transformer-based autoencoder trained with a two-stage paradigm: self-supervised pre-training followed by supervised fine-tuning. The encoder learns latent representations directly from unlabeled HIC data, providing a compact and information-rich feature space. Results establish our two-stage framework as a general and robust foundation for feature learning in HIC, opening the door to more powerful analyses of quark-gluon plasma properties.
arXiv Detail & Related papers (2025-10-08T06:27:10Z)
- Mixture-of-Experts Graph Transformers for Interpretable Particle Collision Detection [36.56642608984189]
We propose a novel approach that combines a Graph Transformer model with Mixture-of-Experts layers to achieve high predictive performance. We evaluate the model on simulated events from the ATLAS experiment, focusing on distinguishing rare Supersymmetric signal events. This approach underscores the importance of explainability in machine learning methods applied to high energy physics.
arXiv Detail & Related papers (2025-01-06T23:28:19Z)
- Flow Annealed Importance Sampling Bootstrap meets Differentiable Particle Physics [3.430001962400887]
We adopt an approach based on Flow Annealed Importance Sampling Bootstrap (FAB) that evaluates the differentiable target density during training. We show that FAB reaches higher sampling efficiency with fewer target evaluations in high dimensions in comparison to other methods.
arXiv Detail & Related papers (2024-11-25T09:48:11Z)
- Iterated Denoising Energy Matching for Sampling from Boltzmann Densities [109.23137009609519]
We introduce Iterated Denoising Energy Matching (iDEM), which alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in our matching objective.
We show that the proposed approach achieves state-of-the-art performance on all metrics and trains $2$-$5\times$ faster.
arXiv Detail & Related papers (2024-02-09T01:11:23Z)
- Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
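For reference, the classic mean-shift (mode-seeking) update that this summary relates to ODE-based sampling can be sketched as below. The Gaussian kernel and the bandwidth `h` are illustrative choices, not details taken from the paper.

```python
import numpy as np

def mean_shift_step(x, data, h):
    """One mean-shift update with a Gaussian kernel of bandwidth h:
    x <- sum_i w_i * x_i / sum_i w_i,  w_i = exp(-||x_i - x||^2 / (2 h^2)).
    Repeated updates move x toward a local mode of the kernel density."""
    d2 = np.sum((data - x) ** 2, axis=1)   # squared distances to each point
    w = np.exp(-d2 / (2.0 * h * h))        # Gaussian kernel weights
    return w @ data / w.sum()              # kernel-weighted mean

# Toy example: starting near one cluster, iteration settles at its mode.
data = np.array([[0.0], [0.1], [10.0], [10.1]])
x = np.array([1.0])
for _ in range(30):
    x = mean_shift_step(x, data, h=0.5)
```

The iteration converges to a kernel-density mode near the cluster it started in, which mirrors the mode-seeking behavior the cited analysis attributes to optimal ODE-based sampling.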
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- CAFE: Learning to Condense Dataset by Aligning Features [72.99394941348757]
We propose a novel scheme to Condense dataset by Aligning FEatures (CAFE).
At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales.
We validate the proposed CAFE across various datasets, and demonstrate that it generally outperforms the state of the art.
arXiv Detail & Related papers (2022-03-03T05:58:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.