Flow-Opt: Scalable Centralized Multi-Robot Trajectory Optimization with Flow Matching and Differentiable Optimization
- URL: http://arxiv.org/abs/2510.09204v1
- Date: Fri, 10 Oct 2025 09:43:18 GMT
- Title: Flow-Opt: Scalable Centralized Multi-Robot Trajectory Optimization with Flow Matching and Differentiable Optimization
- Authors: Simon Idoko, Arun Kumar Singh
- Abstract summary: Flow-Opt is a learning-based approach towards improving the computational tractability of centralized multi-robot trajectory optimization. We show that we can generate trajectories of tens of robots in cluttered environments in a few tens of milliseconds. Our approach also generates smoother trajectories orders of magnitude faster than competing baselines.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Centralized trajectory optimization in the joint space of multiple robots allows access to a larger feasible space that can result in smoother trajectories, especially while planning in tight spaces. Unfortunately, it is often computationally intractable beyond a very small swarm size. In this paper, we propose Flow-Opt, a learning-based approach towards improving the computational tractability of centralized multi-robot trajectory optimization. Specifically, we reduce the problem to first learning a generative model to sample different candidate trajectories and then using a learned Safety-Filter(SF) to ensure fast inference-time constraint satisfaction. We propose a flow-matching model with a diffusion transformer (DiT) augmented with permutation invariant robot position and map encoders as the generative model. We develop a custom solver for our SF and equip it with a neural network that predicts context-specific initialization. The initialization network is trained in a self-supervised manner, taking advantage of the differentiability of the SF solver. We advance the state-of-the-art in the following respects. First, we show that we can generate trajectories of tens of robots in cluttered environments in a few tens of milliseconds. This is several times faster than existing centralized optimization approaches. Moreover, our approach also generates smoother trajectories orders of magnitude faster than competing baselines based on diffusion models. Second, each component of our approach can be batched, allowing us to solve a few tens of problem instances in a fraction of a second. We believe this is a first such result; no existing approach provides such capabilities. Finally, our approach can generate a diverse set of trajectories between a given set of start and goal locations, which can capture different collision-avoidance behaviors.
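The two-stage recipe in the abstract, sample candidate joint trajectories from a generative flow, then project them through a safety filter, can be sketched minimally. Everything below is a toy stand-in: the straight-line velocity field replaces the learned DiT flow-matching model, and the pairwise-separation projection replaces the paper's differentiable SF solver. The function names `sample_trajectories` and `safety_filter` are illustrative, not the paper's API.

```python
import numpy as np

def sample_trajectories(starts, goals, horizon=20, n_steps=10, seed=0):
    """Stage 1 (toy): Euler-integrate a flow from noise to trajectories.

    A learned flow-matching model would predict the velocity field; here
    the field simply transports Gaussian noise toward straight
    start-to-goal lines for each robot.
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, horizon)[None, :, None]
    target = starts[:, None, :] * (1 - t) + goals[:, None, :] * t
    x = rng.standard_normal(target.shape)       # noise sample
    for k in range(n_steps):                    # Euler integration
        v = (target - x) / (1 - k / n_steps)    # toy velocity field
        x = x + v / n_steps
    return x

def safety_filter(trajs, r_min=0.2, iters=50):
    """Stage 2 (toy): push robot pairs apart until clearance r_min holds.

    Stands in for the safety-filter solver: each waypoint pair that
    violates the clearance constraint is projected apart symmetrically.
    """
    x = trajs.copy()
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            for j in range(i + 1, n):
                d = x[i] - x[j]                              # (horizon, 2)
                dist = np.linalg.norm(d, axis=-1, keepdims=True)
                viol = np.maximum(r_min - dist, 0.0)
                push = d / np.maximum(dist, 1e-9) * viol * 0.5
                x[i] += push
                x[j] -= push
    return x
```

Both stages operate on plain arrays, so batching over problem instances, which the paper highlights, would amount to adding a leading batch dimension.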
Related papers
- Discrete-Guided Diffusion for Scalable and Safe Multi-Robot Motion Planning [56.240199425429445]
Multi-Robot Motion Planning (MRMP) involves generating trajectories for multiple robots operating in a shared continuous workspace. While discrete multi-agent path finding (MAPF) methods are broadly adopted due to their scalability, their coarse discretization limits trajectory quality. This paper tackles the limitations of both approaches by integrating discrete MAPF solvers with constrained generative diffusion models.
arXiv Detail & Related papers (2025-08-27T17:59:36Z) - Swarm-Gen: Fast Generation of Diverse Feasible Swarm Behaviors [9.77508443944225]
Coordination behavior in robot swarms is inherently multi-modal in nature. In this paper, we combine generative models with a safety-filter (SF) to generate diverse and feasible swarm behaviors. We show that we can generate a large set of multi-modal, feasible trajectories, simulating diverse swarm behaviors, within a few tens of milliseconds.
arXiv Detail & Related papers (2025-01-31T11:13:09Z) - An Efficient Diffusion-based Non-Autoregressive Solver for Traveling Salesman Problem [21.948190231334088]
We propose DEITSP, a diffusion model with efficient iterations tailored for Traveling Salesman Problems. We introduce a one-step diffusion model that integrates the controlled discrete noise addition process with self-consistency enhancement. We also design a dual-modality graph transformer to bolster the extraction and fusion of features from node and edge modalities.
arXiv Detail & Related papers (2025-01-23T15:47:04Z) - Multi-Agent Path Finding in Continuous Spaces with Projected Diffusion Models [57.45019514036948]
Multi-Agent Path Finding (MAPF) is a fundamental problem in robotics. This work proposes a novel approach that integrates constrained optimization with diffusion models for MAPF in continuous spaces.
arXiv Detail & Related papers (2024-12-23T21:27:19Z) - DiffSG: A Generative Solver for Network Optimization with Diffusion Model [75.27274046562806]
Generative diffusion models are popular in various cross-domain applications. These models hold promise in tackling complex network optimization problems. We propose a new framework for generative diffusion models called Diffusion Model-based Solution Generation.
arXiv Detail & Related papers (2024-08-13T07:56:21Z) - Sports-Traj: A Unified Trajectory Generation Model for Multi-Agent Movement in Sports [53.637837706712794]
We propose a Unified Trajectory Generation model, UniTraj, that processes arbitrary trajectories as masked inputs. Specifically, we introduce a Ghost Spatial Masking (GSM) module, embedded within a Transformer encoder, for spatial feature extraction. We benchmark three practical sports datasets, Basketball-U, Football-U, and Soccer-U, for evaluation.
arXiv Detail & Related papers (2024-05-27T22:15:23Z) - Compositional Generative Inverse Design [69.22782875567547]
Inverse design, where we seek to design input variables in order to optimize an underlying objective function, is an important problem.
We show that by instead optimizing over the learned energy function captured by the diffusion model, we can avoid such adversarial examples.
In an N-body interaction task and a challenging 2D multi-airfoil design task, we demonstrate that by composing the learned diffusion model at test time, our method allows us to design initial states and boundary shapes.
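The core move of optimizing a design against a learned energy, and composing several learned energies at test time, can be illustrated with a toy sketch. The quadratic energies below are stand-ins for diffusion-model energies, and the names `grad_sum_energy` and `design` are illustrative, not the paper's method:

```python
import numpy as np

def grad_sum_energy(x, centers):
    # Composition at test time: the total energy is the sum of
    # per-model energies E_i(x) = 0.5 * ||x - c_i||^2, so the
    # gradient is simply the sum of per-model gradients.
    return sum(x - c for c in centers)

def design(centers, steps=200, lr=0.1):
    # Inverse design as gradient descent on the composed energy.
    x = np.zeros_like(centers[0])
    for _ in range(steps):
        x -= lr * grad_sum_energy(x, centers)
    return x
```

With quadratic energies the composed minimizer is the mean of the centers; with learned energies the same loop would trade off the objectives each model encodes.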
arXiv Detail & Related papers (2024-01-24T01:33:39Z) - Tensor Train for Global Optimization Problems in Robotics [6.702251803443858]
The convergence of many numerical optimization techniques is highly dependent on the initial guess given to the solver.
We propose a novel approach that utilizes Tensor Train methods to initialize existing optimization solvers near global optima.
We show that the proposed method can generate samples close to global optima and from multiple modes.
arXiv Detail & Related papers (2022-06-10T13:18:26Z) - Joint inference and input optimization in equilibrium networks [68.63726855991052]
The deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer.
We show that there is a natural synergy between these two settings.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
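The fixed-point view of a deep equilibrium model can be sketched concretely, assuming a toy single layer f(z, x) = tanh(Wz + Ux + b) whose weight matrix is scaled into a contraction so plain iteration converges. A real DEQ would use a root-finder for the forward pass and implicit differentiation for gradients; the names below are illustrative.

```python
import numpy as np

def deq_forward(x, W, U, b, tol=1e-8, max_iter=500):
    """Toy DEQ forward pass: iterate z <- tanh(W z + U x + b) to a fixed point."""
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
d, dx = 8, 3
W = rng.standard_normal((d, d))
W *= 0.5 / np.linalg.norm(W, 2)     # spectral norm 0.5 -> contraction
U = rng.standard_normal((d, dx))
b = rng.standard_normal(d)
x = rng.standard_normal(dx)
z_star = deq_forward(x, W, U, b)
# z_star satisfies the fixed-point equation up to tolerance:
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + U @ x + b))
```

Because the layer's Lipschitz constant is at most the spectral norm of W, scaling W below 1 guarantees the iteration contracts; input optimization, the paper's second setting, would then descend on a loss through the same fixed point.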
arXiv Detail & Related papers (2021-11-25T19:59:33Z) - Error-Correcting Neural Networks for Semi-Lagrangian Advection in the Level-Set Method [0.0]
We present a machine learning framework that blends image super-resolution technologies with scalar transport in the level-set method.
We investigate whether we can compute on-the-fly data-driven corrections to minimize numerical viscosity in the coarse-mesh evolution of an interface.
arXiv Detail & Related papers (2021-10-22T06:36:15Z) - Learning Constrained Distributions of Robot Configurations with Generative Adversarial Network [15.962033896896385]
In high-dimensional robotic systems, the manifold of the valid configuration space often has a complex shape.
We propose a generative adversarial network approach to learn the distribution of valid robot configurations under such constraints.
arXiv Detail & Related papers (2020-11-11T11:43:54Z) - Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.