MFM-point: Multi-scale Flow Matching for Point Cloud Generation
- URL: http://arxiv.org/abs/2511.20041v1
- Date: Tue, 25 Nov 2025 08:10:56 GMT
- Title: MFM-point: Multi-scale Flow Matching for Point Cloud Generation
- Authors: Petr Molodyk, Jaemoo Choi, David W. Romero, Ming-Yu Liu, Yongxin Chen
- Abstract summary: MFM-Point is a multi-scale Flow Matching framework for point cloud generation. We show that MFM-Point achieves best-in-class performance among point-based methods.
- Score: 40.453079463837895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, point cloud generation has gained significant attention in 3D generative modeling. Among existing approaches, point-based methods directly generate point clouds without relying on other representations such as latent features, meshes, or voxels. These methods offer low training cost and algorithmic simplicity, but often underperform compared to representation-based approaches. In this paper, we propose MFM-Point, a multi-scale Flow Matching framework for point cloud generation that substantially improves the scalability and performance of point-based methods while preserving their simplicity and efficiency. Our multi-scale generation algorithm adopts a coarse-to-fine generation paradigm, enhancing generation quality and scalability without incurring additional training or inference overhead. A key challenge in developing such a multi-scale framework lies in preserving the geometric structure of unordered point clouds while ensuring smooth and consistent distributional transitions across resolutions. To address this, we introduce a structured downsampling and upsampling strategy that preserves geometry and maintains alignment between coarse and fine resolutions. Our experimental results demonstrate that MFM-Point achieves best-in-class performance among point-based methods and challenges the best representation-based methods. In particular, MFM-Point demonstrates strong results in multi-category and high-resolution generation tasks.
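The coarse-to-fine flow-matching pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the velocity field, the replicate-and-jitter upsampling, and all function names here are hypothetical stand-ins for the learned network and the structured downsampling/upsampling strategy.

```python
import numpy as np

def euler_flow(x, velocity, n_steps=50):
    """Integrate dx/dt = v(x, t) from t = 0 to t = 1 with Euler steps."""
    dt = 1.0 / n_steps
    for i in range(n_steps):
        x = x + dt * velocity(x, i * dt)
    return x

def toy_velocity(x, t):
    # Stand-in for a learned velocity network: the straight-line
    # (linear-interpolation) flow toward a fixed target mean.
    target = np.array([1.0, 2.0, 3.0])
    return (target - x) / max(1.0 - t, 1e-3)

def upsample(points, factor, jitter=0.01, rng=None):
    # Hypothetical structured upsampling: replicate each coarse point
    # `factor` times and add a small jitter, so the fine cloud stays
    # aligned with the coarse geometry.
    if rng is None:
        rng = np.random.default_rng(0)
    fine = np.repeat(points, factor, axis=0)
    return fine + jitter * rng.standard_normal(fine.shape)

def multiscale_generate(n_coarse=64, factor=4, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    # Coarse stage: flow Gaussian noise into a coarse point cloud.
    coarse = euler_flow(rng.standard_normal((n_coarse, 3)), toy_velocity)
    # Fine stage: upsample, then run a shorter refinement flow.
    fine = euler_flow(upsample(coarse, factor, rng=rng), toy_velocity,
                      n_steps=10)
    return coarse, fine
```

The point of the sketch is the two-stage structure: the fine flow starts from an upsampled version of the coarse result rather than from fresh noise, which is what keeps the resolutions aligned.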
Related papers
- PUFM++: Point Cloud Upsampling via Enhanced Flow Matching [15.738247394527024]
PUFM++ is an enhanced flow-matching framework for reconstructing point clouds from sparse, noisy, and partial observations. We introduce a two-stage flow-matching strategy that first learns a direct, straight-path flow from sparse inputs to dense targets, and then refines it using noise-perturbed samples to better approximate the terminal marginal distribution. Experiments on synthetic benchmarks and real-world scans show that PUFM++ sets a new state of the art in point cloud upsampling.
arXiv Detail & Related papers (2025-12-24T06:30:42Z)
- PointNSP: Autoregressive 3D Point Cloud Generation with Next-Scale Level-of-Detail Prediction [87.33016661440202]
Autoregressive point cloud generation has long lagged behind diffusion-based approaches in quality. We propose PointNSP, a coarse-to-fine generative framework that preserves global shape structure at low resolutions. Experiments on ShapeNet show that PointNSP establishes state-of-the-art (SOTA) generation quality for the first time within the autoregressive paradigm.
arXiv Detail & Related papers (2025-10-07T06:31:02Z)
- Message-Passing Monte Carlo: Generating low-discrepancy point sets via Graph Neural Networks [64.39488944424095]
We present the first machine learning approach to generating low-discrepancy point sets, named Message-Passing Monte Carlo (MPMC) points.
MPMC points are empirically shown to be optimal or near-optimal with respect to discrepancy in low dimensions and for small numbers of points.
arXiv Detail & Related papers (2024-05-23T21:17:20Z)
- REPS: Reconstruction-based Point Cloud Sampling [37.10538035973968]
Deep downsampling methods can be classified into two main types: generative-based and score-based.
In this paper, we propose REPS, a reconstruction-based scoring strategy.
Our method outperforms previous approaches in preserving the structural features of the sampled point clouds.
arXiv Detail & Related papers (2024-03-08T04:48:56Z)
- Fixed Point Diffusion Models [13.035518953879539]
Fixed Point Diffusion Model (FPDM) is a novel approach to image generation that integrates the concept of fixed point solving into the framework of diffusion-based generative modeling.
Our approach embeds an implicit fixed point solving layer into the denoising network of a diffusion model, transforming the diffusion process into a sequence of closely-related fixed point problems.
We conduct experiments with state-of-the-art models on ImageNet, FFHQ, CelebA-HQ, and LSUN-Church, demonstrating substantial improvements in performance and efficiency.
arXiv Detail & Related papers (2024-01-16T18:55:54Z)
- GP-PCS: One-shot Feature-Preserving Point Cloud Simplification with Gaussian Processes on Riemannian Manifolds [2.8811433060309763]
We propose a novel, one-shot point cloud simplification method.
It preserves both the salient structural features and the overall shape of a point cloud without any prior surface reconstruction step.
We evaluate our method on several benchmark and self-acquired point clouds, compare it to a range of existing methods, and demonstrate its application in the downstream tasks of registration and surface reconstruction.
arXiv Detail & Related papers (2023-03-27T14:05:34Z)
- Controllable Mesh Generation Through Sparse Latent Point Diffusion Models [105.83595545314334]
We design a novel sparse latent point diffusion model for mesh generation.
Our key insight is to regard point clouds as an intermediate representation of meshes, and model the distribution of point clouds instead.
Our proposed sparse latent point diffusion model achieves superior performance in terms of generation quality and controllability.
arXiv Detail & Related papers (2023-03-14T14:25:29Z)
- PU-Flow: a Point Cloud Upsampling Network with Normalizing Flows [58.96306192736593]
We present PU-Flow, which incorporates normalizing flows and feature interpolation techniques to produce dense points uniformly distributed on the underlying surface.
Specifically, we formulate the upsampling process as interpolation in a latent space, where the interpolation weights are adaptively learned from local geometric context.
We show that our method outperforms state-of-the-art deep learning-based approaches in terms of reconstruction quality, proximity-to-surface accuracy, and computation efficiency.
arXiv Detail & Related papers (2021-07-13T07:45:48Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
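The weighted-interpolation upsampling described in the entry above can be sketched as follows. This is an illustrative stand-in, not the paper's network: the random convex weights below take the place of the learned, sorted weights, and the function names are hypothetical.

```python
import numpy as np

def knn(points, k):
    # Brute-force k-nearest neighbors: for each point, indices of its
    # k closest other points.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def flexible_upsample(points, ratio=4, k=3, rng=None):
    # Each new point is a convex combination of a seed point's k
    # nearest neighbors. `ratio` can be chosen freely at call time,
    # which is what "magnification-flexible" refers to.
    if rng is None:
        rng = np.random.default_rng(0)
    nbr = knn(points, k)                        # (n, k)
    w = rng.random((len(points), ratio, k))
    w /= w.sum(axis=-1, keepdims=True)          # each weight row sums to 1
    gathered = points[nbr]                      # (n, k, 3)
    return np.einsum('nrk,nkc->nrc', w, gathered).reshape(-1, 3)
```

Because the weights are convex, every upsampled point lies inside the convex hull of its neighbor set, so the dense cloud cannot stray from the input geometry; the learned weights in the paper additionally account for high-order approximation errors.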
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.