Reconstructing 3D Flow from 2D Data with Diffusion Transformer
- URL: http://arxiv.org/abs/2502.02593v1
- Date: Fri, 20 Dec 2024 13:19:48 GMT
- Title: Reconstructing 3D Flow from 2D Data with Diffusion Transformer
- Authors: Fan Lei,
- Abstract summary: We propose a Transformer-based method for reconstructing 3D flow fields from 2D PIV data. By embedding the positional information of 2D planes into the model, we enable the reconstruction of 3D flow fields from any combination of 2D slices. Our experiments demonstrate that our model can efficiently and accurately reconstruct 3D flow fields from 2D data, producing realistic results.
- Score: 0.6798775532273751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fluid flow is a widely applied physical problem, crucial in various fields. Due to the highly nonlinear and chaotic nature of fluids, analyzing fluid-related problems is exceptionally challenging. Computational fluid dynamics (CFD) is the best tool for this analysis but involves significant computational resources, especially for 3D simulations, which are slow and resource-intensive. In experimental fluid dynamics, the cost of particle image velocimetry (PIV) increases with dimensionality. Reconstructing 3D flow fields from 2D PIV data could reduce costs and expand application scenarios. Here, we propose a Diffusion Transformer-based method for reconstructing 3D flow fields from 2D flow data. By embedding the positional information of 2D planes into the model, we enable the reconstruction of 3D flow fields from any combination of 2D slices, enhancing flexibility. We replace global attention with window and plane attention to reduce computational costs associated with higher dimensions without compromising performance. Our experiments demonstrate that our model can efficiently and accurately reconstruct 3D flow fields from 2D data, producing realistic results.
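Two mechanisms named in the abstract lend themselves to a concrete illustration: embedding the position of each observed 2D plane into its tokens, and restricting attention to individual planes (or windows) instead of attending globally. Below is a minimal sketch of how such components could look; the module names, tensor layout, and sizes are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: plane-position embedding + plane-restricted attention
# for a diffusion-transformer-style 3D flow reconstructor. Shapes, names, and
# hyperparameters are assumptions; the paper's actual architecture may differ.
import torch
import torch.nn as nn


class PlanePositionEmbedding(nn.Module):
    """Adds a learned embedding of each 2D slice's position along the sampled axis."""

    def __init__(self, dim: int, max_planes: int = 64):
        super().__init__()
        self.embed = nn.Embedding(max_planes, dim)

    def forward(self, tokens: torch.Tensor, plane_idx: torch.Tensor) -> torch.Tensor:
        # tokens: (B, P, T, D) -- batch, planes, tokens per plane, channels
        # plane_idx: (B, P)    -- integer position of each observed 2D slice
        return tokens + self.embed(plane_idx)[:, :, None, :]


class PlaneAttention(nn.Module):
    """Self-attention restricted to tokens of the same plane (cheaper than global)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        b, p, t, d = tokens.shape
        x = tokens.reshape(b * p, t, d)        # each plane attends only to itself
        out, _ = self.attn(x, x, x)
        return out.reshape(b, p, t, d)


if __name__ == "__main__":
    B, P, T, D = 2, 3, 256, 128                # 3 observed slices, 256 tokens each
    tokens = torch.randn(B, P, T, D)
    plane_idx = torch.tensor([[4, 17, 30], [2, 9, 25]])
    tokens = PlanePositionEmbedding(D)(tokens, plane_idx)
    tokens = PlaneAttention(D)(tokens)         # would sit inside a DiT-style denoiser block
    print(tokens.shape)                        # torch.Size([2, 3, 256, 128])
```

Window attention over 3D patches would follow the same pattern, with tokens grouped into local windows rather than whole planes.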
Related papers
- SparseFlex: High-Resolution and Arbitrary-Topology 3D Shape Modeling [79.56581753856452]
SparseFlex is a novel sparse-structured isosurface representation that enables differentiable mesh reconstruction at resolutions up to $1024^3$ directly from rendering losses.
By enabling high-resolution, differentiable mesh reconstruction and generation with rendering losses, SparseFlex significantly advances the state-of-the-art in 3D shape representation and modeling.
arXiv Detail & Related papers (2025-03-27T17:46:42Z)
- Factorized Implicit Global Convolution for Automotive Computational Fluid Dynamics Prediction [52.32698071488864]
We propose Factorized Implicit Global Convolution (FIGConv), a novel architecture that efficiently solves CFD problems for very large 3D meshes. FIGConv achieves quadratic complexity $O(N^2)$, a significant improvement over existing 3D neural CFD models. We validate our approach on the industry-standard Ahmed body dataset and the large-scale DrivAerNet dataset.
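FIGConv's exact operator is not spelled out in this snippet. As a rough illustration of what a "factorized" convolution over a large 3D grid can mean, the sketch below pools the voxel grid onto three axis-aligned planes, convolves each plane in 2D, and broadcasts the results back; every detail here is an assumption about the general idea, not the paper's method.

```python
# Rough illustration of a factorized 3D convolution: pool the D x H x W voxel
# grid onto three 2D planes, convolve each plane, and broadcast back. This is
# only one reading of the general idea; the actual FIGConv operator may differ.
import torch
import torch.nn as nn


class FactorizedPlaneConv(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        pad = kernel_size // 2
        # one large-kernel 2D conv per projection plane (HW, DW, DH)
        self.conv_hw = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.conv_dw = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.conv_dh = nn.Conv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W) voxel features
        hw = self.conv_hw(x.mean(dim=2))       # project along depth  -> (B, C, H, W)
        dw = self.conv_dw(x.mean(dim=3))       # project along height -> (B, C, D, W)
        dh = self.conv_dh(x.mean(dim=4))       # project along width  -> (B, C, D, H)
        # broadcast each plane response back onto the 3D grid and combine
        return (hw[:, :, None, :, :] + dw[:, :, :, None, :] + dh[:, :, :, :, None]) / 3.0


if __name__ == "__main__":
    x = torch.randn(1, 16, 32, 32, 32)
    print(FactorizedPlaneConv(16)(x).shape)    # torch.Size([1, 16, 32, 32, 32])
```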
arXiv Detail & Related papers (2025-02-06T18:57:57Z)
- Motion-aware 3D Gaussian Splatting for Efficient Dynamic Scene Reconstruction [89.53963284958037]
We propose a novel motion-aware enhancement framework for dynamic scene reconstruction.
Specifically, we first establish a correspondence between 3D Gaussian movements and pixel-level flow.
For the prevalent deformation-based paradigm that presents a harder optimization problem, a transient-aware deformation auxiliary module is proposed.
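The key step in this summary is tying 3D Gaussian movements to pixel-level flow. A generic way to picture that correspondence is to project each Gaussian center before and after its motion with a pinhole camera and take the difference; the snippet below does exactly that, and is only an illustration, not the paper's formulation.

```python
# Generic illustration: approximate the 2D pixel flow induced by moving 3D
# Gaussian centers, by projecting the centers at two time steps with a pinhole
# camera. The paper's actual correspondence may be defined differently.
import numpy as np

def project(points: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Pinhole projection of (N, 3) camera-space points with intrinsics K."""
    uvw = points @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def gaussian_flow(centers_t0: np.ndarray, centers_t1: np.ndarray, K: np.ndarray) -> np.ndarray:
    """2D displacement of each projected Gaussian center between two time steps."""
    return project(centers_t1, K) - project(centers_t0, K)

if __name__ == "__main__":
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])
    c0 = np.array([[0.1, 0.0, 2.0], [-0.3, 0.2, 3.5]])
    c1 = c0 + np.array([[0.05, 0.0, 0.0], [0.0, -0.02, 0.1]])
    print(gaussian_flow(c0, c1, K))   # per-Gaussian pixel-level flow vectors
```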
arXiv Detail & Related papers (2024-03-18T03:46:26Z)
- From Zero to Turbulence: Generative Modeling for 3D Flow Simulation [45.626346087828765]
We propose to approach turbulent flow simulation as a generative task, directly learning the manifold of all possible turbulent flow states without relying on any initial flow state.
Our generative model captures the distribution of turbulent flows caused by unseen objects and generates high-quality, realistic samples for downstream applications.
arXiv Detail & Related papers (2023-05-29T18:20:28Z)
- FR3D: Three-dimensional Flow Reconstruction and Force Estimation for Unsteady Flows Around Extruded Bluff Bodies via Conformal Mapping Aided Convolutional Autoencoders [0.0]
We propose a convolutional autoencoder-based neural network model, dubbed FR3D, which enables flow reconstruction.
We show that the FR3D model reconstructs pressure and velocity components with a few percentage points of error.
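For context on the FR3D entry, the snippet below shows only the generic convolutional-autoencoder skeleton that such flow-reconstruction models build on; the conformal-mapping step that FR3D adds is omitted, and all layer sizes are placeholders.

```python
# Minimal convolutional autoencoder skeleton for 2D flow-field snapshots
# (e.g. pressure + two velocity components as 3 channels). This is a generic
# illustration of the model family, not FR3D itself: the conformal-mapping
# preprocessing and FR3D-specific design choices are not shown.
import torch
import torch.nn as nn

class FlowAutoencoder(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    x = torch.randn(4, 3, 64, 64)          # batch of flow snapshots
    print(FlowAutoencoder()(x).shape)      # torch.Size([4, 3, 64, 64])
```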
arXiv Detail & Related papers (2023-02-03T15:13:57Z)
- DreamFusion: Text-to-3D using 2D Diffusion [52.52529213936283]
Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs.
In this work, we circumvent the need for large-scale 3D training data by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis.
Our approach requires no 3D training data and no modifications to the image diffusion model, demonstrating the effectiveness of pretrained image diffusion models as priors.
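The way a pretrained 2D diffusion model acts as a 3D prior is usually described as score distillation: render the scene, noise the render, and push it toward the frozen denoiser's prediction. The sketch below is a toy schematic of that update with stand-in renderer and denoiser modules; DreamFusion itself uses a pretrained text-to-image model and a NeRF, neither of which appears here.

```python
# Toy schematic of a score-distillation-style update: a frozen 2D "denoiser"
# scores a noised rendering of the scene, and the difference to the injected
# noise is back-propagated into the scene/render parameters. All modules and
# the noising schedule are stand-ins, not DreamFusion's actual components.
import torch
import torch.nn as nn

renderer = nn.Sequential(nn.Linear(16, 3 * 8 * 8))   # toy "3D scene" -> 8x8 image
scene_params = torch.randn(1, 16, requires_grad=True)
denoiser = nn.Conv2d(3, 3, 3, padding=1)             # stand-in for a frozen diffusion model
for p in denoiser.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam([scene_params] + list(renderer.parameters()), lr=1e-2)
for step in range(10):
    image = renderer(scene_params).view(1, 3, 8, 8)
    t = torch.rand(1)                                 # random diffusion time
    noise = torch.randn_like(image)
    noised = (1 - t) * image + t * noise              # toy noising schedule
    eps_pred = denoiser(noised)                       # frozen model's noise estimate
    # score-distillation-style loss: pull the render toward the model's prediction
    loss = ((eps_pred - noise).detach() * image).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("scene parameters updated using only the frozen 2D image prior")
```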
arXiv Detail & Related papers (2022-09-29T17:50:40Z)
- Benchmarking of Deep Learning models on 2D Laminar Flow behind Cylinder [0.0]
Direct Numerical Simulation (DNS) is one of the tasks in Computational Fluid Dynamics.
We train these three models in an autoencoder manner; for this, the dataset is treated as sequential frames given to the model as input.
We observe that the recently introduced Transformer architecture significantly outperforms its counterparts on the selected dataset.
arXiv Detail & Related papers (2022-05-26T16:49:09Z)
- Positional Encoding Augmented GAN for the Assessment of Wind Flow for Pedestrian Comfort in Urban Areas [0.41998444721319217]
This work recasts the problem from computing 3D flow fields with CFD to a 2D image-to-image translation problem on building footprints, predicting the flow field at pedestrian height.
We investigate the use of generative adversarial networks (GANs), such as Pix2Pix and CycleGAN, which represent the state of the art for image-to-image translation tasks in various domains.
arXiv Detail & Related papers (2021-12-15T19:37:11Z)
- Data-Driven Shadowgraph Simulation of a 3D Object [50.591267188664666]
We replace the numerical code with a computationally cheaper projection-based surrogate model.
The model is able to approximate the electric fields at a given time without computing all preceding electric fields as required by numerical methods.
The model shows good-quality reconstruction when the data are perturbed within a narrow range of simulation parameters and can be used for large input data.
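The summary mentions a "projection-based surrogate model" without further detail; one common reading is a POD/SVD-type surrogate that projects high-dimensional fields onto a few dominant modes and reconstructs them cheaply. The snippet below shows that generic pattern; whether the cited paper uses this particular decomposition is an assumption.

```python
# Generic projection-based surrogate: build a low-dimensional basis from training
# snapshots (here via SVD / POD) and reconstruct fields from a few projection
# coefficients. The cited paper may use a different projection; this only shows
# the common pattern such surrogates follow.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((200, 4096))       # 200 simulated fields, 4096 cells each
mean = snapshots.mean(axis=0)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
basis = Vt[:10]                                     # keep the 10 dominant modes

def reconstruct(field: np.ndarray) -> np.ndarray:
    """Project a field onto the reduced basis and map it back to full resolution."""
    coeffs = (field - mean) @ basis.T               # 10 numbers instead of 4096 values
    return mean + coeffs @ basis

test_field = snapshots[0]
err = np.linalg.norm(reconstruct(test_field) - test_field) / np.linalg.norm(test_field)
print(f"relative reconstruction error: {err:.3f}")
```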
arXiv Detail & Related papers (2021-06-01T08:46:04Z)
- Displacement-Invariant Matching Cost Learning for Accurate Optical Flow Estimation [109.64756528516631]
Learning matching costs has been shown to be critical to the success of state-of-the-art deep stereo matching methods.
This paper proposes a novel solution that bypasses the requirement of building a 5D feature volume.
Our approach achieves state-of-the-art accuracy on various datasets, and outperforms all published optical flow methods on the Sintel benchmark.
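The claim about bypassing a 5D feature volume can be pictured as applying one shared 2D matching network to each displacement hypothesis instead of materializing and convolving the full volume. The code below is only a schematic of that pattern with made-up layer sizes, not the paper's network.

```python
# Schematic of displacement-invariant matching-cost computation: one shared 2D
# network scores every displacement hypothesis, so no 5D feature volume has to
# be processed with 4D convolutions. Layer sizes and the displacement range are
# illustrative only.
import torch
import torch.nn as nn

class MatchingCost(nn.Module):
    def __init__(self, feat_channels: int = 32):
        super().__init__()
        # shared 2D network applied identically to every displacement hypothesis
        self.net = nn.Sequential(
            nn.Conv2d(2 * feat_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, f1: torch.Tensor, f2: torch.Tensor, max_disp: int = 2) -> torch.Tensor:
        costs = []
        for dy in range(-max_disp, max_disp + 1):
            for dx in range(-max_disp, max_disp + 1):
                shifted = torch.roll(f2, shifts=(dy, dx), dims=(2, 3))
                costs.append(self.net(torch.cat([f1, shifted], dim=1)))
        return torch.cat(costs, dim=1)          # (B, num_displacements, H, W)

if __name__ == "__main__":
    f1 = torch.randn(1, 32, 24, 24)
    f2 = torch.randn(1, 32, 24, 24)
    print(MatchingCost()(f1, f2).shape)         # torch.Size([1, 25, 24, 24])
```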
arXiv Detail & Related papers (2020-10-28T09:57:00Z)
- Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation [87.54570024320354]
State-of-the-art methods for large-scale driving-scene LiDAR semantic segmentation often project and process the point clouds in the 2D space.
A straightforward solution to tackle the issue of 3D-to-2D projection is to keep the 3D representation and process the points in the 3D space.
We develop a 3D cylinder partition and 3D cylinder convolution based framework, termed Cylinder3D, which exploits the 3D topology relations and structures of driving-scene point clouds.
arXiv Detail & Related papers (2020-08-04T13:56:19Z)
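Since the core idea above is partitioning LiDAR points in cylindrical rather than Cartesian coordinates, a minimal version of that binning step is sketched below; the grid resolution and ranges are placeholders rather than the paper's settings.

```python
# Minimal cylindrical voxelization of LiDAR points into (rho, phi, z) bins, the
# kind of partition Cylinder3D builds on. Bin counts and ranges are placeholders.
import numpy as np

def cylindrical_voxel_ids(points: np.ndarray,
                          grid=(48, 36, 8),
                          rho_max=50.0, z_min=-3.0, z_max=2.0) -> np.ndarray:
    """Map (N, 3) xyz points to integer (rho, phi, z) voxel indices."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x**2 + y**2)
    phi = np.arctan2(y, x)                       # angle in [-pi, pi)
    r_idx = np.clip(rho / rho_max * grid[0], 0, grid[0] - 1).astype(int)
    p_idx = np.clip((phi + np.pi) / (2 * np.pi) * grid[1], 0, grid[1] - 1).astype(int)
    z_idx = np.clip((z - z_min) / (z_max - z_min) * grid[2], 0, grid[2] - 1).astype(int)
    return np.stack([r_idx, p_idx, z_idx], axis=1)

if __name__ == "__main__":
    pts = np.random.uniform([-40, -40, -3], [40, 40, 2], size=(1000, 3))
    ids = cylindrical_voxel_ids(pts)
    print(ids.shape, ids.min(axis=0), ids.max(axis=0))
```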