Weather-Magician: Reconstruction and Rendering Framework for 4D Weather Synthesis In Real Time
- URL: http://arxiv.org/abs/2505.19919v1
- Date: Mon, 26 May 2025 12:44:53 GMT
- Title: Weather-Magician: Reconstruction and Rendering Framework for 4D Weather Synthesis In Real Time
- Authors: Chen Sang, Yeqiang Qian, Jiale Zhang, Chunxiang Wang, Ming Yang,
- Abstract summary: We propose a framework based on Gaussian splatting to reconstruct real scenes and render them under synthesized 4D weather effects. Our work supports continuous dynamic weather changes and can easily control the details of the effects.
- Score: 28.860317925222954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For tasks such as urban digital twins, VR/AR/game scene design, or synthetic film creation, the traditional industrial approach often involves manually modeling scenes and using various rendering engines to complete the rendering process. This approach typically requires high labor costs and hardware demands, and can yield poor quality when replicating complex real-world scenes. A more efficient approach is to capture data from real-world scenes, then apply reconstruction and rendering algorithms to quickly recreate the authentic scene. However, current algorithms cannot effectively reconstruct and render real-world weather effects. To address this, we propose a framework based on Gaussian splatting that can reconstruct real scenes and render them under synthesized 4D weather effects. Our work can simulate various common weather effects through Gaussian modeling and rendering techniques. It supports continuous dynamic weather changes and allows easy control over the details of the effects. Additionally, our work has low hardware requirements and achieves real-time rendering performance. Result demos can be accessed on our project homepage: weathermagician.github.io
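The abstract describes layering time-varying weather Gaussians over a static reconstructed scene. A minimal conceptual sketch of that idea, not the paper's actual implementation, might represent rain as a set of Gaussian particle centers that translate over time and wrap within the scene volume; all names here (`RainGaussians`, `fall_speed`) are hypothetical:

```python
# Illustrative sketch only: rain as time-varying Gaussian particle centers
# layered over a static scene volume. Not the paper's code.
import numpy as np

class RainGaussians:
    """Gaussian particle centers that fall along -y and wrap within bounds."""

    def __init__(self, n, bounds, fall_speed=9.0, seed=0):
        rng = np.random.default_rng(seed)
        self.lo = np.asarray(bounds[0], dtype=float)
        self.hi = np.asarray(bounds[1], dtype=float)
        # Random initial particle centers inside the axis-aligned volume.
        self.centers = rng.uniform(self.lo, self.hi, size=(n, 3))
        self.fall_speed = fall_speed  # scene units per second along -y

    def at_time(self, t):
        """Return particle centers at time t; particles wrap to the top."""
        c = self.centers.copy()
        height = self.hi[1] - self.lo[1]
        # Shift downward by fall_speed * t, wrapping modulo the volume height.
        c[:, 1] = self.lo[1] + (c[:, 1] - self.lo[1] - self.fall_speed * t) % height
        return c

rain = RainGaussians(n=1000, bounds=([-10.0, 0.0, -10.0], [10.0, 20.0, 10.0]))
frame = rain.at_time(0.5)  # centers half a second into the animation
```

In a full pipeline each center would carry the usual Gaussian-splatting attributes (covariance, opacity, color) and be rasterized together with the reconstructed scene; the continuous time parameter is what makes the weather "4D" and lets intensity be dialed by changing particle count or speed.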
Related papers
- PromptVFX: Text-Driven Fields for Open-World 3D Gaussian Animation [49.91188543847175]
We reformulate 3D animation as a field prediction task and introduce a text-driven framework that infers a time-varying 4D flow field acting on 3D Gaussians. By leveraging large language models (LLMs) and vision-language models (VLMs) for function generation, our approach interprets arbitrary prompts and instantly updates the color, opacity, and positions of 3D Gaussians in real time.
arXiv Detail & Related papers (2025-06-01T17:22:59Z) - WeatherEdit: Controllable Weather Editing with 4D Gaussian Field [5.240297013713328]
We present WeatherEdit, a novel weather editing pipeline for generating realistic weather effects in 3D scenes. Our approach is structured into two key components: weather background editing and weather particle construction. Experiments on multiple driving datasets demonstrate that WeatherEdit can generate diverse weather effects with controllable condition severity.
arXiv Detail & Related papers (2025-05-26T19:10:47Z) - Controllable Weather Synthesis and Removal with Video Diffusion Models [61.56193902622901]
WeatherWeaver is a video diffusion model that synthesizes diverse weather effects directly into any input video. Our model provides precise control over weather effect intensity and supports blending various weather types, ensuring both realism and adaptability.
arXiv Detail & Related papers (2025-05-01T17:59:57Z) - RainyGS: Efficient Rain Synthesis with Physically-Based Gaussian Splatting [28.60412760466588]
We introduce RainyGS, a novel approach to generate dynamic rain effects in open-world scenes with physical accuracy. At the core of our method is the integration of physically-based raindrop and shallow water simulation techniques within the fast 3DGS rendering framework. Our method supports synthesizing rain effects at over 30 fps, offering users flexible control over rain intensity.
arXiv Detail & Related papers (2025-03-27T12:35:03Z) - DiffusionRenderer: Neural Inverse and Forward Rendering with Video Diffusion Models [83.28670336340608]
We introduce DiffusionRenderer, a neural approach that addresses the dual problem of inverse and forward rendering. Our model enables practical applications from a single video input, including relighting, material editing, and realistic object insertion.
arXiv Detail & Related papers (2025-01-30T18:59:11Z) - EvaSurf: Efficient View-Aware Implicit Textured Surface Reconstruction [53.28220984270622]
3D reconstruction methods should generate high-fidelity results with 3D consistency in real time. Our method can reconstruct high-quality appearance and accurate mesh on both synthetic and real-world datasets. Our method can be trained in just 1-2 hours using a single GPU and run on mobile devices at over 40 FPS (frames per second).
arXiv Detail & Related papers (2023-11-16T11:30:56Z) - Real-Time Neural Rasterization for Large Scenes [39.198327570559684]
We propose a new method for realistic real-time novel-view synthesis of large scenes.
Existing neural rendering methods generate realistic results, but primarily work for small scale scenes.
Our work is the first to enable real-time rendering of large real-world scenes.
arXiv Detail & Related papers (2023-11-09T18:59:10Z) - UE4-NeRF: Neural Radiance Field for Real-Time Rendering of Large-Scale Scene [52.21184153832739]
We propose a novel neural rendering system called UE4-NeRF, specifically designed for real-time rendering of large-scale scenes.
Our approach integrates with the rasterization pipeline in Unreal Engine 4 (UE4), achieving real-time rendering of large-scale scenes at 4K resolution with a frame rate of up to 43 FPS.
arXiv Detail & Related papers (2023-10-20T04:01:35Z) - Efficient Meshy Neural Fields for Animatable Human Avatars [87.68529918184494]
Efficiently digitizing high-fidelity animatable human avatars from videos is a challenging and active research topic.
Recent rendering-based neural representations open a new way for human digitization with their friendly usability and photo-realistic reconstruction quality.
We present EMA, a method that Efficiently learns Meshy neural fields to reconstruct animatable human Avatars.
arXiv Detail & Related papers (2023-03-23T00:15:34Z) - Neural Assets: Volumetric Object Capture and Rendering for Interactive Environments [8.258451067861932]
We propose an approach for capturing real-world objects in everyday environments faithfully and fast.
We use a novel neural representation to reconstruct effects, such as translucent object parts, and preserve object appearance.
This leads to a seamless integration of the proposed neural assets with existing mesh environments and objects.
arXiv Detail & Related papers (2022-12-12T18:55:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.