Seam360GS: Seamless 360° Gaussian Splatting from Real-World Omnidirectional Images
- URL: http://arxiv.org/abs/2508.20080v1
- Date: Wed, 27 Aug 2025 17:46:46 GMT
- Title: Seam360GS: Seamless 360° Gaussian Splatting from Real-World Omnidirectional Images
- Authors: Changha Shin, Woong Oh Cho, Seon Joo Kim
- Abstract summary: We introduce a novel calibration framework that incorporates a dual-fisheye camera model into the 3D Gaussian splatting pipeline. Our approach not only simulates the realistic visual artifacts produced by dual-fisheye cameras but also enables the synthesis of seamlessly rendered 360-degree images.
- Score: 22.213607618728705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 360-degree visual content is widely shared on platforms such as YouTube and plays a central role in virtual reality, robotics, and autonomous navigation. However, consumer-grade dual-fisheye systems consistently yield imperfect panoramas due to inherent lens separation and angular distortions. In this work, we introduce a novel calibration framework that incorporates a dual-fisheye camera model into the 3D Gaussian splatting pipeline. Our approach not only simulates the realistic visual artifacts produced by dual-fisheye cameras but also enables the synthesis of seamlessly rendered 360-degree images. By jointly optimizing 3D Gaussian parameters alongside calibration variables that emulate lens gaps and angular distortions, our framework transforms imperfect omnidirectional inputs into flawless novel view synthesis. Extensive evaluations on real-world datasets confirm that our method produces seamless renderings, even from imperfect images, and outperforms existing 360-degree rendering models.
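The abstract describes jointly optimizing Gaussian parameters with calibration variables that emulate lens gaps and angular distortions. As a rough, hypothetical illustration of the geometry involved (not the paper's actual calibration model), an ideal equidistant dual-fisheye projection with a fixed lens-gap offset along the optical axis might be sketched as:

```python
import numpy as np

def dual_fisheye_project(points, fov_deg=190.0, lens_gap=0.01, width=1920):
    """Project 3D points into a front/back pair of equidistant fisheye images.

    Hypothetical sketch: the paper's learned calibration variables
    (lens gap, per-lens angular distortion) are reduced here to a
    single fixed baseline offset `lens_gap` along the z (optical) axis.
    Returns pixel coordinates and a mask of which lens sees each point.
    """
    pts = np.asarray(points, dtype=float)
    front = pts[:, 2] >= 0.0
    # Each lens sits half a lens gap from the rig center on the z-axis;
    # the back lens sees the scene with z flipped.
    shifted = pts.copy()
    shifted[:, 2] -= np.where(front, lens_gap / 2, -lens_gap / 2)
    shifted[~front, 2] *= -1.0
    x, y, z = shifted[:, 0], shifted[:, 1], shifted[:, 2]
    theta = np.arctan2(np.hypot(x, y), z)  # angle from the optical axis
    phi = np.arctan2(y, x)                 # azimuth around the axis
    # Equidistant model: sensor radius is proportional to theta.
    r = theta / np.deg2rad(fov_deg / 2) * (width / 2)
    u = width / 2 + r * np.cos(phi)
    v = width / 2 + r * np.sin(phi)
    return np.stack([u, v], axis=1), front
```

A nonzero `lens_gap` makes the two lens projections disagree near the seam, which is the kind of artifact the paper's joint optimization is said to model away.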
Related papers
- OMEGA-Avatar: One-shot Modeling of 360° Gaussian Avatars [54.688420347927725]
OMEGA-Avatar is the first framework that simultaneously generates a generalizable, 360-complete, and animatable 3D Gaussian head from a single image. We show that OMEGA-Avatar achieves state-of-the-art performance, significantly outperforming existing baselines in 360 full-head completeness.
arXiv Detail & Related papers (2026-02-12T08:16:38Z) - Physically Aware 360$^\circ$ View Generation from a Single Image using Disentangled Scene Embeddings [0.0]
We introduce Disentangled360, a 3D-aware method that integrates the advantages of direction-disentangled volume rendering with single-image 360 view synthesis. Disentangled360 facilitates mixed-reality medical supervision, robotic perception, and immersive content creation.
arXiv Detail & Related papers (2025-12-11T05:20:24Z) - Dual-Projection Fusion for Accurate Upright Panorama Generation in Robotic Vision [9.05196155518077]
This study presents a dual-stream angle-aware generation network that jointly estimates camera inclination angles and reconstructs upright panoramic images. Experiments on the SUN360 and M3D datasets demonstrate that our method outperforms existing approaches in both inclination estimation and upright panorama generation.
arXiv Detail & Related papers (2025-11-30T14:28:21Z) - 3D Gaussian Flats: Hybrid 2D/3D Photometric Scene Reconstruction [62.84879632157956]
We propose a novel hybrid 2D/3D representation that jointly optimizes constrained planar (2D) Gaussians for modeling flat surfaces and freeform (3D) Gaussians for the rest of the scene. Our end-to-end approach dynamically detects and refines planar regions, improving both visual fidelity and geometric accuracy. It achieves state-of-the-art depth estimation on ScanNet++ and ScanNetv2, and excels at mesh extraction without overfitting to a specific camera model.
arXiv Detail & Related papers (2025-09-19T21:04:36Z) - DiffPortrait360: Consistent Portrait Diffusion for 360 View Synthesis [11.51144219543605]
We introduce a novel approach that generates fully consistent 360-degree head views. By training on continuous view sequences and integrating a back reference image, our approach achieves robust, locally continuous view synthesis. Our model can be used to produce high-quality neural radiance fields (NeRFs) for real-time, free-viewpoint rendering.
arXiv Detail & Related papers (2025-03-19T19:47:04Z) - IM360: Textured Mesh Reconstruction for Large-scale Indoor Mapping with 360$^\circ$ Cameras [53.53895891356167]
We present a novel 3D reconstruction pipeline for 360$^\circ$ cameras for 3D mapping and rendering of indoor environments. Our approach (IM360) leverages the wide field of view of omnidirectional images and integrates the spherical camera model into every core component of the SfM pipeline. We evaluate our pipeline on large-scale indoor scenes from the Matterport3D and Stanford2D3D datasets.
arXiv Detail & Related papers (2025-02-18T05:15:19Z) - SC-OmniGS: Self-Calibrating Omnidirectional Gaussian Splatting [29.489453234466982]
SC-OmniGS is a novel self-calibrating system for fast and accurate radiance field reconstruction using 360-degree images. We introduce a differentiable omnidirectional camera model in order to rectify the distortion of real-world data for performance enhancement.
arXiv Detail & Related papers (2025-02-07T08:06:30Z) - Splatter-360: Generalizable 360$^{\circ}$ Gaussian Splatting for Wide-baseline Panoramic Images [52.48351378615057]
Splatter-360 is a novel end-to-end generalizable 3DGS framework to handle wide-baseline panoramic images. We introduce a 3D-aware bi-projection encoder to mitigate the distortions inherent in panoramic images. This enables robust 3D-aware feature representations and real-time rendering capabilities.
arXiv Detail & Related papers (2024-12-09T06:58:31Z) - Hybrid bundle-adjusting 3D Gaussians for view consistent rendering with pose optimization [2.8990883469500286]
We introduce a hybrid bundle-adjusting 3D Gaussians model that enables view-consistent rendering with pose optimization.
This model jointly extracts image-based and neural 3D representations to simultaneously generate view-consistent images and camera poses within forward-facing scenes.
arXiv Detail & Related papers (2024-10-17T07:13:00Z) - DreamScene360: Unconstrained Text-to-3D Scene Generation with Panoramic Gaussian Splatting [56.101576795566324]
We present a text-to-3D 360$^\circ$ scene generation pipeline.
Our approach utilizes the generative power of a 2D diffusion model and prompt self-refinement.
Our method offers a globally consistent 3D scene within a 360$^\circ$ perspective.
arXiv Detail & Related papers (2024-04-10T10:46:59Z) - Close-up View synthesis by Interpolating Optical Flow [17.800430382213428]
The virtual viewpoint is a new technique in virtual navigation, as yet not well supported due to missing depth information and unknown camera parameters.
We develop a bidirectional optical flow method that obtains any virtual viewpoint by proportionally interpolating the optical flow.
By applying these interpolated optical flow values, we achieve clear, visually faithful magnified results through lens stretching in any corner of the image.
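The proportional-flow idea above can be sketched as follows. This is a hypothetical minimal version (nearest-neighbour backward warping and a simple linear blend), not the authors' implementation:

```python
import numpy as np

def interpolate_view(img0, img1, flow01, flow10, t):
    """Synthesize an intermediate view at fraction t between two images.

    Hypothetical sketch of proportional optical flow interpolation:
    each source image is backward-warped by a t-scaled version of the
    flow toward the other view, and the two warps are linearly blended.
    flow01/flow10 are (H, W, 2) arrays of (dx, dy) displacements.
    """
    h, w = img0.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]

    def warp(img, flow, scale):
        # Backward warp with nearest-neighbour sampling for brevity.
        sx = np.clip(np.round(xs + scale * flow[..., 0]), 0, w - 1).astype(int)
        sy = np.clip(np.round(ys + scale * flow[..., 1]), 0, h - 1).astype(int)
        return img[sy, sx]

    warped0 = warp(img0, flow01, t)        # image 0 pushed toward view 1
    warped1 = warp(img1, flow10, 1.0 - t)  # image 1 pushed toward view 0
    return (1.0 - t) * warped0 + t * warped1
```

With `t = 0` this returns image 0 unchanged and with `t = 1` image 1; intermediate values trace out the virtual viewpoints in between.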
arXiv Detail & Related papers (2023-07-12T04:40:00Z) - Moving in a 360 World: Synthesizing Panoramic Parallaxes from a Single Panorama [13.60790015417166]
We present Omnidirectional Neural Radiance Fields (OmniNeRF), the first method for parallax-enabled novel panoramic view synthesis.
We propose to augment the single RGB-D panorama by projecting back and forth between a 3D world and different 2D panoramic coordinates at different virtual camera positions.
As a result, the proposed OmniNeRF achieves convincing renderings of novel panoramic views that exhibit the parallax effect.
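The back-and-forth projection between a 3D world and 2D panoramic coordinates that OmniNeRF relies on can be illustrated with a standard equirectangular mapping. The conventions below (axis order, latitude direction) are assumptions for the sketch, not taken from the paper:

```python
import numpy as np

def panorama_to_directions(height, width):
    """Unit 3D view directions for each pixel of an equirectangular panorama.

    Assumed convention: longitude spans [-pi, pi) across columns,
    latitude spans [pi/2, -pi/2] down rows, sampled at pixel centers.
    """
    lon = (np.arange(width) + 0.5) / width * 2 * np.pi - np.pi
    lat = np.pi / 2 - (np.arange(height) + 0.5) / height * np.pi
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)

def directions_to_pixels(dirs, height, width):
    """Inverse mapping: project unit view directions back to panorama pixels."""
    x, y, z = dirs[..., 0], dirs[..., 1], dirs[..., 2]
    lon = np.arctan2(x, z)
    lat = np.arcsin(np.clip(y, -1.0, 1.0))
    u = (lon + np.pi) / (2 * np.pi) * width - 0.5
    v = (np.pi / 2 - lat) / np.pi * height - 0.5
    return u, v
```

Shifting the virtual camera position before re-projecting through `directions_to_pixels` is what introduces the parallax the abstract describes.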
arXiv Detail & Related papers (2021-06-21T05:08:34Z) - Towards Realistic 3D Embedding via View Alignment [53.89445873577063]
This paper presents an innovative View Alignment GAN (VA-GAN) that composes new images by embedding 3D models into 2D background images realistically and automatically.
VA-GAN consists of a texture generator and a differential discriminator that are inter-connected and end-to-end trainable.
arXiv Detail & Related papers (2020-07-14T14:45:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.