CoRe-GS: Coarse-to-Refined Gaussian Splatting with Semantic Object Focus
- URL: http://arxiv.org/abs/2509.04859v2
- Date: Tue, 16 Sep 2025 08:36:06 GMT
- Title: CoRe-GS: Coarse-to-Refined Gaussian Splatting with Semantic Object Focus
- Authors: Hannah Schieber, Dominik Frischmann, Victor Schaack, Simon Boche, Angela Schoellig, Stefan Leutenegger, Daniel Roth
- Abstract summary: CoRe-GS is a semantic POI-focused extension of Gaussian Splatting (GS). CoRe-GS first produces a fast segmentation-ready GS representation and then selectively refines splats belonging to semantically relevant POIs detected during data acquisition. This targeted refinement reduces training time to 25% compared to full semantic GS.
- Score: 11.571783968082343
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mobile reconstruction has the potential to support time-critical tasks such as tele-guidance and disaster response, where operators must quickly gain an accurate understanding of the environment. Full high-fidelity scene reconstruction is computationally expensive and often unnecessary when only specific points of interest (POIs) matter for timely decision making. We address this challenge with CoRe-GS, a semantic POI-focused extension of Gaussian Splatting (GS). Instead of optimizing every scene element uniformly, CoRe-GS first produces a fast segmentation-ready GS representation and then selectively refines splats belonging to semantically relevant POIs detected during data acquisition. This targeted refinement reduces training time to 25% compared to full semantic GS while improving novel view synthesis quality in the areas that matter most. We validate CoRe-GS on both real-world (SCRREAM) and synthetic (NeRDS 360) datasets, demonstrating that prioritizing POIs enables faster and higher-quality mobile reconstruction tailored to operational needs.
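The abstract outlines a two-stage pipeline: train a coarse, segmentation-ready scene first, then keep optimizing only the splats tagged with POI semantics. Below is a minimal sketch of that selective refinement step, not the authors' implementation; the parameter layout, the `render_fn` renderer, and the per-splat labels are all assumptions for illustration.

```python
# Hedged sketch of POI-focused refinement: freeze the coarse background and
# continue optimizing only splats whose semantic label marks a point of interest.
import torch

def refine_poi_splats(params, labels, poi_ids, views, render_fn, steps=500, lr=1e-3):
    """params: dict of per-splat tensors, e.g. {"xyz": (N, 3), "rgb": (N, 3)}
    labels: (N,) long tensor of per-splat semantic class ids
    poi_ids: iterable of class ids flagged as POIs during acquisition
    views: list of (camera, gt_image) pairs; render_fn(params, camera) -> image
    """
    poi_mask = torch.zeros_like(labels, dtype=torch.bool)
    for cid in poi_ids:
        poi_mask |= labels == cid

    # Only POI splats receive gradients; the coarse background stays frozen.
    trainable = {k: v[poi_mask].clone().requires_grad_(True) for k, v in params.items()}
    frozen = {k: v[~poi_mask].detach() for k, v in params.items()}
    opt = torch.optim.Adam(trainable.values(), lr=lr)

    for step in range(steps):
        cam, gt = views[step % len(views)]
        merged = {k: torch.cat([frozen[k], trainable[k]]) for k in params}
        loss = torch.nn.functional.l1_loss(render_fn(merged, cam), gt)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return trainable, poi_mask
```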
Related papers
- ConeGS: Error-Guided Densification Using Pixel Cones for Improved Reconstruction with Fewer Primitives [27.616880434926646]
3D Gaussian Splatting achieves state-of-the-art image quality and real-time performance in novel view synthesis. A key limitation stems from cloning-based densification, which propagates Gaussians along existing geometry. We present ConeGS, an image-space-informed densification framework that is independent of the existing scene geometry state.
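The summary suggests placing new Gaussians from image-space error rather than by cloning neighbors. A toy sketch of that idea, assuming a per-pixel error map and a depth estimate along each pixel's viewing cone; helper names and the depth source are illustrative, not ConeGS's actual interface.

```python
# Back-project the worst-rendered pixels to 3D seed positions for new Gaussians,
# independent of where existing splats sit. Purely illustrative.
import numpy as np

def spawn_from_error(error_map, depth_map, cam_K, cam_R, cam_t, top_k=256):
    """error_map, depth_map: (H, W) arrays; cam_K: 3x3 intrinsics;
    cam_R, cam_t: world-from-camera rotation and translation."""
    h, w = error_map.shape
    worst = np.argsort(error_map.ravel())[-top_k:]        # highest-error pixels
    ys, xs = np.unravel_index(worst, (h, w))
    z = depth_map[ys, xs]                                  # depth along each pixel cone
    pix = np.stack([xs, ys, np.ones_like(xs)]).astype(np.float64)
    rays = np.linalg.inv(cam_K) @ pix                      # camera-space ray directions
    pts_cam = rays * z                                     # point on each ray at depth z
    return (cam_R @ pts_cam).T + cam_t                     # (top_k, 3) new centers
```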
arXiv Detail & Related papers (2025-11-10T07:54:58Z)
- Edge Collaborative Gaussian Splatting with Integrated Rendering and Communication [69.23838350582764]
We present edge collaborative GS (ECO-GS), where each user can switch between a small local GS model and a large remote GS model that guarantees fidelity. We propose integrated rendering and communication (IRAC), which jointly optimizes rendering status and edge power allocation.
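As a rough illustration of the switching idea: use the remote large model only when the link can deliver its frames within the latency budget, otherwise fall back to the small on-device model. The thresholds and function below are assumptions, not the paper's IRAC formulation.

```python
# Toy local/remote switch: fidelity from the edge when the network allows it,
# responsiveness from the on-device model otherwise.
def choose_renderer(link_rate_mbps, remote_latency_ms, frame_budget_ms, min_rate_mbps=20.0):
    if link_rate_mbps >= min_rate_mbps and remote_latency_ms <= frame_budget_ms:
        return "remote"   # large edge-hosted GS model
    return "local"        # small on-device GS model

# Example: a 30 ms frame budget over a 50 Mbps link with 18 ms edge latency.
assert choose_renderer(50.0, 18.0, 30.0) == "remote"
```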
arXiv Detail & Related papers (2025-10-26T15:33:29Z)
- ObjectGS: Object-aware Scene Reconstruction and Scene Understanding via Gaussian Splatting [54.92763171355442]
ObjectGS is an object-aware framework that unifies 3D scene reconstruction with semantic understanding. We show through experiments that ObjectGS outperforms state-of-the-art methods on open-vocabulary and panoptic segmentation tasks.
arXiv Detail & Related papers (2025-07-21T10:06:23Z)
- RGE-GS: Reward-Guided Expansive Driving Scene Reconstruction via Diffusion Priors [54.81109375939306]
RGE-GS is a novel expansive reconstruction framework that synergizes diffusion-based generation with reward-guided Gaussian integration. We propose a reward network that learns to identify and prioritize consistently generated patterns prior to the reconstruction phase. During reconstruction, we devise a differentiated training strategy that automatically adjusts Gaussian optimization progress according to scene convergence metrics.
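One way to picture the reward-guided step: score each diffusion-generated view and let only confident generations feed the Gaussian optimization. A hedged sketch; `reward_model` and the threshold are stand-ins for whatever RGE-GS actually learns.

```python
# Gate generated views by a learned consistency reward before reconstruction.
import torch

def filter_generated_views(views, reward_model, threshold=0.7):
    kept = []
    with torch.no_grad():
        for img in views:                            # img: (3, H, W) in [0, 1]
            score = reward_model(img.unsqueeze(0)).item()
            if score >= threshold:
                kept.append((img, score))            # score could also weight the loss
    return kept
```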
arXiv Detail & Related papers (2025-06-28T08:02:54Z)
- CL-Splats: Continual Learning of Gaussian Splatting with Local Optimization [80.3004354618963]
This paper introduces CL-Splats, which incrementally updates 3D representations from sparse scene captures. CL-Splats integrates a robust change-detection module that segments updated and static components within the scene. Our experiments demonstrate that CL-Splats achieves efficient updates with improved reconstruction quality over the state-of-the-art.
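A change-detection module of this kind can be pictured as comparing a render of the current model against a new capture and masking regions that disagree, so only local Gaussians get re-optimized. The sketch below, with an assumed threshold and box-filter smoothing, is illustrative rather than CL-Splats's detector.

```python
# Mark pixels where the current model no longer matches a fresh capture.
import numpy as np

def change_mask(rendered, captured, thresh=0.1, kernel=5):
    """rendered, captured: (H, W, 3) arrays; returns a boolean (H, W) mask."""
    diff = np.abs(rendered.astype(np.float32) - captured.astype(np.float32)).mean(axis=-1)
    pad = kernel // 2
    padded = np.pad(diff, pad, mode="edge")
    # Box filter to suppress single-pixel noise before thresholding.
    smooth = np.stack([padded[i:i + diff.shape[0], j:j + diff.shape[1]]
                       for i in range(kernel) for j in range(kernel)]).mean(axis=0)
    return smooth > thresh   # True where the scene likely changed
```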
arXiv Detail & Related papers (2025-06-26T09:32:37Z)
- SplArt: Articulation Estimation and Part-Level Reconstruction with 3D Gaussian Splatting [15.098827709119087]
We introduce SplArt, a self-supervised, category-agnostic framework to reconstruct articulated objects and infer kinematics from two sets of posed RGB images. SplArt exploits geometric self-supervision, effectively addressing challenging scenarios without requiring 3D annotations or category-specific priors. Evaluations on established and newly proposed benchmarks, along with applications to real-world scenarios using a handheld RGB camera, demonstrate SplArt's state-of-the-art performance and real-world practicality.
arXiv Detail & Related papers (2025-06-04T05:53:16Z)
- MetaGen Blended RAG: Unlocking Zero-Shot Precision for Specialized Domain Question-Answering [0.0]
We introduce 'MetaGen Blended RAG', a novel enterprise search approach that enhances semantic retrievers. By leveraging key concepts, topics, and acronyms, our method creates metadata-enriched semantic indexes and boosted hybrid queries. On the biomedical PubMedQA dataset, MetaGen Blended RAG achieves 82% retrieval accuracy and 77% RAG accuracy, surpassing all prior zero-shot RAG benchmarks.
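A boosted hybrid query of the kind described can be sketched as blending sparse matches over metadata-enriched fields (with acronyms boosted) and dense embedding similarity. Field names, weights, and the fusion rule below are assumptions for illustration.

```python
# Blend metadata keyword matching with dense similarity into one ranking score.
import math

def hybrid_scores(query_terms, query_vec, docs, alpha=0.5):
    """docs: dicts like {"keywords": set, "acronyms": set, "vec": list[float]}."""
    scored = []
    for doc in docs:
        # Sparse part: overlap with enriched metadata, acronym hits boosted 2x.
        sparse = sum(1.0 for t in query_terms if t in doc["keywords"])
        sparse += sum(2.0 for t in query_terms if t in doc["acronyms"])
        # Dense part: cosine similarity against the document embedding.
        dot = sum(a * b for a, b in zip(query_vec, doc["vec"]))
        norm = (math.sqrt(sum(a * a for a in query_vec))
                * math.sqrt(sum(b * b for b in doc["vec"])))
        dense = dot / norm if norm else 0.0
        scored.append(alpha * sparse + (1 - alpha) * dense)
    return scored
```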
arXiv Detail & Related papers (2025-05-23T17:18:45Z)
- GS-LTS: 3D Gaussian Splatting-Based Adaptive Modeling for Long-Term Service Robots [33.19663755125912]
3D Gaussian Splatting (3DGS) has garnered significant attention in robotics for its explicit, high-fidelity dense scene representation. We propose GS-LTS (Gaussian Splatting for Long-Term Service), a 3DGS-based system enabling indoor robots to manage diverse tasks in dynamic environments over time.
arXiv Detail & Related papers (2025-03-22T11:26:47Z)
- MGSR: 2D/3D Mutual-boosted Gaussian Splatting for High-fidelity Surface Reconstruction under Various Light Conditions [6.4367384921445545]
Novel view synthesis (NVS) and surface reconstruction (SR) are essential tasks in 3D Gaussian Splatting (3D-GS). We propose MGSR, a 2D/3D mutual-boosted Gaussian Splatting method for surface reconstruction that enhances both rendering quality and 3D reconstruction accuracy. We evaluate MGSR on a diverse set of synthetic and real-world datasets, at both object and scene levels, demonstrating strong performance in rendering and surface reconstruction.
arXiv Detail & Related papers (2025-03-07T07:06:47Z)
- Chain-of-Retrieval Augmented Generation [91.02950964802454]
This paper introduces an approach for training o1-like RAG models that retrieve and reason over relevant information step by step before generating the final answer. Our proposed method, CoRAG, allows the model to dynamically reformulate the query based on the evolving state.
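The described loop (retrieve, reason, reformulate, repeat, then answer) can be sketched as below; `retrieve`, `llm_reason`, and `llm_answer` are assumed callables standing in for CoRAG's trained components, not its API.

```python
# Chain-of-retrieval control loop: the query evolves as evidence accumulates.
def chain_of_retrieval(question, retrieve, llm_reason, llm_answer, max_steps=4):
    evidence, query = [], question
    for _ in range(max_steps):
        evidence.extend(retrieve(query))              # step-wise retrieval
        query, done = llm_reason(question, evidence)  # reformulate from evolving state
        if done:                                      # model judges evidence sufficient
            break
    return llm_answer(question, evidence)
```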
arXiv Detail & Related papers (2025-01-24T09:12:52Z)
- T-3DGS: Removing Transient Objects for 3D Scene Reconstruction [83.05271859398779]
Transient objects in video sequences can significantly degrade the quality of 3D scene reconstructions. We propose T-3DGS, a novel framework that robustly filters out transient distractors during 3D reconstruction using Gaussian Splatting.
arXiv Detail & Related papers (2024-11-29T07:45:24Z)
- SpotlessSplats: Ignoring Distractors in 3D Gaussian Splatting [44.42317312908314]
3D Gaussian Splatting (3DGS) is a promising technique for 3D reconstruction, offering efficient training and rendering speeds.
Current methods require highly controlled environments to meet the inter-view consistency assumption of 3DGS.
We present SpotLessSplats, an approach that leverages pre-trained and general-purpose features coupled with robust optimization to effectively ignore transient distractors.
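Robust optimization against distractors is often implemented by down-weighting outlier pixels so transients stop driving the photometric loss. The trimmed loss below is a generic stand-in for that idea, not SpotLessSplats's feature-based method.

```python
# Trimmed photometric loss: keep only the lowest-residual pixels each step.
import torch

def robust_photometric_loss(rendered, target, keep_fraction=0.8):
    """rendered, target: (3, H, W) tensors; returns a scalar loss."""
    residual = (rendered - target).abs().mean(dim=0).flatten()  # per-pixel error
    k = int(keep_fraction * residual.numel())
    kept, _ = torch.topk(residual, k, largest=False)            # drop likely distractors
    return kept.mean()
```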
arXiv Detail & Related papers (2024-06-28T17:07:11Z)
- Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion [54.197343533492486]
Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion.
Experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks.
Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.
arXiv Detail & Related papers (2024-06-05T06:06:03Z)
- FSGS: Real-Time Few-shot View Synthesis using Gaussian Splatting [58.41056963451056]
We propose a few-shot view synthesis framework based on 3D Gaussian Splatting.
This framework enables real-time and photo-realistic view synthesis with as few as three training views.
FSGS achieves state-of-the-art performance in both accuracy and rendering efficiency across diverse datasets.
arXiv Detail & Related papers (2023-12-01T09:30:02Z)
- Revisiting the Encoding of Satellite Image Time Series [2.5874041837241304]
Satellite Image Time Series (SITS) temporal learning is complex due to high temporal resolutions and irregular acquisition times.
We develop a novel perspective of SITS processing as a direct set prediction problem, inspired by the recent trend in adopting query-based transformer decoders.
We attain new state-of-the-art (SOTA) results on the PASTIS satellite benchmark dataset.
arXiv Detail & Related papers (2023-05-03T12:44:20Z)
- Grounded Situation Recognition [56.18102368133022]
We introduce Grounded Situation Recognition (GSR), a task that requires producing structured semantic summaries of images.
GSR presents important technical challenges: identifying semantic saliency, categorizing and localizing a large and diverse set of entities.
We show initial findings on three exciting future directions enabled by our models: conditional querying, visual chaining, and grounded semantic aware image retrieval.
arXiv Detail & Related papers (2020-03-26T17:57:52Z)