Large Images are Gaussians: High-Quality Large Image Representation with Levels of 2D Gaussian Splatting
- URL: http://arxiv.org/abs/2502.09039v1
- Date: Thu, 13 Feb 2025 07:48:56 GMT
- Title: Large Images are Gaussians: High-Quality Large Image Representation with Levels of 2D Gaussian Splatting
- Authors: Lingting Zhu, Guying Lin, Jinnan Chen, Xinjie Zhang, Zhenchao Jin, Zhao Wang, Lequan Yu
- Abstract summary: We present Large Images are Gaussians (LIG), which delves deeper into the application of 2DGS for image representations.
- Score: 21.629316414488027
- License:
- Abstract: While Implicit Neural Representations (INRs) have demonstrated significant success in image representation, they are often hindered by large training memory and slow decoding speed. Recently, Gaussian Splatting (GS) has emerged as a promising solution in 3D reconstruction due to its high-quality novel view synthesis and rapid rendering capabilities, positioning it as a valuable tool for a broad spectrum of applications. In particular, a GS-based representation, 2DGS, has shown potential for image fitting. In our work, we present Large Images are Gaussians (LIG), which delves deeper into the application of 2DGS for image representations, addressing the challenge of fitting large images with 2DGS when numerous Gaussian points are required, through two distinct modifications: 1) we adopt a variant of the representation and optimization strategy, facilitating the fitting of a large number of Gaussian points; 2) we propose a Level-of-Gaussian approach for reconstructing both a coarse low-frequency initialization and fine high-frequency details. Consequently, we successfully represent large images as Gaussian points and achieve high-quality large image representation, demonstrating its efficacy across various types of large images. Code is available at https://github.com/HKU-MedAI/LIG.
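To make the underlying primitive concrete, the following is a minimal, self-contained PyTorch sketch of fitting an image (or one tile of a large image) with anisotropic 2D Gaussians by gradient descent, with a second, finer set of Gaussians fitted to the residual as a loose stand-in for the Level-of-Gaussian idea. This is not the authors' implementation: the tile size, point counts, two-level schedule, and parameterization are illustrative assumptions; see the official repository above for the actual method.

```python
# Minimal sketch: an image as a sum of anisotropic 2D Gaussians, fitted in two levels.
import torch
import torch.nn.functional as F

def render(params, H, W):
    """Splat all anisotropic 2D Gaussians into an H x W x 3 image (dense, no tiling)."""
    xy, log_scale, theta, color, logit_alpha = params
    ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
    pix = torch.stack([xs, ys], dim=-1).reshape(-1, 2)                        # (HW, 2)
    d = pix[:, None, :] - torch.sigmoid(xy)[None, :, :]                       # (HW, N, 2)
    c, s = torch.cos(theta), torch.sin(theta)
    R = torch.stack([torch.stack([c, -s], -1), torch.stack([s, c], -1)], -2)  # (N, 2, 2)
    inv_var = torch.exp(-2.0 * log_scale)                                     # (N, 2)
    inv_cov = R @ (inv_var[..., None] * R.transpose(-1, -2))                  # (N, 2, 2)
    mahal = torch.einsum("pni,nij,pnj->pn", d, inv_cov, d)                    # (HW, N)
    w = torch.sigmoid(logit_alpha)[None, :] * torch.exp(-0.5 * mahal)         # (HW, N)
    return (w @ color).reshape(H, W, 3)

def make_gaussians(n, init_log_scale):
    return [torch.nn.Parameter(t) for t in (
        torch.randn(n, 2),                          # centers, mapped to [0,1]^2 by sigmoid
        torch.full((n, 2), float(init_log_scale)),  # per-axis log standard deviations
        torch.zeros(n),                             # rotation angles
        torch.randn(n, 3) * 0.1,                    # colors (unconstrained, so residuals can be negative)
        torch.zeros(n),                             # opacity logits
    )]

def fit(target, n, init_log_scale, steps=400, lr=1e-2):
    params = make_gaussians(n, init_log_scale)
    opt = torch.optim.Adam(params, lr=lr)
    H, W, _ = target.shape
    for _ in range(steps):
        opt.zero_grad()
        F.mse_loss(render(params, H, W), target).backward()
        opt.step()
    return params

if __name__ == "__main__":
    H = W = 64
    target = torch.rand(H, W, 3)  # stand-in for one tile of a large image
    # Level 1: few, broad Gaussians capture the coarse, low-frequency content.
    coarse = fit(target, n=128, init_log_scale=-2.0)
    # Level 2: many, narrow Gaussians are fit to the residual to recover fine detail.
    with torch.no_grad():
        residual = target - render(coarse, H, W)
    fine = fit(residual, n=512, init_log_scale=-3.5)
    with torch.no_grad():
        recon = render(coarse, H, W) + render(fine, H, W)
        print("final MSE:", F.mse_loss(recon, target).item())
```

A practical implementation would replace the dense pixels-by-Gaussians evaluation with tiled rasterization; the dense form is used here only to keep the sketch short.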
Related papers
- GaussianToken: An Effective Image Tokenizer with 2D Gaussian Splatting [64.84383010238908]
We propose an effective image tokenizer with 2D Gaussian Splatting as a solution.
In general, our framework integrates the local influence of the 2D Gaussian distribution into the discrete space.
Competitive reconstruction performance on CIFAR, Mini-ImageNet, and ImageNet-1K demonstrates the effectiveness of our framework.
arXiv Detail & Related papers (2025-01-26T17:56:11Z)
- Generalized and Efficient 2D Gaussian Splatting for Arbitrary-scale Super-Resolution [10.074968164380314]
Implicit Neural Representation (INR) has been successfully employed for Arbitrary-scale Super-Resolution (ASR).
However, it is computationally expensive because the network must be queried numerous times to render each pixel.
Recently, Gaussian Splatting (GS) has shown its advantages over INR in both visual quality and rendering speed in 3D tasks.
arXiv Detail & Related papers (2025-01-12T15:14:58Z)
- SmileSplat: Generalizable Gaussian Splats for Unconstrained Sparse Images [91.28365943547703]
A novel generalizable Gaussian Splatting method, SmileSplat, is proposed to reconstruct pixel-aligned Gaussian surfels for diverse scenarios.
The proposed method achieves state-of-the-art performance in various 3D vision tasks.
arXiv Detail & Related papers (2024-11-27T05:52:28Z)
- PixelGaussian: Generalizable 3D Gaussian Reconstruction from Arbitrary Views [116.10577967146762]
PixelGaussian is an efficient framework for learning generalizable 3D Gaussian reconstruction from arbitrary views.
Our method achieves state-of-the-art performance with good generalization to various numbers of views.
arXiv Detail & Related papers (2024-10-24T17:59:58Z)
- Compact 3D Gaussian Splatting for Static and Dynamic Radiance Fields [13.729716867839509]
We propose a learnable mask strategy that significantly reduces the number of Gaussians while preserving high performance.
In addition, we propose a compact but effective representation of view-dependent color by employing a grid-based neural field.
Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering.
arXiv Detail & Related papers (2024-08-07T14:56:34Z)
- Image-GS: Content-Adaptive Image Representation via 2D Gaussians [55.15950594752051]
We propose Image-GS, a content-adaptive image representation.
Using anisotropic 2D Gaussians as the basis, Image-GS shows high memory efficiency, supports fast random access, and offers a natural level of detail stack.
General efficiency and fidelity of Image-GS are validated against several recent neural image representations and industry-standard texture compressors.
We hope this research offers insights for developing new applications that require adaptive quality and resource control, such as machine perception, asset streaming, and content generation.
arXiv Detail & Related papers (2024-07-02T00:45:21Z)
- GaussianObject: High-Quality 3D Object Reconstruction from Four Views with Gaussian Splatting [82.29476781526752]
Reconstructing and rendering 3D objects from highly sparse views is of critical importance for promoting applications of 3D vision techniques.
GaussianObject is a framework for representing and rendering 3D objects with Gaussian splatting that achieves high rendering quality with only four input images.
GaussianObject is evaluated on several challenging datasets, including MipNeRF360, OmniObject3D, OpenIllumination, and our own collection of unposed images.
arXiv Detail & Related papers (2024-02-15T18:42:33Z)
- GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering [112.16239342037714]
GES (Generalized Exponential Splatting) is a novel representation that employs the Generalized Exponential Function (GEF) to model 3D scenes.
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks.
arXiv Detail & Related papers (2024-02-15T17:32:50Z)
- Compact 3D Gaussian Representation for Radiance Field [14.729871192785696]
We propose a learnable mask strategy to reduce the number of 3D Gaussian points without sacrificing performance.
We also propose a compact but effective representation of view-dependent color by employing a grid-based neural field.
Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering.
arXiv Detail & Related papers (2023-11-22T20:31:16Z)
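The two "Compact 3D Gaussian" entries above hinge on a learnable mask that removes redundant Gaussians during training. Below is a minimal sketch of one common way such a mask can be realized: a per-Gaussian logit binarized with a straight-through estimator and pushed toward zero by a sparsity penalty. The toy objective, point count, and loss weight are illustrative assumptions, the grid-based view-dependent color field is not covered, and this is not the papers' actual training setup.

```python
# Minimal sketch: learnable per-Gaussian mask with a straight-through estimator.
import torch

# Illustrative sizes and weights; the real papers operate inside a full
# 3D Gaussian splatting pipeline, which is replaced here by a toy objective.
N = 10_000
opacity_logit = torch.nn.Parameter(torch.randn(N))  # stands in for all other Gaussian attributes
mask_logit = torch.nn.Parameter(torch.ones(N))      # start with every Gaussian active

def hard_mask(logits):
    """Binary mask in the forward pass, sigmoid gradient in the backward pass (straight-through)."""
    soft = torch.sigmoid(logits)
    hard = (soft > 0.5).float()
    return hard + soft - soft.detach()

# Toy target: roughly half the Gaussians should contribute nothing and can be pruned.
target = (torch.rand(N) < 0.5).float() * torch.rand(N)
opt = torch.optim.Adam([opacity_logit, mask_logit], lr=1e-2)
sparsity_weight = 5e-4  # assumed trade-off coefficient

for step in range(2000):
    m = hard_mask(mask_logit)
    effective_opacity = m * torch.sigmoid(opacity_logit)
    render_loss = ((effective_opacity - target) ** 2).mean()  # stand-in for the rendering loss
    loss = render_loss + sparsity_weight * torch.sigmoid(mask_logit).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

keep = torch.sigmoid(mask_logit) > 0.5
print(f"Gaussians kept after pruning: {int(keep.sum())} / {N}")
```

After training, Gaussians whose mask stays below the threshold are dropped, which is where the reduction in point count comes from.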
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.