Coral Model Generation from Single Images for Virtual Reality Applications
- URL: http://arxiv.org/abs/2409.02376v1
- Date: Wed, 4 Sep 2024 01:54:20 GMT
- Title: Coral Model Generation from Single Images for Virtual Reality Applications
- Authors: Jie Fu, Shun Fu, Mick Grierson
- Abstract summary: This paper introduces a deep-learning framework that generates high-precision 3D coral models from a single image.
The project incorporates Explainable AI (XAI) to transform AI-generated models into interactive "artworks," best viewed in VR and XR.
- Score: 22.18438294137604
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: With the rapid development of VR technology, the demand for high-quality 3D models is increasing. Traditional methods struggle with efficiency and quality in large-scale customization. This paper introduces a deep-learning framework that generates high-precision 3D coral models from a single image. Using the Coral dataset, the framework extracts geometric and texture features, performs 3D reconstruction, and optimizes design and material blending. Advanced optimization and polygon count control ensure shape accuracy, detail retention, and flexible output for various complexities, catering to high-quality rendering and real-time interaction needs. The project incorporates Explainable AI (XAI) to transform AI-generated models into interactive "artworks," best viewed in VR and XR. This enhances model interpretability and human-machine collaboration. Real-time feedback in VR interactions displays information like coral species and habitat, enriching user experience. The generated models surpass traditional methods in detail, visual quality, and efficiency. This research offers an intelligent approach to 3D content creation for VR, lowering production barriers and promoting widespread VR applications. Additionally, integrating XAI provides new insights into AI-generated visual content and advances research in 3D vision interpretability.
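As a concrete illustration of two stages the abstract names (feature extraction and polygon count control), here is a minimal Python sketch. It is not the authors' code: the ResNet-50 backbone and Open3D quadric decimation are assumed stand-ins, and the paper's trained image-to-mesh decoder is left as a stub.

```python
# Minimal sketch of the described pipeline stages; NOT the authors' code.
# Assumptions: a ResNet-50 backbone stands in for the paper's feature
# extractor, and Open3D quadric decimation stands in for its polygon
# count control. The image-to-mesh decoder itself is not reproduced.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
import open3d as o3d

def extract_features(image_path: str) -> torch.Tensor:
    """Extract a global feature embedding from a single coral image."""
    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225]),
    ])
    backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # keep the 2048-d embedding
    backbone.eval()
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img)  # shape (1, 2048)

def control_polygon_count(mesh: o3d.geometry.TriangleMesh,
                          target_triangles: int) -> o3d.geometry.TriangleMesh:
    """Decimate a reconstructed mesh to a target triangle budget."""
    simplified = mesh.simplify_quadric_decimation(
        target_number_of_triangles=target_triangles)
    simplified.compute_vertex_normals()  # restore shading after decimation
    return simplified
```

Decimating the same reconstructed mesh to different triangle budgets is one plausible way to serve both needs the abstract mentions: a low budget for real-time VR interaction and a high one for offline high-quality rendering.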
Related papers
- Ascribe New Dimensions to Scientific Data Visualization with VR [1.9084093324993718]
This article introduces ASCRIBE-VR, a VR platform of Autonomous Solutions for Computational Research with Immersive Browsing & Exploration.
ASCRIBE-VR enables multimodal analysis, structural assessments, and immersive visualization, supporting scientific visualization of advanced datasets such as X-ray CT, Magnetic Resonance, and synthetic 3D imaging.
arXiv Detail & Related papers (2025-04-18T03:59:39Z)
- Generative AI Framework for 3D Object Generation in Augmented Reality [0.0]
This thesis integrates state-of-the-art generative AI models for real-time creation of 3D objects in augmented reality (AR) environments.
The framework demonstrates applications across industries such as gaming, education, retail, and interior design.
A significant contribution is democratizing 3D model creation, making advanced AI tools accessible to a broader audience.
arXiv Detail & Related papers (2025-02-21T17:01:48Z)
- TripoSG: High-Fidelity 3D Shape Synthesis using Large-Scale Rectified Flow Models [69.0220314849478]
TripoSG is a new streamlined shape diffusion paradigm capable of generating high-fidelity 3D meshes with precise correspondence to input images.
The resulting 3D shapes exhibit enhanced detail due to high-resolution capabilities and demonstrate exceptional fidelity to input images.
To foster progress and innovation in the field of 3D generation, we will make our model publicly available. (See the rectified-flow sketch after this list.)
arXiv Detail & Related papers (2025-02-10T16:07:54Z)
- AURORA: Automated Unleash of 3D Room Outlines for VR Applications [29.70242715879101]
AURORA is a novel method that leverages RGB-D images to automatically generate both purely virtual reality (VR) scenes and VR scenes combined with real-world elements.
AURORA integrates advanced techniques in image processing, segmentation, and 3D reconstruction to efficiently create realistic and detailed interior designs.
arXiv Detail & Related papers (2024-12-15T03:06:22Z)
- ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis [63.169364481672915]
We propose ViewCrafter, a novel method for synthesizing high-fidelity novel views of generic scenes from single or sparse images.
Our method takes advantage of the powerful generation capabilities of video diffusion model and the coarse 3D clues offered by point-based representation to generate high-quality video frames.
arXiv Detail & Related papers (2024-09-03T16:53:19Z)
- Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to 3D domains, and seek a stronger ability of 3D shape generation by improving auto-regressive models at capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z)
- VR-GS: A Physical Dynamics-Aware Interactive Gaussian Splatting System in Virtual Reality [39.53150683721031]
Our proposed VR-GS system represents a leap forward in human-centered 3D content interaction.
The components of our Virtual Reality system are designed for high efficiency and effectiveness.
arXiv Detail & Related papers (2024-01-30T01:28:36Z)
- En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing visually realistic 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z)
- HyperDreamer: Hyper-Realistic 3D Content Generation and Editing from a Single Image [94.11473240505534]
We introduce HyperDreamer, a tool for creating 3D content from a single image.
It is hyper-realistic enough for post-generation usage, as users can view, render, and edit the resulting 3D content from a full range of views.
We demonstrate the effectiveness of HyperDreamer in modeling region-aware materials with high-resolution textures and enabling user-friendly editing.
arXiv Detail & Related papers (2023-12-07T18:58:09Z)
- MuSHRoom: Multi-Sensor Hybrid Room Dataset for Joint 3D Reconstruction and Novel View Synthesis [26.710960922302124]
We propose a real-world Multi-Sensor Hybrid Room dataset (MuSHRoom).
Our dataset presents exciting challenges and requires state-of-the-art methods to be cost-effective and robust to noisy data and devices.
We benchmark several well-known pipelines on our dataset for joint 3D mesh reconstruction and novel view synthesis.
arXiv Detail & Related papers (2023-11-05T21:46:12Z)
- VR-NeRF: High-Fidelity Virtualized Walkable Spaces [55.51127858816994]
We present an end-to-end system for the high-fidelity capture, model reconstruction, and real-time rendering of walkable spaces in virtual reality using neural radiance fields.
arXiv Detail & Related papers (2023-11-05T02:03:14Z)
- Conditional Generative Modeling for Images, 3D Animations, and Video [4.422441608136163]
This dissertation attempts to drive innovation in the field of generative modeling for computer vision.
Research focuses on architectures that offer transformations of noise and visual data, and the application of encoder-decoder architectures for generative tasks and 3D content manipulation.
arXiv Detail & Related papers (2023-10-19T21:10:39Z)
- Deep3DSketch+: Rapid 3D Modeling from Single Free-hand Sketches [15.426513559370086]
We introduce a novel end-to-end approach, Deep3DSketch+, which performs 3D modeling using only a single free-hand sketch, without requiring multiple sketches or view information.
Experiments demonstrate the effectiveness of our approach, which achieves state-of-the-art (SOTA) performance on both synthetic and real datasets.
arXiv Detail & Related papers (2023-09-22T17:12:13Z)
- Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions.
We have developed a model with an astounding 3.6 billion trainable parameters, establishing it as the largest 3D shape generation model to date, named Argus-3D.
arXiv Detail & Related papers (2023-06-20T13:01:19Z)
- Simple and Effective Synthesis of Indoor 3D Scenes [78.95697556834536]
We study the problem of synthesizing immersive 3D indoor scenes from one or more images.
Our aim is to generate high-resolution images and videos from novel viewpoints.
We propose an image-to-image GAN that maps directly from reprojections of incomplete point clouds to full high-resolution RGB-D images.
arXiv Detail & Related papers (2022-04-06T17:54:46Z)
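The TripoSG entry above names large-scale rectified flow as its generative backbone. As a reference point only, the sketch below shows the core rectified-flow recipe on toy latent vectors; the network size, latent dimension, and Euler step count are hypothetical, and nothing here reproduces TripoSG's actual architecture or 3D latent space. The idea is to train a network to regress the straight-line velocity x1 - x0 between a noise sample x0 and a data sample x1, then generate by integrating that learned velocity field from t = 0 to t = 1.

```python
# Minimal rectified-flow sketch on toy latent vectors; illustrative only,
# not TripoSG's model. x0 ~ N(0, I) is noise, x1 is a data latent,
# x_t = (1 - t) * x0 + t * x1, and the regression target is v = x1 - x0.
import torch
import torch.nn as nn

LATENT_DIM = 64  # hypothetical size of a 3D shape latent

class VelocityNet(nn.Module):
    """Tiny MLP v_theta(x_t, t) approximating the velocity field."""
    def __init__(self, dim: int = LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, dim),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, t], dim=-1))

def training_step(model: VelocityNet, x1: torch.Tensor) -> torch.Tensor:
    """One rectified-flow regression step on a batch of data latents x1."""
    x0 = torch.randn_like(x1)                      # noise endpoint
    t = torch.rand(x1.size(0), 1)                  # t ~ U(0, 1)
    x_t = (1 - t) * x0 + t * x1                    # straight-line interpolant
    target = x1 - x0                               # constant velocity target
    return ((model(x_t, t) - target) ** 2).mean()  # MSE loss

@torch.no_grad()
def sample(model: VelocityNet, n: int, steps: int = 50) -> torch.Tensor:
    """Euler-integrate dx/dt = v_theta(x, t) from noise (t=0) to data (t=1)."""
    x = torch.randn(n, LATENT_DIM)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((n, 1), i * dt)
        x = x + dt * model(x, t)
    return x
```

In a shape-generation system like the one TripoSG describes, such sampled latents would then be decoded to meshes; here they are only placeholders for the flow mechanics.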
This list is automatically generated from the titles and abstracts of the papers in this site.