Seed3D 1.0: From Images to High-Fidelity Simulation-Ready 3D Assets
- URL: http://arxiv.org/abs/2510.19944v1
- Date: Wed, 22 Oct 2025 18:16:32 GMT
- Title: Seed3D 1.0: From Images to High-Fidelity Simulation-Ready 3D Assets
- Authors: Jiashi Feng, Xiu Li, Jing Lin, Jiahang Liu, Gaohong Liu, Weiqiang Lou, Su Ma, Guang Shi, Qinlong Wang, Jun Wang, Zhongcong Xu, Xuanyu Yi, Zihao Yu, Jianfeng Zhang, Yifan Zhu, Rui Chen, Jinxin Chi, Zixian Du, Li Han, Lixin Huang, Kaihua Jiang, Yuhan Li, Guan Luo, Shuguang Wang, Qianyi Wu, Fan Yang, Junyang Zhang, Xuanmeng Zhang
- Abstract summary: We present Seed3D 1.0, a foundation model that generates simulation-ready 3D assets from single images. Unlike existing 3D generation models, our system produces assets with accurate geometry, well-aligned textures, and realistic physically-based materials.
- Score: 63.67760219308476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing embodied AI agents requires scalable training environments that balance content diversity with physics accuracy. World simulators provide such environments but face distinct limitations: video-based methods generate diverse content but lack real-time physics feedback for interactive learning, while physics-based engines provide accurate dynamics but face scalability limitations from costly manual asset creation. We present Seed3D 1.0, a foundation model that generates simulation-ready 3D assets from single images, addressing the scalability challenge while maintaining physics rigor. Unlike existing 3D generation models, our system produces assets with accurate geometry, well-aligned textures, and realistic physically-based materials. These assets can be directly integrated into physics engines with minimal configuration, enabling deployment in robotic manipulation and simulation training. Beyond individual objects, the system scales to complete scene generation through assembling objects into coherent environments. By enabling scalable simulation-ready content creation, Seed3D 1.0 provides a foundation for advancing physics-based world simulators. Seed3D 1.0 is now available on https://console.volcengine.com/ark/region:ark+cn-beijing/experience/vision?modelId=doubao-seed3d-1-0-250928&tab=Gen3D
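The abstract notes that generated assets "can be directly integrated into physics engines with minimal configuration". A rigid-body engine needs more than geometry: it needs mass properties such as volume and center of mass, which is one reason watertight meshes matter for simulation-ready assets. The sketch below is not part of Seed3D's pipeline; it is a minimal, hypothetical illustration of deriving those properties from a closed triangle mesh using signed tetrahedron volumes (the divergence theorem), with a unit cube as a toy example.

```python
def mass_properties(vertices, faces, density=1.0):
    """Volume, mass, and center of mass of a closed triangle mesh.

    Each face (a, b, c) forms a tetrahedron with the origin; summing the
    signed volumes cancels the parts lying outside the solid, so the result
    is exact only when the mesh is watertight with outward-facing windings.
    """
    total_vol = 0.0
    cx = cy = cz = 0.0
    for a, b, c in faces:
        (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = (
            vertices[a], vertices[b], vertices[c])
        # Signed volume of tetrahedron (origin, v1, v2, v3) = det([v1;v2;v3]) / 6
        v = (x1 * (y2 * z3 - y3 * z2)
             - x2 * (y1 * z3 - y3 * z1)
             + x3 * (y1 * z2 - y2 * z1)) / 6.0
        total_vol += v
        # Tetrahedron centroid = mean of its four vertices (origin contributes 0)
        cx += v * (x1 + x2 + x3) / 4.0
        cy += v * (y1 + y2 + y3) / 4.0
        cz += v * (z1 + z2 + z3) / 4.0
    com = (cx / total_vol, cy / total_vol, cz / total_vol)
    return total_vol, density * total_vol, com

# Toy asset: a unit cube with one corner at the origin, 12 outward-wound triangles.
verts = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
tris = [(0,2,1),(0,3,2),(4,5,6),(4,6,7),(0,1,5),(0,5,4),
        (1,2,6),(1,6,5),(2,3,7),(2,7,6),(3,0,4),(3,4,7)]
vol, mass, com = mass_properties(verts, tris)
print(vol, com)  # -> volume 1.0, center of mass (0.5, 0.5, 0.5)
```

In practice an engine like MuJoCo or Isaac Sim computes these quantities itself from the asset file, but a non-watertight or inconsistently wound mesh will yield a wrong or even negative volume, which is why "simulation-ready" geometry is a stronger requirement than "renders correctly".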
Related papers
- PAT3D: Physics-Augmented Text-to-3D Scene Generation [47.18949891825537]
PAT3D generates 3D objects, infers their spatial relations, and organizes them into a hierarchical scene tree. A differentiable rigid-body simulator ensures realistic object interactions under gravity. Experiments demonstrate that PAT3D substantially outperforms prior approaches in physical plausibility, semantic consistency, and visual quality.
arXiv Detail & Related papers (2025-11-26T23:23:58Z)
- PhysX-Anything: Simulation-Ready Physical 3D Assets from Single Image [67.76547268461411]
PhysX-Anything is the first simulation-ready physical 3D generative framework. It produces high-quality sim-ready 3D assets with explicit geometry, articulation, and physical attributes. It reduces the number of tokens by 193x, enabling explicit geometry learning within standard VLM token budgets.
arXiv Detail & Related papers (2025-11-17T17:59:53Z)
- PhysX-3D: Physical-Grounded 3D Asset Generation [48.78065667043986]
Existing 3D generation primarily emphasizes geometry and textures while neglecting physically grounded modeling. We present PhysXNet, the first physics-grounded 3D dataset systematically annotated across five foundational dimensions. We also propose PhysXGen, a feed-forward framework for physics-grounded image-to-3D asset generation.
arXiv Detail & Related papers (2025-07-16T17:59:35Z)
- EmbodiedGen: Towards a Generative 3D World Engine for Embodied Intelligence [8.987157387248317]
EmbodiedGen is a foundational platform for interactive 3D world generation. It enables the scalable generation of high-quality, controllable, and photorealistic 3D assets at low cost.
arXiv Detail & Related papers (2025-06-12T11:43:50Z)
- R3D2: Realistic 3D Asset Insertion via Diffusion for Autonomous Driving Simulation [78.26308457952636]
This paper introduces R3D2, a lightweight, one-step diffusion model designed to overcome limitations in autonomous driving simulation. It enables realistic insertion of complete 3D assets into existing scenes by generating plausible rendering effects, such as shadows and consistent lighting, in real time. We show that R3D2 significantly enhances the realism of inserted assets, enabling use cases like text-to-3D asset insertion and cross-scene/dataset object transfer.
arXiv Detail & Related papers (2025-06-09T14:50:19Z)
- PhysGen3D: Crafting a Miniature Interactive World from a Single Image [31.41059199853702]
PhysGen3D is a novel framework that transforms a single image into an amodal, camera-centric, interactive 3D scene. At its core, PhysGen3D estimates the 3D shapes, poses, and physical and lighting properties of objects. We evaluate PhysGen3D's performance against closed-source state-of-the-art (SOTA) image-to-video models, including Pika, Kling, and Gen-3.
arXiv Detail & Related papers (2025-03-26T17:31:04Z)
- DreamPhysics: Learning Physics-Based 3D Dynamics with Video Diffusion Priors [75.83647027123119]
We propose to learn the physical properties of a material field with video diffusion priors. We then utilize a physics-based Material Point Method (MPM) simulator to generate 4D content with realistic motions.
arXiv Detail & Related papers (2024-06-03T16:05:25Z)
- Atlas3D: Physically Constrained Self-Supporting Text-to-3D for Simulation and Fabrication [50.541882834405946]
We introduce Atlas3D, an automatic and easy-to-implement text-to-3D method.
Our approach combines a novel differentiable simulation-based loss function with physically inspired regularization.
We verify Atlas3D's efficacy through extensive generation tasks and validate the resulting 3D models in both simulated and real-world environments.
arXiv Detail & Related papers (2024-05-28T18:33:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.