I-nteract 2.0: A Cyber-Physical System to Design 3D Models using Mixed
Reality Technologies and Deep Learning for Additive Manufacturing
- URL: http://arxiv.org/abs/2010.11025v1
- Date: Wed, 21 Oct 2020 14:13:21 GMT
- Title: I-nteract 2.0: A Cyber-Physical System to Design 3D Models using Mixed
Reality Technologies and Deep Learning for Additive Manufacturing
- Authors: Ammar Malik, Hugo Lhachemi, and Robert Shorten
- Abstract summary: I-nteract is a cyber-physical system that enables real-time interaction with both virtual and real artifacts to design 3D models for additive manufacturing.
This paper presents novel advances in the development of the interaction platform I-nteract to generate 3D models using both constructive solid geometry and artificial intelligence.
- Score: 2.7986973063309875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: I-nteract is a cyber-physical system that enables real-time interaction with
both virtual and real artifacts to design 3D models for additive manufacturing
by leveraging mixed reality technologies. This paper presents novel advances
in the development of the interaction platform I-nteract to generate 3D models
using both constructive solid geometry and artificial intelligence. The system
also enables the user to adjust the dimensions of the 3D models with respect to
their physical workspace. The effectiveness of the system is demonstrated by
generating 3D models of furniture (e.g., chairs and tables) and fitting them
into the physical space in a mixed reality environment.
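As a rough illustration of the constructive-solid-geometry and fit-to-workspace steps described in the abstract, the sketch below composes a toy table from box primitives and uniformly scales it so its bounding box fits inside measured room dimensions. This is a minimal sketch, not the authors' implementation: the `trimesh` library, the `make_table` helper, and the workspace measurements are assumptions made for illustration only.

```python
# Illustrative sketch: CSG-style composition of primitives plus
# fit-to-workspace scaling, loosely mirroring how a furniture model
# might be sized to a physical space. Assumes `trimesh` is installed.
import numpy as np
import trimesh


def make_table(top=(1.2, 0.7, 0.04), leg=(0.05, 0.05, 0.72)):
    """Build a toy table as a union (concatenation) of one slab and four legs."""
    parts = []
    slab = trimesh.creation.box(extents=top)
    slab.apply_translation([0.0, 0.0, leg[2] + top[2] / 2])
    parts.append(slab)
    for sx in (-1, 1):
        for sy in (-1, 1):
            l = trimesh.creation.box(extents=leg)
            l.apply_translation([sx * (top[0] - leg[0]) / 2,
                                 sy * (top[1] - leg[1]) / 2,
                                 leg[2] / 2])
            parts.append(l)
    return trimesh.util.concatenate(parts)


def fit_to_workspace(mesh, workspace_extents):
    """Uniformly scale `mesh` so its bounding box fits the workspace extents."""
    scale = float(np.min(np.asarray(workspace_extents) / mesh.extents))
    fitted = mesh.copy()
    fitted.apply_scale(scale)
    return fitted


table = make_table()
# Hypothetical workspace measured in the MR environment: 0.9 m x 0.6 m x 0.8 m.
fitted = fit_to_workspace(table, workspace_extents=[0.9, 0.6, 0.8])
print(fitted.extents)
```

A uniform scale factor is used here so the model's proportions are preserved while it shrinks (or grows) to the tightest constraining dimension; per-axis scaling would fill the workspace exactly but distort the furniture.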
Related papers
- PhysPart: Physically Plausible Part Completion for Interactable Objects [28.91080122885566]
We tackle the problem of physically plausible part completion for interactable objects.
We propose a diffusion-based part generation model that utilizes geometric conditioning.
We also demonstrate our applications in 3D printing, robot manipulation, and sequential part generation.
arXiv Detail & Related papers (2024-08-25T04:56:09Z)
- Haptic Repurposing with GenAI [5.424247121310253]
Mixed Reality aims to merge the digital and physical worlds to create immersive human-computer interactions.
This paper introduces Haptic Repurposing with GenAI, an innovative approach to enhance MR interactions by transforming any physical objects into adaptive haptic interfaces for AI-generated virtual assets.
arXiv Detail & Related papers (2024-06-11T13:06:28Z)
- Atlas3D: Physically Constrained Self-Supporting Text-to-3D for Simulation and Fabrication [50.541882834405946]
We introduce Atlas3D, an automatic and easy-to-implement text-to-3D method.
Our approach combines a novel differentiable simulation-based loss function with physically inspired regularization.
We verify Atlas3D's efficacy through extensive generation tasks and validate the resulting 3D models in both simulated and real-world environments.
arXiv Detail & Related papers (2024-05-28T18:33:18Z)
- Shaping Realities: Enhancing 3D Generative AI with Fabrication Constraints [36.65470465480772]
Generative AI tools are becoming more prevalent in 3D modeling, enabling users to manipulate or create new models with text or images as inputs.
These methods focus on the aesthetic quality of the 3D models, refining them to look similar to the prompts provided by the user.
When creating 3D models intended for fabrication, designers need to trade off the aesthetic qualities of a 3D model against its intended physical properties.
arXiv Detail & Related papers (2024-04-15T21:22:57Z)
- 3D-VLA: A 3D Vision-Language-Action Generative World Model [68.0388311799959]
Recent vision-language-action (VLA) models rely on 2D inputs, lacking integration with the broader realm of the 3D physical world.
We propose 3D-VLA by introducing a new family of embodied foundation models that seamlessly link 3D perception, reasoning, and action.
Our experiments on held-in datasets demonstrate that 3D-VLA significantly improves the reasoning, multimodal generation, and planning capabilities in embodied environments.
arXiv Detail & Related papers (2024-03-14T17:58:41Z)
- En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing realistic 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z)
- Pushing the Limits of 3D Shape Generation at Scale [65.24420181727615]
We present a significant breakthrough in 3D shape generation by scaling it to unprecedented dimensions.
We have developed a model with an astounding 3.6 billion trainable parameters, establishing it as the largest 3D shape generation model to date, named Argus-3D.
arXiv Detail & Related papers (2023-06-20T13:01:19Z)
- GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images [72.15855070133425]
We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes and human characters to buildings.
arXiv Detail & Related papers (2022-09-22T17:16:19Z)
- 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
We learn models for dynamic 3D scenes purely from 2D visual observations.
A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
arXiv Detail & Related papers (2021-07-08T17:49:37Z)
- ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation [75.0278287071591]
ThreeDWorld (TDW) is a platform for interactive multi-modal physical simulation.
TDW enables simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments.
We present initial experiments enabled by TDW in emerging research directions in computer vision, machine learning, and cognitive science.
arXiv Detail & Related papers (2020-07-09T17:33:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.