Virtual Trial Room with Computer Vision and Machine Learning
- URL: http://arxiv.org/abs/2412.10710v2
- Date: Tue, 17 Dec 2024 13:41:32 GMT
- Title: Virtual Trial Room with Computer Vision and Machine Learning
- Authors: Tulashi Prasad Joshi, Amrendra Kumar Yadav, Arjun Chhetri, Suraj Agrahari, Umesh Kanta Ghimire
- Abstract summary: Customers often hesitate to purchase wearable products due to lack of certainty regarding fit and suitability.
A platform called the Virtual Trial Room with Computer Vision and Machine Learning was designed to let customers easily check whether a product will fit and suit them.
An AI-generated 3D model of the human head was created from a single 2D image using the DECA model.
A custom-made 3D model of glasses, based on real-world measurements, was then superimposed on this 3D head model and fitted over it.
- Score: 0.0
- License:
- Abstract: Online shopping has revolutionized the retail industry, providing customers with convenience and accessibility. However, customers often hesitate to purchase wearable products such as watches, jewelry, glasses, shoes, and clothes due to the lack of certainty regarding fit and suitability. This leads to significant return rates, causing problems for both customers and vendors. To address this issue, a platform called the Virtual Trial Room with Computer Vision and Machine Learning was designed, which enables customers to easily check whether a product will fit and suit them. To achieve this, an AI-generated 3D model of the human head was created from a single 2D image using the DECA model. This 3D model was then superimposed with a custom-made 3D model of glasses, based on real-world measurements, and fitted over the human head. To replicate the real-world look and feel, the model was retouched with textures, lightness, and smoothness. Furthermore, a full-stack application was developed utilizing various front-end and back-end technologies. This application enables users to view 3D-generated results on the website, providing an immersive and interactive experience.
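The abstract describes a two-step pipeline: single-image head reconstruction with DECA, followed by geometric fitting of a glasses mesh built from real-world measurements. The paper itself provides no code, but the sketch below illustrates roughly how such a pipeline could be wired up in Python. It is a minimal illustration, not the authors' implementation: the DECA calls follow the public reference repository (https://github.com/YadiraF/DECA) and may differ by version, and the file names, scaling rule, and placement offsets are illustrative assumptions.

```python
"""Hedged sketch: reconstruct a head mesh from one photo, then overlay a glasses mesh."""
import numpy as np
import trimesh


def reconstruct_head(image_path: str, out_obj: str = "head.obj") -> str:
    """Run DECA on a single photo and export the reconstructed head as an OBJ file."""
    # These imports assume the DECA reference repo and its dependencies are installed.
    import torch
    from decalib.deca import DECA
    from decalib.utils.config import cfg as deca_cfg
    from decalib.datasets import datasets

    deca = DECA(config=deca_cfg, device="cpu")        # use "cuda" if a GPU is available
    testdata = datasets.TestData(image_path)          # detects, crops, and aligns the face
    images = testdata[0]["image"][None, ...]          # (1, 3, 224, 224) tensor
    with torch.no_grad():
        codedict = deca.encode(images)                # identity / expression / pose codes
        opdict, _ = deca.decode(codedict)             # vertices, textures, landmarks
    deca.save_obj(out_obj, opdict)                    # textured OBJ written to disk
    return out_obj


def fit_glasses(head_obj: str, glasses_obj: str, out_obj: str = "fitted.obj") -> str:
    """Scale a real-measurement glasses mesh to the head and place it over the face."""
    head = trimesh.load(head_obj, force="mesh")
    glasses = trimesh.load(glasses_obj, force="mesh")

    # Assumed fitting rule: match the frame width to the head width at the temples.
    head_width = head.bounds[1][0] - head.bounds[0][0]
    frame_width = glasses.bounds[1][0] - glasses.bounds[0][0]
    glasses.apply_scale(head_width / frame_width)

    # Illustrative placement: rest the bridge above and in front of the head centroid.
    head_height = head.bounds[1][1] - head.bounds[0][1]
    head_depth = head.bounds[1][2] - head.bounds[0][2]
    target = head.centroid + np.array([0.0, 0.15 * head_height, 0.45 * head_depth])
    glasses.apply_translation(target - glasses.centroid)

    combined = trimesh.util.concatenate([head, glasses])
    combined.export(out_obj)
    return out_obj


if __name__ == "__main__":
    head_path = reconstruct_head("selfie.jpg")        # hypothetical input photo
    print(fit_glasses(head_path, "glasses.obj"))      # hypothetical measured glasses mesh
```

In a real system the placement would more likely be driven by the facial landmarks DECA predicts (nose bridge, temples, ears) rather than the bounding-box heuristic used here, and the combined mesh would be served to the web front end for interactive viewing.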
Related papers
- Scalable Cloud-Native Pipeline for Efficient 3D Model Reconstruction from Monocular Smartphone Images [9.61065600471628]
We present a novel cloud-native pipeline that can automatically reconstruct 3D models from monocular 2D images captured using a smartphone camera.
Our solution produces a reusable 3D model, with embedded materials and textures, exportable and customizable in any external software or 3D engine.
arXiv Detail & Related papers (2024-09-28T11:15:26Z) - Coral Model Generation from Single Images for Virtual Reality Applications [22.18438294137604]
This paper introduces a deep-learning framework that generates high-precision 3D coral models from a single image.
The project incorporates Explainable AI (XAI) to transform AI-generated models into interactive "artworks".
arXiv Detail & Related papers (2024-09-04T01:54:20Z) - Shaping Realities: Enhancing 3D Generative AI with Fabrication Constraints [36.65470465480772]
Generative AI tools are becoming more prevalent in 3D modeling, enabling users to manipulate or create new models with text or images as inputs.
These methods focus on the aesthetic quality of the 3D models, refining them to look similar to the prompts provided by the user.
When creating 3D models intended for fabrication, designers need to trade off the aesthetic qualities of a 3D model against its intended physical properties.
arXiv Detail & Related papers (2024-04-15T21:22:57Z) - VRMM: A Volumetric Relightable Morphable Head Model [55.21098471673929]
We introduce the Volumetric Relightable Morphable Model (VRMM), a novel volumetric and parametric facial prior for 3D face modeling.
Our framework efficiently disentangles and encodes latent spaces of identity, expression, and lighting into low-dimensional representations.
We demonstrate the versatility and effectiveness of VRMM through various applications like avatar generation, facial reconstruction, and animation.
arXiv Detail & Related papers (2024-02-06T15:55:46Z) - Digital Modeling for Everyone: Exploring How Novices Approach Voice-Based 3D Modeling [0.0]
We explore novice mental models in voice-based 3D modeling by conducting a high-fidelity Wizard of Oz study with 22 participants.
We conclude with design implications for voice assistants.
For example, they have to: deal with vague, incomplete and wrong commands; provide a set of straightforward commands to shape simple and composite objects; and offer different strategies to select 3D objects.
arXiv Detail & Related papers (2023-07-10T11:03:32Z) - GET3D: A Generative Model of High Quality 3D Textured Shapes Learned from Images [72.15855070133425]
We introduce GET3D, a Generative model that directly generates Explicit Textured 3D meshes with complex topology, rich geometric details, and high-fidelity textures.
GET3D is able to generate high-quality 3D textured meshes, ranging from cars, chairs, animals, motorbikes and human characters to buildings.
arXiv Detail & Related papers (2022-09-22T17:16:19Z) - Towards 3D VR-Sketch to 3D Shape Retrieval [128.47604316459905]
We study the use of 3D sketches as an input modality and advocate a VR-scenario where retrieval is conducted.
As a first stab at this new 3D VR-sketch to 3D shape retrieval problem, we make four contributions.
arXiv Detail & Related papers (2022-09-20T22:04:31Z) - 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
We learn models for dynamic 3D scenes purely from 2D visual observations.
A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
arXiv Detail & Related papers (2021-07-08T17:49:37Z) - Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence [62.997667081978825]
Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMDs).
Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware.
We propose one of the first low-cost systems for this task which uses only open source, free software and affordable hardware.
arXiv Detail & Related papers (2020-11-06T23:17:12Z) - I-nteract 2.0: A Cyber-Physical System to Design 3D Models using Mixed Reality Technologies and Deep Learning for Additive Manufacturing [2.7986973063309875]
I-nteract is a cyber-physical system that enables real-time interaction with both virtual and real artifacts to design 3D models for additive manufacturing.
This paper presents novel advances in the development of the interaction platform I-nteract to generate 3D models using both constructive solid geometry and artificial intelligence.
arXiv Detail & Related papers (2020-10-21T14:13:21Z) - Interactive Annotation of 3D Object Geometry using 2D Scribbles [84.51514043814066]
In this paper, we propose an interactive framework for annotating 3D object geometry from point cloud data and RGB imagery.
Our framework targets naive users without artistic or graphics expertise.
arXiv Detail & Related papers (2020-08-24T21:51:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.