Augmented Object Intelligence: Making the Analog World Interactable with XR-Objects
- URL: http://arxiv.org/abs/2404.13274v2
- Date: Tue, 23 Apr 2024 03:09:15 GMT
- Title: Augmented Object Intelligence: Making the Analog World Interactable with XR-Objects
- Authors: Mustafa Doga Dogan, Eric J. Gonzalez, Andrea Colaco, Karan Ahuja, Ruofei Du, Johnny Lee, Mar Gonzalez-Franco, David Kim
- Abstract summary: This paper introduces Augmented Object Intelligence (AOI), a novel XR interaction paradigm designed to blur the lines between digital and physical.
We implement the AOI concept in the form of XR-Objects, an open-source prototype system.
This system enables analog objects to not only convey information but also to initiate digital actions, such as querying for details or executing tasks.
- Score: 18.574032913387573
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Seamless integration of physical objects as interactive digital entities remains a challenge for spatial computing. This paper introduces Augmented Object Intelligence (AOI), a novel XR interaction paradigm designed to blur the lines between digital and physical by equipping real-world objects with the ability to interact as if they were digital, where every object has the potential to serve as a portal to vast digital functionalities. Our approach utilizes object segmentation and classification, combined with the power of Multimodal Large Language Models (MLLMs), to facilitate these interactions. We implement the AOI concept in the form of XR-Objects, an open-source prototype system that provides a platform for users to engage with their physical environment in rich and contextually relevant ways. This system enables analog objects to not only convey information but also to initiate digital actions, such as querying for details or executing tasks. Our contributions are threefold: (1) we define the AOI concept and detail its advantages over traditional AI assistants, (2) we detail the XR-Objects system's open-source design and implementation, and (3) we show its versatility through a variety of use cases and a user study.
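The pipeline the abstract describes (object segmentation and classification feeding an MLLM) can be made concrete with a minimal sketch. This is not the XR-Objects implementation; `detect_objects`, `query_mllm`, and the `DetectedObject` type are hypothetical stand-ins for an on-device object detector and an MLLM API:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedObject:
    label: str                      # class name from the detector
    box: Tuple[int, int, int, int]  # (x, y, w, h) in screen coordinates

def detect_objects(frame) -> List[DetectedObject]:
    """Stand-in for an on-device segmentation/classification model."""
    return [DetectedObject("coffee bag", (120, 80, 200, 260))]

def query_mllm(image_crop, prompt: str) -> str:
    """Stand-in for a multimodal LLM call; a real system would send the crop plus the prompt."""
    return "Medium roast, 250 g. Suggested brew ratio: 1:16 coffee to water."

def on_user_selects(frame, obj: DetectedObject, action: str) -> str:
    """Anchor a context menu to the detected object and route the chosen action to the MLLM."""
    crop = frame  # real code would crop the frame to obj.box
    prompt = f"The user selected a {obj.label} and asked: {action}"
    return query_mllm(crop, prompt)

if __name__ == "__main__":
    frame = object()  # placeholder for a camera frame
    for obj in detect_objects(frame):
        print(obj.label, "->", on_user_selects(frame, obj, "How should I brew this?"))
```

The design point the paper emphasizes is that the interaction is anchored to the object itself (a spatial context menu) rather than routed through a disembodied assistant.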
Related papers
- Weak-to-Strong 3D Object Detection with X-Ray Distillation [75.47580744933724]
We propose a versatile technique that seamlessly integrates into any existing framework for 3D Object Detection.
X-Ray Distillation with Object-Complete Frames is suitable for both supervised and semi-supervised settings.
Our proposed methods surpass the state of the art in semi-supervised learning by 1-1.5 mAP.
arXiv Detail & Related papers (2024-03-31T13:09:06Z)
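A hedged sketch of the distillation idea summarized above: a teacher model sees "object-complete" frames (objects aggregated across multiple frames), a student sees ordinary single frames, and the student is trained to match the teacher's object features. The loss below is a generic feature-distillation objective under those assumptions, not the paper's exact formulation:

```python
import numpy as np

def distillation_loss(student_feats: np.ndarray, teacher_feats: np.ndarray) -> float:
    """Mean-squared distance between per-object student and teacher features.

    teacher_feats: from object-complete (multi-frame aggregated) input.
    student_feats: from a single ordinary frame of the same scene.
    """
    return float(np.mean((student_feats - teacher_feats) ** 2))

# Toy example: 4 objects with 128-dim features each.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 128))                       # object-complete view
student = teacher + rng.normal(scale=0.1, size=(4, 128))  # imperfect single-frame view
print(distillation_loss(student, teacher))
```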
- Chat-3D v2: Bridging 3D Scene and Large Language Models with Object Identifiers [62.232809030044116]
We introduce the use of object identifiers to freely reference objects during a conversation.
We propose a two-stage alignment method, which involves learning an attribute-aware token and a relation-aware token for each object.
Experiments conducted on traditional datasets like ScanQA, ScanRefer, and Nr3D/Sr3D showcase the effectiveness of our proposed method.
arXiv Detail & Related papers (2023-12-13T14:27:45Z)
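The object-identifier mechanism summarized above can be illustrated with a small sketch: each detected 3D object is assigned a unique token, and both the prompt and the user's questions reference objects through those tokens. The token format and names here are illustrative, not the authors' implementation:

```python
# Assign each detected object a unique identifier token so a conversation
# can reference objects unambiguously.
objects = ["red chair", "wooden table", "floor lamp"]
id_tokens = {f"<OBJ{i:03d}>": label for i, label in enumerate(objects)}

def build_prompt(question: str) -> str:
    # Expose the scene to the language model as identifier -> description pairs.
    scene = "\n".join(f"{tok}: {label}" for tok, label in id_tokens.items())
    return f"Scene objects:\n{scene}\n\nUser: {question}"

print(build_prompt("What is next to <OBJ001>?"))
```

In the paper's two-stage alignment, each identifier would additionally be grounded in learned attribute-aware and relation-aware tokens rather than plain text labels.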
- Physical Reasoning and Object Planning for Household Embodied Agents [21.719773664308683]
We introduce the CommonSense Object Affordance Task (COAT), a novel framework designed to analyze reasoning capabilities in commonsense scenarios.
COAT offers insights into the complexities of practical decision-making in real-world environments.
Our contributions include insightful Object-Utility mappings addressing the first consideration and two extensive QA datasets.
arXiv Detail & Related papers (2023-11-22T18:32:03Z)
- Towards a conceptual model for the FAIR Digital Object Framework [0.0]
The FAIR Digital Objects movement aims at an infrastructure where digital objects can be exposed and explored according to the FAIR principles.
The conceptual model covers aspects of digital objects that are relevant to the FAIR principles.
arXiv Detail & Related papers (2023-02-23T10:00:46Z)
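To make the "digital object" notion above concrete, here is a minimal, hedged sketch of a record with a persistent identifier, a machine-actionable type, and FAIR-relevant metadata. The field names are illustrative and are not taken from the paper's conceptual model:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class DigitalObject:
    pid: str                    # persistent identifier (e.g., a Handle or DOI)
    object_type: str            # machine-actionable type for interpreting the content
    metadata: Dict[str, str] = field(default_factory=dict)  # FAIR-relevant attributes
    bit_sequence_url: str = ""  # where the actual bit sequence can be retrieved

obj = DigitalObject(
    pid="21.T11148/example-0001",  # hypothetical Handle-style PID
    object_type="dataset",
    metadata={"license": "CC-BY-4.0", "creator": "Example Lab"},
    bit_sequence_url="https://repo.example.org/datasets/0001",
)
print(obj.pid, obj.metadata["license"])
```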
- Object Scene Representation Transformer [56.40544849442227]
We introduce Object Scene Representation Transformer (OSRT), a 3D-centric model in which individual object representations naturally emerge through novel view synthesis.
OSRT scales to significantly more complex scenes with larger diversity of objects and backgrounds than existing methods.
It is multiple orders of magnitude faster at compositional rendering thanks to its light field parametrization and the novel Slot Mixer decoder.
arXiv Detail & Related papers (2022-06-14T15:40:47Z)
- Complex-Valued Autoencoders for Object Discovery [62.26260974933819]
We propose a distributed approach to object-centric representations: the Complex AutoEncoder.
We show that this simple and efficient approach achieves better reconstruction performance than an equivalent real-valued autoencoder on simple multi-object datasets.
We also show that it achieves competitive unsupervised object discovery performance to a SlotAttention model on two datasets, and manages to disentangle objects in a third dataset where SlotAttention fails - all while being 7-70 times faster to train.
arXiv Detail & Related papers (2022-04-05T09:25:28Z)
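The Complex AutoEncoder summarized above uses complex-valued activations, where (roughly) magnitude carries feature strength and phase carries object grouping. The toy below illustrates only the phase-grouping intuition in NumPy; it is not the paper's architecture:

```python
import numpy as np

# Toy complex activations for 6 "pixels": magnitude = feature strength,
# phase = which object a pixel belongs to (the grouping signal).
rng = np.random.default_rng(1)
phases = np.array([0.10, 0.12, 0.11, 2.00, 2.05, 1.98])  # two phase clusters
mags = rng.uniform(0.5, 1.0, size=6)
activations = mags * np.exp(1j * phases)

# Recover object assignments by thresholding phase distance to a reference pixel.
phase = np.angle(activations)
labels = (np.abs(phase - phase[0]) > 1.0).astype(int)
print(labels)  # -> [0 0 0 1 1 1]: two discovered "objects"
```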
- ObjectFolder: A Dataset of Objects with Implicit Visual, Auditory, and Tactile Representations [52.226947570070784]
We present ObjectFolder, a dataset of 100 objects that addresses both challenges with two key innovations.
First, ObjectFolder encodes the visual, auditory, and tactile sensory data for all objects, enabling a number of multisensory object recognition tasks.
Second, ObjectFolder employs a uniform, object-centric, and implicit representation for each object's visual textures, acoustic simulations, and tactile readings, making the dataset flexible to use and easy to share.
arXiv Detail & Related papers (2021-09-16T14:00:59Z)
- O2O-Afford: Annotation-Free Large-Scale Object-Object Affordance Learning [24.9242853417825]
We propose a unified affordance learning framework to learn object-object interaction for various tasks.
We are able to conduct large-scale object-object affordance learning without the need for human annotations or demonstrations.
Experiments on large-scale synthetic data and real-world data prove the effectiveness of the proposed approach.
arXiv Detail & Related papers (2021-06-29T04:38:12Z)
- Where2Act: From Pixels to Actions for Articulated 3D Objects [54.19638599501286]
We extract highly localized actionable information related to elementary actions such as pushing or pulling for articulated objects with movable parts.
We propose a learning-from-interaction framework with an online data sampling strategy that allows us to train the network in simulation.
Our learned models even transfer to real-world data.
arXiv Detail & Related papers (2021-01-07T18:56:38Z)
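The Where2Act entry above predicts, for each point on an articulated object, how likely a primitive action such as pushing or pulling is to succeed there. The scorer below is a hypothetical stand-in for that learned per-point model (random scores in place of a trained network), shown only to make the input/output contract concrete:

```python
import numpy as np

def actionability_scores(points: np.ndarray, action: str) -> np.ndarray:
    """Stand-in for a learned per-point scorer: points is (N, 3); returns (N,)
    scores, higher where the given action is more likely to succeed."""
    rng = np.random.default_rng(abs(hash(action)) % (2**32))
    return rng.uniform(0.0, 1.0, size=len(points))

points = np.random.default_rng(0).uniform(-1.0, 1.0, size=(100, 3))  # sampled surface points
scores = actionability_scores(points, "pull")
print("Most pull-actionable point:", points[np.argmax(scores)])
```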
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.