Synthesizing Event-centric Knowledge Graphs of Daily Activities Using
Virtual Space
- URL: http://arxiv.org/abs/2307.16206v1
- Date: Sun, 30 Jul 2023 11:50:36 GMT
- Title: Synthesizing Event-centric Knowledge Graphs of Daily Activities Using
Virtual Space
- Authors: Shusaku Egami, Takanori Ugai, Mikiko Oono, Koji Kitamura, Ken Fukuda
- Abstract summary: This study proposes the VirtualHome2KG framework to generate synthetic KGs of daily life activities in virtual space.
This framework augments both the synthetic video data of daily activities and the contextual semantic data corresponding to the video contents.
This enables the development of various applications that have conventionally been difficult to build owing to the insufficient availability of relevant data and semantic information.
- Score: 0.3324876873771104
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial intelligence (AI) is expected to be embodied in software agents,
robots, and cyber-physical systems that can understand the various contextual
information of daily life in the home environment to support human behavior and
decision making in various situations. Scene graph and knowledge graph (KG)
construction technologies have attracted much attention for knowledge-based
embodied question answering that meets this expectation. However, collecting and
managing real data on daily activities under various experimental conditions in
a physical space are quite costly, and developing AI that understands the
intentions and contexts is difficult. In the future, data from both virtual
spaces, where conditions can be easily modified, and physical spaces, where
conditions are difficult to change, are expected to be combined to analyze
daily living activities. However, studies on constructing KGs of daily
activities using virtual space, and on their applications, have yet to
progress. The
potential and challenges must still be clarified to facilitate AI development
for human daily life. Thus, this study proposes the VirtualHome2KG framework to
generate synthetic KGs of daily life activities in virtual space. This
framework augments both the synthetic video data of daily activities and the
contextual semantic data corresponding to the video contents based on the
proposed event-centric schema and virtual space simulation results. Therefore,
context-aware data can be analyzed, enabling the development of various
applications that have conventionally been impractical owing to the
insufficient availability of relevant data and semantic information. We also
demonstrate
herein the utility and potential of the proposed VirtualHome2KG framework
through several use cases, including the analysis of daily activities by
querying, embedding, and clustering, and fall risk detection among ...
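
As a rough illustration of the querying use case, the sketch below shows how
one might retrieve the temporal chain of events from such an event-centric KG
with rdflib and SPARQL. The file name, namespace, and schema terms (ex:Event,
ex:action, ex:nextEvent) are hypothetical placeholders, not the actual
VirtualHome2KG vocabulary.

```python
# Minimal sketch: querying an event-centric activity KG with rdflib.
# All identifiers below (file name, namespace, ex:Event, ex:action,
# ex:nextEvent) are illustrative assumptions, not the published
# VirtualHome2KG schema.
from rdflib import Graph

g = Graph()
g.parse("daily_activities.ttl", format="turtle")  # hypothetical KG export

# For each event, fetch its action label and the event that follows it,
# reconstructing the temporal order of one day's activities.
query = """
PREFIX ex: <http://example.org/vh2kg/>
SELECT ?event ?action ?next
WHERE {
    ?event a ex:Event ;
           ex:action ?action .
    OPTIONAL { ?event ex:nextEvent ?next . }
}
"""
for event, action, nxt in g.query(query):
    print(event, action, nxt)
```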
Related papers
- Multimodal Datasets and Benchmarks for Reasoning about Dynamic Spatio-Temporality in Everyday Environments [4.024850952459759]
Our dataset measures the extent to which a robot can understand human behavior and the environment in a home setting.
Preliminary experiments suggest our dataset is useful in measuring AI's comprehension of daily life.
arXiv Detail & Related papers (2024-08-21T05:27:55Z) - Synthetic Multimodal Dataset for Empowering Safety and Well-being in
Home Environments [1.747623282473278]
This paper presents a synthetic multimodal dataset of daily activities that fuses video data from a 3D virtual space simulator with knowledge graphs.
The dataset is developed for the Knowledge Graph Reasoning Challenge for Social Issues (KGRC4SI), which focuses on identifying and addressing hazardous situations in the home environment.
arXiv Detail & Related papers (2024-01-26T10:05:41Z) - Towards Ubiquitous Semantic Metaverse: Challenges, Approaches, and
Opportunities [68.03971716740823]
In recent years, the ubiquitous semantic Metaverse has been studied as a way to revolutionize immersive cyber-virtual experiences for augmented reality (AR) and virtual reality (VR) users.
This survey focuses on representation and intelligence for the four fundamental system components of the ubiquitous semantic Metaverse.
arXiv Detail & Related papers (2023-07-13T11:14:46Z) - ArK: Augmented Reality with Knowledge Interactive Emergent Ability [115.72679420999535]
We develop an infinite agent that learns to transfer knowledge memory from general foundation models to novel domains.
The heart of our approach is an emerging mechanism, dubbed Augmented Reality with Knowledge Inference Interaction (ArK).
We show that our ArK approach, combined with large foundation models, significantly improves the quality of generated 2D/3D scenes.
arXiv Detail & Related papers (2023-05-01T17:57:01Z) - Sense The Physical, Walkthrough The Virtual, Manage The Metaverse: A
Data-centric Perspective [38.00882742808889]
In the Metaverse, the physical space and the virtual space coexist and interact simultaneously.
To allow users to process and manipulate information seamlessly between the real and digital spaces, novel technologies must be developed.
These include smart interfaces, new augmented realities, efficient storage and data management and dissemination techniques.
arXiv Detail & Related papers (2022-06-14T14:21:33Z) - Towards Everyday Virtual Reality through Eye Tracking [1.2691047660244335]
Eye tracking is an emerging technology that helps to assess human behaviors in a real time and non-intrusive way.
A significant scientific push towards everyday virtual reality has been completed with three main research contributions.
arXiv Detail & Related papers (2022-03-29T16:09:37Z) - Evaluating Continual Learning Algorithms by Generating 3D Virtual
Environments [66.83839051693695]
Continual learning refers to the ability of humans and animals to incrementally learn over time in a given environment.
We propose to leverage recent advances in 3D virtual environments in order to approach the automatic generation of potentially life-long dynamic scenes with photo-realistic appearance.
A novel element of this paper is that scenes are described in a parametric way, thus allowing the user to fully control the visual complexity of the input stream the agent perceives.
arXiv Detail & Related papers (2021-09-16T10:37:21Z) - BEHAVIOR: Benchmark for Everyday Household Activities in Virtual,
Interactive, and Ecological Environments [70.18430114842094]
We introduce BEHAVIOR, a benchmark for embodied AI with 100 activities in simulation.
These activities are designed to be realistic, diverse, and complex.
We include 500 human demonstrations in virtual reality (VR) to serve as the human ground truth.
arXiv Detail & Related papers (2021-08-06T23:36:23Z) - ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation [75.0278287071591]
ThreeDWorld (TDW) is a platform for interactive multi-modal physical simulation.
TDW enables simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments.
We present initial experiments enabled by TDW in emerging research directions in computer vision, machine learning, and cognitive science.
arXiv Detail & Related papers (2020-07-09T17:33:27Z) - RoboTHOR: An Open Simulation-to-Real Embodied AI Platform [56.50243383294621]
We introduce RoboTHOR to democratize research in interactive and embodied visual AI.
We show that a significant gap exists between the performance of models trained in simulation when they are tested in simulation versus in carefully constructed physical analogs.
arXiv Detail & Related papers (2020-04-14T20:52:49Z)