Sonic Interactions in Virtual Environments: the Egocentric Audio
Perspective of the Digital Twin
- URL: http://arxiv.org/abs/2204.09919v1
- Date: Thu, 21 Apr 2022 07:18:16 GMT
- Authors: Michele Geronazzo and Stefania Serafin
- Abstract summary: This chapter aims to transform studies related to sonic interactions in virtual environments into a research field equipped with the egocentric perspective of the auditory digital twin.
The guardian of such locus of agency is the auditory digital twin that fosters intra-actions between humans and technology.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The relationships between the listener, the physical world, and the
virtual environment (VE) should not only inspire the design of natural
multimodal interfaces but should also be examined to make sense of the
mediating action of VR
technologies. This chapter aims to transform an archipelago of studies related
to sonic interactions in virtual environments (SIVE) into a research field
equipped with a first theoretical framework with an inclusive vision of the
challenges to come: the egocentric perspective of the auditory digital twin. In
a VE with immersive audio technologies implemented, the role of VR simulations
must be enacted by a participatory exploration of sense-making in a network of
human and non-human agents, called actors. The guardian of such locus of agency
is the auditory digital twin that fosters intra-actions between humans and
technology, dynamically and fluidly redefining all those configurations that
are crucial for an immersive and coherent experience. The idea of entanglement
theory is here articulated mainly in an egocentric-spatial perspective related to
emerging knowledge of the listener's perceptual capabilities. This is an
actively transformative relation with the digital twin potentials to create
movement, transparency, and provocative activities in VEs. The chapter contains
an original theoretical perspective complemented by several bibliographical
references and links to the other book chapters that have contributed
significantly to the proposal presented here.
Related papers
- Haptic Repurposing with GenAI [5.424247121310253]
Mixed Reality aims to merge the digital and physical worlds to create immersive human-computer interactions.
This paper introduces Haptic Repurposing with GenAI, an approach to enhance MR interactions by transforming physical objects into adaptive haptic interfaces for AI-generated virtual assets.
arXiv Detail & Related papers (2024-06-11T13:06:28Z)
- Generative AI for Immersive Communication: The Next Frontier in Internet-of-Senses Through 6G [37.09299562399439]
The Internet of Senses (IoS) seeks to provide multi-sensory experiences, acknowledging that in our physical reality, our perception extends far beyond just sight and sound.
This article explores the existing technologies driving immersive multi-sensory media, delving into their capabilities and potential applications.
arXiv Detail & Related papers (2024-04-02T07:57:05Z)
- On the Emergence of Symmetrical Reality [51.21203247240322]
We introduce the symmetrical reality framework, which offers a unified representation encompassing various forms of physical-virtual amalgamations.
We propose an instance of an AI-driven active assistance service that illustrates the potential applications of symmetrical reality.
arXiv Detail & Related papers (2024-01-26T16:09:39Z)
- EgoGen: An Egocentric Synthetic Data Generator [53.32942235801499]
EgoGen is a new synthetic data generator that can produce accurate and rich ground-truth training data for egocentric perception tasks.
At the heart of EgoGen is a novel human motion synthesis model that directly leverages egocentric visual inputs of a virtual human to sense the 3D environment.
We demonstrate EgoGen's efficacy in three tasks: mapping and localization for head-mounted cameras, egocentric camera tracking, and human mesh recovery from egocentric views.
arXiv Detail & Related papers (2024-01-16T18:55:22Z)
- Digital Life Project: Autonomous 3D Characters with Social Intelligence [86.2845109451914]
Digital Life Project is a framework utilizing language as the universal medium to build autonomous 3D characters.
Our framework comprises two primary components: SocioMind and MoMat-MoGen.
arXiv Detail & Related papers (2023-12-07T18:58:59Z)
- The Seven Worlds and Experiences of the Wireless Metaverse: Challenges and Opportunities [58.42198877478623]
Wireless metaverse will create diverse user experiences at the intersection of the physical, digital, and virtual worlds.
We present a holistic vision of a limitless, wireless metaverse that distills the metaverse into an intersection of seven worlds and experiences.
We highlight the need for end-to-end synchronization of digital twins (DTs), and the role of human-level AI and reasoning abilities for cognitive avatars.
arXiv Detail & Related papers (2023-04-20T13:04:52Z)
- Narrator: Towards Natural Control of Human-Scene Interaction Generation via Relationship Reasoning [34.00107506891627]
We focus on naturally and controllably generating realistic and diverse human-scene interactions (HSIs) from textual descriptions.
We propose Narrator, a novel relationship reasoning-based generative approach.
Our experiments and perceptual studies show that Narrator can controllably generate diverse interactions and significantly outperform existing works.
arXiv Detail & Related papers (2023-03-16T15:44:15Z)
- Artificial Intelligence for the Metaverse: A Survey [66.57225253532748]
We first provide a preliminary overview of AI, including machine learning algorithms and deep learning architectures, and its role in the metaverse.
We then present a comprehensive investigation of AI-based methods concerning six technical aspects that have potential for the metaverse.
Several AI-aided applications, such as healthcare, manufacturing, smart cities, and gaming, are studied for deployment in virtual worlds.
arXiv Detail & Related papers (2022-02-15T03:34:56Z)
- ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation [75.0278287071591]
ThreeDWorld (TDW) is a platform for interactive multi-modal physical simulation.
TDW enables simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments.
We present initial experiments enabled by TDW in emerging research directions in computer vision, machine learning, and cognitive science.
arXiv Detail & Related papers (2020-07-09T17:33:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.