Sounding Bodies: Modeling 3D Spatial Sound of Humans Using Body Pose and Audio
- URL: http://arxiv.org/abs/2311.06285v1
- Date: Wed, 1 Nov 2023 16:40:35 GMT
- Title: Sounding Bodies: Modeling 3D Spatial Sound of Humans Using Body Pose and Audio
- Authors: Xudong Xu, Dejan Markovic, Jacob Sandakly, Todd Keebler, Steven Krenn,
Alexander Richard
- Abstract summary: We present a model that can generate accurate 3D spatial audio for full human bodies.
The system consumes, as input, audio signals from headset microphones and body pose.
We show that our model can produce accurate body-induced sound fields when trained with a suitable loss.
- Score: 50.39279046238891
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While 3D human body modeling has received much attention in computer vision,
modeling the acoustic equivalent, i.e. modeling 3D spatial audio produced by
body motion and speech, has fallen short in the community. To close this gap,
we present a model that can generate accurate 3D spatial audio for full human
bodies. The system consumes, as input, audio signals from headset microphones
and body pose, and produces, as output, a 3D sound field surrounding the
transmitter's body, from which spatial audio can be rendered at any arbitrary
position in the 3D space. We collect a first-of-its-kind multimodal dataset of
human bodies, recorded with multiple cameras and a spherical array of 345
microphones. In an empirical evaluation, we demonstrate that our model can
produce accurate body-induced sound fields when trained with a suitable loss.
Dataset and code are available online.
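At a high level, the described pipeline maps headset audio and body pose to a sound-field representation that can then be queried at any listener position. The sketch below only illustrates that interface under assumed tensor shapes and module choices; all names (e.g. SoundFieldModel, render_at) and dimensions are hypothetical, not the authors' released code.

```python
# Minimal interface sketch of the described pipeline (hypothetical names and shapes,
# not the authors' released code): headset audio + body pose -> sound-field features
# -> audio rendered at an arbitrary 3D listener position.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoundFieldModel(nn.Module):
    def __init__(self, n_mics=7, n_joints=25, feat_dim=256):
        # n_mics / n_joints are placeholders; the real channel counts are dataset-specific.
        super().__init__()
        self.audio_enc = nn.Conv1d(n_mics, feat_dim, kernel_size=1024, stride=256)
        self.pose_enc = nn.Linear(n_joints * 3, feat_dim)
        self.temporal = nn.GRU(2 * feat_dim, feat_dim, batch_first=True)

    def forward(self, headset_audio, body_pose):
        # headset_audio: (B, n_mics, T_samples); body_pose: (B, T_frames, n_joints * 3)
        a = self.audio_enc(headset_audio).transpose(1, 2)        # (B, T_a, feat)
        p = self.pose_enc(body_pose).transpose(1, 2)             # (B, feat, T_frames)
        p = F.interpolate(p, size=a.shape[1]).transpose(1, 2)    # align pose to audio rate
        field, _ = self.temporal(torch.cat([a, p], dim=-1))      # sound-field features
        return field

def render_at(field, listener_pos):
    """Placeholder: decode a waveform at a 3D listener position from the field."""
    raise NotImplementedError
```

In the paper's setup, the predicted field would presumably be supervised against recordings from the 345-microphone spherical array, with the rendering step decoding a waveform for any queried 3D position.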
Related papers
- 3D Audio-Visual Segmentation [44.61476023587931]
Recognizing the sounding objects in scenes is a longstanding objective in embodied AI, with diverse applications in robotics and AR/VR/MR.
We propose a new approach, EchoSegnet, which integrates ready-to-use knowledge from pretrained 2D audio-visual foundation models.
Experiments demonstrate that EchoSegnet can effectively segment sounding objects in 3D space on our new benchmark, representing a significant advancement in the field of embodied AI.
arXiv Detail & Related papers (2024-11-04T16:30:14Z)
- Modeling and Driving Human Body Soundfields through Acoustic Primitives [79.38642644610592]
We present a framework that allows for high-quality spatial audio generation, capable of rendering the full 3D soundfield generated by a human body.
We demonstrate that we can render the full acoustic scene at any point in 3D space efficiently and accurately.
Our acoustic primitives result in an order of magnitude smaller soundfield representations and overcome deficiencies in near-field rendering compared to previous approaches.
arXiv Detail & Related papers (2024-07-18T01:05:13Z)
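The gist of the acoustic-primitive idea above is to represent the body's soundfield with a small number of simple sources whose contributions are summed at the listener. The following is a generic free-field point-source sketch, not the paper's learned renderer; the primitive signals and positions (e.g. placed near body joints) are assumed to be given.

```python
# Generic free-field sketch: the soundfield at a listener is the sum of delayed,
# 1/r-attenuated contributions from a handful of point-source "primitives".
# Signals and positions are assumed inputs; this is not the paper's renderer.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def render_primitives(signals, positions, listener_pos, sr=48000):
    # signals: (n_primitives, n_samples); positions: (n_primitives, 3); listener_pos: (3,)
    n_samples = signals.shape[1]
    out = np.zeros(n_samples)
    for sig, pos in zip(signals, positions):
        r = np.linalg.norm(listener_pos - pos) + 1e-6    # source-listener distance (m)
        delay = int(round(r / SPEED_OF_SOUND * sr))      # propagation delay (samples)
        if delay < n_samples:
            out[delay:] += sig[: n_samples - delay] / r  # shift and attenuate
    return out
```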
- Novel-View Acoustic Synthesis from 3D Reconstructed Rooms [17.72902700567848]
We investigate the benefit of combining blind audio recordings with 3D scene information for novel-view acoustic synthesis.
We identify the main challenges of novel-view acoustic synthesis as sound source localization, separation, and dereverberation.
We show that incorporating room impulse responses (RIRs) derived from 3D reconstructed rooms enables the same network to jointly tackle these tasks.
arXiv Detail & Related papers (2023-10-23T17:34:31Z)
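Once an RIR between a source and a novel listener position is available, the rendering step itself reduces to convolution. The sketch below assumes the dry (separated, dereverberated) source signal and the RIR derived from the reconstructed room are already given; it shows only the standard convolution step, not the paper's network.

```python
# Standard rendering step once an RIR is known: reverberant audio at the listener is
# the dry source signal convolved with the room impulse response (RIR).
import numpy as np
from scipy.signal import fftconvolve

def render_with_rir(dry_source, rir):
    wet = fftconvolve(dry_source, rir, mode="full")  # length: len(dry) + len(rir) - 1
    peak = np.max(np.abs(wet)) + 1e-9
    return wet / peak  # simple peak normalization to avoid clipping
```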
- Listen2Scene: Interactive material-aware binaural sound propagation for reconstructed 3D scenes [69.03289331433874]
We present an end-to-end audio rendering approach (Listen2Scene) for virtual reality (VR) and augmented reality (AR) applications.
We propose a novel neural-network-based sound propagation method to generate acoustic effects for 3D models of real environments.
arXiv Detail & Related papers (2023-02-02T04:09:23Z)
- AudioEar: Single-View Ear Reconstruction for Personalized Spatial Audio [44.460995595847606]
We propose to achieve personalized spatial audio by reconstructing 3D human ears with single-view images.
To fill the gap between the vision and acoustics community, we develop a pipeline to integrate the reconstructed ear mesh with an off-the-shelf 3D human body.
arXiv Detail & Related papers (2023-01-30T02:15:50Z)
- SoundSpaces 2.0: A Simulation Platform for Visual-Acoustic Learning [127.1119359047849]
We introduce SoundSpaces 2.0, a platform for on-the-fly geometry-based audio rendering for 3D environments.
It generates highly realistic acoustics for arbitrary sounds captured from arbitrary microphone locations.
SoundSpaces 2.0 is publicly available to facilitate wider research for perceptual systems that can both see and hear.
arXiv Detail & Related papers (2022-06-16T17:17:44Z)
- Learning Speech-driven 3D Conversational Gestures from Video [106.15628979352738]
We propose the first approach to automatically and jointly synthesize synchronous 3D conversational body and hand gestures.
Our algorithm uses a CNN architecture that leverages the inherent correlation between facial expression and hand gestures.
We also contribute a new way to create a large corpus of more than 33 hours of annotated body, hand, and face data from in-the-wild videos of talking people.
arXiv Detail & Related papers (2021-02-13T01:05:39Z)
- S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling [103.65625425020129]
We represent the pedestrian's shape, pose and skinning weights as neural implicit functions that are directly learned from data.
We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods.
arXiv Detail & Related papers (2021-01-17T02:16:56Z)
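A neural implicit representation of this kind is, at its core, a network queried at 3D points. The sketch below shows one generic form, an MLP returning occupancy and per-bone skinning weights for each query point; the layer sizes, conditioning code, and bone count (24, as in SMPL-style skeletons) are assumptions, not the paper's architecture.

```python
# Generic sketch of a neural implicit field for human modeling: an MLP maps a 3D query
# point, conditioned on a shape/pose code, to occupancy and per-bone skinning weights.
# Layer sizes and the conditioning scheme are assumptions.
import torch
import torch.nn as nn

class ImplicitBodyField(nn.Module):
    def __init__(self, cond_dim=128, n_bones=24, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + n_bones),
        )

    def forward(self, points, cond):
        # points: (B, N, 3) query locations; cond: (B, cond_dim) shape/pose code
        cond = cond.unsqueeze(1).expand(-1, points.shape[1], -1)
        out = self.mlp(torch.cat([points, cond], dim=-1))
        occupancy = torch.sigmoid(out[..., :1])         # inside/outside probability
        skinning = torch.softmax(out[..., 1:], dim=-1)  # weights over bones
        return occupancy, skinning
```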
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.