Novel-View Acoustic Synthesis
- URL: http://arxiv.org/abs/2301.08730v3
- Date: Tue, 24 Oct 2023 20:19:51 GMT
- Title: Novel-View Acoustic Synthesis
- Authors: Changan Chen, Alexander Richard, Roman Shapovalov, Vamsi Krishna
Ithapu, Natalia Neverova, Kristen Grauman, Andrea Vedaldi
- Abstract summary: We introduce the novel-view acoustic synthesis (NVAS) task:
given the sight and sound observed at a source viewpoint, can we synthesize the sound of that scene from an unseen target viewpoint?
We propose a neural rendering approach, the Visually-Guided Acoustic Synthesis (ViGAS) network, which learns to synthesize the sound of an arbitrary point in space.
- Score: 140.1107768313269
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the novel-view acoustic synthesis (NVAS) task: given the sight
and sound observed at a source viewpoint, can we synthesize the sound of that
scene from an unseen target viewpoint? We propose a neural rendering approach:
Visually-Guided Acoustic Synthesis (ViGAS) network that learns to synthesize
the sound of an arbitrary point in space by analyzing the input audio-visual
cues. To benchmark this task, we collect two first-of-their-kind large-scale
multi-view audio-visual datasets, one synthetic and one real. We show that our
model successfully reasons about the spatial cues and synthesizes faithful
audio on both datasets. To our knowledge, this work represents the very first
formulation, dataset, and approach to solve the novel-view acoustic synthesis
task, which has exciting potential applications ranging from AR/VR to art and
design. Unlocked by this work, we believe that the future of novel-view
synthesis is in multi-modal learning from videos.
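To make the NVAS interface concrete, here is a minimal PyTorch sketch of a model that maps source-view audio, a visual scene feature, and a relative target pose to target-view audio by predicting a time-frequency mask; the module names, feature sizes, and mask-based design are illustrative assumptions, not the actual ViGAS architecture.

```python
# Minimal sketch of an NVAS-style model (assumed design, not the ViGAS code):
# re-synthesize target-viewpoint audio from source-viewpoint audio,
# a visual feature of the scene, and the relative target pose.
import torch
import torch.nn as nn

class NVASSketch(nn.Module):
    def __init__(self, n_fft=512, hop=128, vis_dim=512, pose_dim=7, hidden=256):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        freq_bins = n_fft // 2 + 1
        # Fuse visual and pose conditioning into one vector.
        self.cond = nn.Sequential(
            nn.Linear(vis_dim + pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Predict a per-(frame, frequency) mask for each output channel.
        self.mask_net = nn.Sequential(
            nn.Linear(freq_bins + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * freq_bins),  # 2 output channels (binaural)
        )

    def forward(self, src_wav, vis_feat, rel_pose):
        # src_wav: (B, T) mono source audio; vis_feat: (B, vis_dim); rel_pose: (B, pose_dim)
        window = torch.hann_window(self.n_fft, device=src_wav.device)
        spec = torch.stft(src_wav, self.n_fft, self.hop, window=window,
                          return_complex=True)                      # (B, F, frames)
        mag = spec.abs().transpose(1, 2)                            # (B, frames, F)
        c = self.cond(torch.cat([vis_feat, rel_pose], dim=-1))      # (B, hidden)
        c = c.unsqueeze(1).expand(-1, mag.size(1), -1)              # (B, frames, hidden)
        masks = torch.sigmoid(self.mask_net(torch.cat([mag, c], dim=-1)))
        masks = masks.view(mag.size(0), mag.size(1), 2, -1).transpose(1, 2)  # (B, 2, frames, F)
        tgt_spec = spec.unsqueeze(1) * masks.transpose(2, 3)        # mask the complex STFT
        B = src_wav.size(0)
        tgt = torch.istft(tgt_spec.reshape(B * 2, *spec.shape[1:]), self.n_fft,
                          self.hop, window=window, length=src_wav.size(1))
        return tgt.view(B, 2, -1)                                   # (B, 2, T) target-view estimate
```

This toy model only fixes the inputs and outputs of the task; the full method reasons about the audio-visual cues described in the abstract.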
Related papers
- AV-GS: Learning Material and Geometry Aware Priors for Novel View Acoustic Synthesis [62.33446681243413]
Novel-view acoustic synthesis aims to render audio at any target viewpoint, given mono audio emitted by a sound source in a 3D scene.
Existing methods have proposed NeRF-based implicit models to exploit visual cues as a condition for synthesizing audio.
We propose a novel Audio-Visual Gaussian Splatting (AV-GS) model to characterize the entire scene environment.
Experiments validate the superiority of our AV-GS over existing alternatives on the real-world RWAVS and simulation-based SoundSpaces datasets.
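As a loose illustration of how per-Gaussian scene features might condition audio rendering (a guess at the flavor of this idea, not the AV-GS model itself), a distance-weighted pooling of learnable Gaussian features could produce a conditioning vector for an audio head:

```python
# Hypothetical sketch: pool learnable geometry/material-style features attached to
# scene Gaussians into a listener-dependent conditioning vector.
import torch
import torch.nn as nn

class GaussianAcousticContext(nn.Module):
    def __init__(self, n_gaussians, feat_dim=32, out_dim=128):
        super().__init__()
        # One learnable feature per scene Gaussian (n_gaussians must match the scene).
        self.feats = nn.Parameter(torch.randn(n_gaussians, feat_dim) * 0.01)
        self.proj = nn.Linear(feat_dim, out_dim)

    def forward(self, centers, listener_pos):
        # centers: (N, 3) Gaussian means; listener_pos: (3,) listener position.
        dist = torch.cdist(centers, listener_pos[None])[:, 0]   # (N,) distances
        w = torch.softmax(-dist, dim=0)                         # nearer Gaussians weigh more
        context = (w[:, None] * self.feats).sum(dim=0)          # pooled scene feature
        return self.proj(context)                               # conditioning vector for an audio head
```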
arXiv Detail & Related papers (2024-06-13T08:34:12Z)
- ORES: Open-vocabulary Responsible Visual Synthesis [104.7572323359984]
We formalize a new task, Open-vocabulary Responsible Visual Synthesis (ORES), where the synthesis model is able to avoid forbidden visual concepts.
To address this problem, we present a Two-stage Intervention (TIN) framework.
By introducing 1) rewriting with a learnable instruction through a large-scale language model (LLM) and 2) synthesizing with prompt intervention on a diffusion model, the framework synthesizes images that avoid the forbidden concepts while following the user's query as closely as possible.
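A hedged sketch of such a two-stage pipeline is below; `rewrite_with_llm` and `denoise_step` are hypothetical stand-ins for a real LLM call and a real diffusion sampler step, and the prompt-switch point is an assumption.

```python
# Sketch of a "rewrite then intervene" pipeline in the spirit of a two-stage intervention.
from typing import Callable

def responsible_synthesis(user_query: str,
                          forbidden_concept: str,
                          rewrite_with_llm: Callable[[str], str],
                          denoise_step: Callable[[object, str, int], object],
                          init_latent: object,
                          num_steps: int = 50,
                          switch_at: int = 10):
    # Stage 1: ask an LLM to rewrite the query so the forbidden concept disappears
    # while preserving the rest of the user's intent.
    instruction = (f"Rewrite the prompt '{user_query}' so that it does not depict "
                   f"'{forbidden_concept}' but keeps everything else.")
    safe_prompt = rewrite_with_llm(instruction)

    # Stage 2: prompt intervention during sampling -- early steps follow the original
    # query to keep its overall content, later steps follow the rewritten safe prompt.
    latent = init_latent
    for t in range(num_steps):
        prompt = user_query if t < switch_at else safe_prompt
        latent = denoise_step(latent, prompt, t)
    return latent
```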
arXiv Detail & Related papers (2023-08-26T06:47:34Z)
- Diff-TTSG: Denoising probabilistic integrated speech and gesture synthesis [19.35266496960533]
We present the first diffusion-based probabilistic model, called Diff-TTSG, that jointly learns to synthesise speech and gestures together.
We describe a set of careful uni- and multi-modal subjective tests for evaluating integrated speech and gesture synthesis systems.
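For illustration, a generic denoising-diffusion training step over a concatenated speech (mel-spectrogram) and gesture sequence looks roughly as follows; the real Diff-TTSG conditioning, architecture, and noise schedule differ.

```python
# Generic DDPM objective on a joint speech(mel) + gesture sequence, for illustration only.
import torch
import torch.nn.functional as F

def ddpm_joint_loss(model, mel, gesture, alphas_cumprod):
    # mel: (B, T, n_mels); gesture: (B, T, n_joints); alphas_cumprod: (num_steps,)
    x0 = torch.cat([mel, gesture], dim=-1)                    # joint sequence (B, T, D)
    B = x0.size(0)
    t = torch.randint(0, alphas_cumprod.numel(), (B,), device=x0.device)
    a_bar = alphas_cumprod[t].view(B, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise      # forward diffusion q(x_t | x_0)
    pred = model(x_t, t)                                      # network predicts the added noise
    return F.mse_loss(pred, noise)
```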
arXiv Detail & Related papers (2023-06-15T18:02:49Z)
- Novel View Synthesis of Humans using Differentiable Rendering [50.57718384229912]
We present a new approach for synthesizing novel views of people in new poses.
Our synthesis makes use of diffuse Gaussian primitives that represent the underlying skeletal structure of a human.
Rendering these primitives results in a high-dimensional latent image, which is then transformed into an RGB image by a decoder network.
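A minimal sketch of that idea, under assumed resolutions and channel counts, rasterizes per-joint 2D Gaussians into a latent feature image and decodes it with a small CNN:

```python
# Hedged sketch: splat per-joint Gaussians into a latent image, decode to RGB.
import torch
import torch.nn as nn

class GaussianLatentRenderer(nn.Module):
    def __init__(self, n_joints=24, latent_ch=16, size=64):
        super().__init__()
        self.joint_feat = nn.Parameter(torch.randn(n_joints, latent_ch) * 0.1)
        ys, xs = torch.meshgrid(torch.linspace(0, 1, size),
                                torch.linspace(0, 1, size), indexing="ij")
        self.register_buffer("grid", torch.stack([xs, ys], dim=-1))  # (H, W, 2) pixel coords
        self.decoder = nn.Sequential(
            nn.Conv2d(latent_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, joints_2d, sigma=0.05):
        # joints_2d: (B, J, 2) projected joint positions in [0, 1] image coordinates.
        d2 = ((self.grid[None, :, :, None, :] - joints_2d[:, None, None, :, :]) ** 2).sum(-1)
        weights = torch.exp(-d2 / (2 * sigma ** 2))                   # (B, H, W, J) Gaussian footprints
        latent = torch.einsum("bhwj,jc->bchw", weights, self.joint_feat)
        return self.decoder(latent)                                   # (B, 3, H, W) RGB image
```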
arXiv Detail & Related papers (2023-03-28T10:48:33Z)
- AV-NeRF: Learning Neural Fields for Real-World Audio-Visual Scene Synthesis [61.07542274267568]
We study a new task -- real-world audio-visual scene synthesis -- and a first-of-its-kind NeRF-based approach for multimodal learning.
We propose an acoustic-aware audio generation module that integrates prior knowledge of audio propagation into NeRF.
We present a coordinate transformation module that expresses a view direction relative to the sound source, enabling the model to learn sound source-centric acoustic fields.
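The kind of sound-source-centric re-parameterization this describes can be illustrated with a small helper (a guess for illustration, not the paper's exact module):

```python
# Express the listener's query relative to the sound source (illustrative helper).
import numpy as np

def source_centric_coords(listener_pos, listener_dir, source_pos):
    """listener_pos, source_pos: (3,) world positions; listener_dir: (3,) unit view direction.

    Returns the distance to the source, the cosine between the view direction and the
    direction to the source, and the azimuth of the source in the horizontal plane.
    """
    offset = source_pos - listener_pos
    dist = np.linalg.norm(offset)
    to_source = offset / (dist + 1e-8)
    cos_view = float(np.dot(listener_dir, to_source))
    azimuth = float(np.arctan2(to_source[1], to_source[0])
                    - np.arctan2(listener_dir[1], listener_dir[0]))
    return dist, cos_view, azimuth
```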
arXiv Detail & Related papers (2023-02-04T04:17:19Z)
- Neural Synthesis of Footsteps Sound Effects with Generative Adversarial Networks [14.78990136075145]
We present a first attempt at adopting neural synthesis for footstep sound effects.
Our architectures reached realism scores as high as those of recorded samples, showing encouraging results.
arXiv Detail & Related papers (2021-10-18T20:04:46Z)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses.
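The differentiable volume-rendering step NeRF relies on is the standard quadrature along a camera ray, shown here in plain PyTorch:

```python
# Volume rendering along a ray:
# C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,  T_i = exp(-sum_{j<i} sigma_j * delta_j)
import torch

def volume_render(sigmas, colors, deltas):
    # sigmas: (N,) densities, colors: (N, 3) RGB, deltas: (N,) distances between samples.
    alphas = 1.0 - torch.exp(-sigmas * deltas)                 # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1, device=sigmas.device), 1.0 - alphas + 1e-10])[:-1], dim=0)
    weights = trans * alphas                                   # contribution of each sample
    return (weights[:, None] * colors).sum(dim=0)              # rendered RGB for the ray
```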
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
- Unsupervised Learning of Audio Perception for Robotics Applications: Learning to Project Data to T-SNE/UMAP space [2.8935588665357077]
This paper builds on key ideas to develop a perception of touch sounds without access to any ground-truth data.
We show how ideas from classical signal processing can be leveraged to collect large amounts of data for any sound of interest with high precision.
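One plausible way to realize the "project to UMAP space" step is to pool log-mel features per clip and embed them with umap-learn; the feature choice and time pooling are assumptions for illustration:

```python
# Embed a set of audio clips into a 2-D UMAP space from pooled log-mel features.
import numpy as np
import librosa
import umap

def embed_clips(wavs, sr=16000, n_mels=64):
    # wavs: list of 1-D numpy arrays (audio clips at the same sample rate).
    feats = []
    for y in wavs:
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
        logmel = librosa.power_to_db(mel)
        feats.append(logmel.mean(axis=1))                  # average over time -> (n_mels,)
    X = np.stack(feats)                                    # (num_clips, n_mels)
    return umap.UMAP(n_components=2).fit_transform(X)      # (num_clips, 2) embedding
```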
arXiv Detail & Related papers (2020-02-10T20:33:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.