Surface impedance inference via neural fields and sparse acoustic data obtained by a compact array
- URL: http://arxiv.org/abs/2602.11425v1
- Date: Wed, 11 Feb 2026 22:56:46 GMT
- Title: Surface impedance inference via neural fields and sparse acoustic data obtained by a compact array
- Authors: Yuanxin Xia, Xinyan Li, Matteo Calafà, Allan P. Engsig-Karup, Cheol-Ho Jeong
- Abstract summary: We propose a physics-informed neural field that reconstructs local, near-surface broadband sound fields from sparse pressure samples. A parallel, multi-frequency architecture enables broadband impedance retrieval within runtimes on the order of seconds to minutes. Here, we show that this approach offers a robust means of characterizing \emph{in-situ} boundary conditions for architectural and automotive acoustics.
- Score: 2.0687656230706155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standardized laboratory characterizations for absorbing materials rely on idealized sound field assumptions, which deviate substantially from real-life conditions. Consequently, \emph{in-situ} acoustic characterization has become essential for accurate diagnosis and virtual prototyping. We propose a physics-informed neural field that reconstructs local, near-surface broadband sound fields from sparse pressure samples to directly infer complex surface impedance. A parallel, multi-frequency architecture enables broadband impedance retrieval within runtimes on the order of seconds to minutes. To validate the method, we developed a compact microphone array with low hardware complexity. Numerical verifications and laboratory experiments demonstrate accurate impedance retrieval with a small number of sensors under realistic conditions. We further showcase the approach in a vehicle cabin to provide practical guidance on measurement locations that avoid strong interference. Here, we show that this approach offers a robust means of characterizing \emph{in-situ} boundary conditions for architectural and automotive acoustics.
Related papers
- Reciprocal Latent Fields for Precomputed Sound Propagation [0.6474760227870046]
We introduce Reciprocal Latent Fields (RLF), a memory-efficient framework for encoding and predicting acoustic parameters. We show that RLF maintains replication quality while reducing the memory footprint by several orders of magnitude.
arXiv Detail & Related papers (2026-02-06T18:31:11Z) - Gaussian Process Regression of Steering Vectors With Physics-Aware Deep Composite Kernels for Augmented Listening [0.7778724782015985]
This paper investigates continuous representations of steering vectors over frequency and the positions of microphone and source for augmented listening. We propose a physics-aware composite kernel that models the directional incoming waves and the subsequent scattering effect.
arXiv Detail & Related papers (2025-08-20T09:29:14Z) - Seismic Acoustic Impedance Inversion Framework Based on Conditional Latent Generative Diffusion Model [17.677087517318988]
We propose a novel seismic acoustic impedance inversion framework based on a conditional latent generative diffusion model. We show that the proposed method achieves high inversion accuracy and strong generalization capability within only a few diffusion steps.
arXiv Detail & Related papers (2025-06-16T14:19:40Z) - EvMic: Event-based Non-contact sound recovery from effective spatial-temporal modeling [69.96729022219117]
When sound waves hit an object, they induce vibrations that produce high-frequency and subtle visual changes. Recent advances in event camera hardware show good potential for its application in visual sound recovery. We propose a novel pipeline for non-contact sound recovery, fully utilizing spatial-temporal information from the event stream.
arXiv Detail & Related papers (2025-04-03T08:51:17Z) - Real Acoustic Fields: An Audio-Visual Room Acoustics Dataset and Benchmark [65.79402756995084]
Real Acoustic Fields (RAF) is a new dataset that captures real acoustic room data from multiple modalities.
RAF is the first dataset to provide densely captured room acoustic data.
arXiv Detail & Related papers (2024-03-27T17:59:56Z) - Neural Acoustic Context Field: Rendering Realistic Room Impulse Response With Neural Fields [61.07542274267568]
This letter proposes a novel Neural Acoustic Context Field approach, called NACF, to parameterize an audio scene.
Driven by the unique properties of RIR, we design a temporal correlation module and multi-scale energy decay criterion.
Experimental results show that NACF outperforms existing field-based methods by a notable margin.
arXiv Detail & Related papers (2023-09-27T19:50:50Z) - Deep Impulse Responses: Estimating and Parameterizing Filters with Deep Networks [76.830358429947]
Impulse response estimation in high noise and in-the-wild settings is a challenging problem.
We propose a novel framework for parameterizing and estimating impulse responses based on recent advances in neural representation learning.
arXiv Detail & Related papers (2022-02-07T18:57:23Z) - Near field Acoustic Holography on arbitrary shapes using Convolutional Neural Network [9.1673176404097]
Near-field Acoustic Holography is a well-known problem aimed at estimating the vibrational velocity field of a structure by means of acoustic measurements.
We propose a NAH technique based on a Convolutional Neural Network (CNN).
We validate the proposed method by comparing the estimates with the synthesized ground truth and with a state-of-the-art technique.
arXiv Detail & Related papers (2021-03-31T09:41:11Z) - Cross-domain Adaptation with Discrepancy Minimization for Text-independent Forensic Speaker Verification [61.54074498090374]
This study introduces a CRSS-Forensics audio dataset collected in multiple acoustic environments.
We pre-train a CNN-based network using the VoxCeleb data, followed by an approach which fine-tunes part of the high-level network layers with clean speech from CRSS-Forensics.
arXiv Detail & Related papers (2020-09-05T02:54:33Z) - Temporal-Spatial Neural Filter: Direction Informed End-to-End Multi-channel Target Speech Separation [66.46123655365113]
Target speech separation refers to extracting the target speaker's speech from mixed signals.
Two main challenges are the complex acoustic environment and the real-time processing requirement.
We propose a temporal-spatial neural filter, which directly estimates the target speech waveform from multi-speaker mixture.
arXiv Detail & Related papers (2020-01-02T11:12:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.