Environment Transfer for Distributed Systems
- URL: http://arxiv.org/abs/2101.01863v1
- Date: Wed, 6 Jan 2021 04:27:24 GMT
- Title: Environment Transfer for Distributed Systems
- Authors: Chunheng Jiang, Jae-wook Ahn, Nirmit Desai
- Abstract summary: We propose a method to extend a technique that has been used for transferring acoustic style textures between audio data.
The method transfers audio signatures between environments for distributed acoustic data augmentation.
This paper devises metrics to evaluate the generated acoustic data, based on classification accuracy and content preservation.
- Score: 5.8010446129208155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collecting a sufficient amount of data that represents diverse
acoustic environmental attributes is a critical problem for distributed
acoustic machine learning. Several audio data augmentation techniques have
been introduced to address this problem, but they tend to remain simple
manipulations of existing data and are insufficient to cover the variability
of real environments. We propose a method that extends a technique previously
used for transferring acoustic style textures between audio recordings so
that it transfers audio signatures between environments for distributed
acoustic data augmentation. This paper also devises metrics to evaluate the
generated acoustic data based on classification accuracy and content
preservation. A series of experiments conducted on the UrbanSound8K dataset
shows that the proposed method generates better audio data with transferred
environmental features while preserving content features.
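Although the abstract does not spell out the transfer procedure, acoustic style-texture transfer is commonly formulated as style transfer in the spectrogram domain, with a content loss on CNN feature maps and a style loss on their Gram matrices. The following is a minimal sketch of that general recipe only; the single random-weight convolution, loss weights, and optimizer are assumptions and not the authors' implementation.

```python
# Minimal sketch of spectrogram-domain style (environment) transfer.
# Assumptions: a single random-weight 1-D convolution as the feature
# extractor, MSE content/style losses, and L-BFGS optimization; the
# paper's actual network, losses, and settings may differ.
import torch
import torch.nn.functional as F

def features(spec, weight):
    # spec: (1, freq_bins, time_frames) log-magnitude spectrogram
    return F.relu(F.conv1d(spec, weight))            # (1, channels, frames')

def gram(feat):
    # Channel-channel correlations summarize the time-invariant
    # "texture" (environment signature) of the spectrogram.
    f = feat.squeeze(0)                               # (channels, frames')
    return (f @ f.t()) / f.shape[1]

def transfer(content_spec, env_spec, steps=20, style_weight=1e3):
    torch.manual_seed(0)
    weight = torch.randn(512, content_spec.shape[1], 11) * 0.01
    target_content = features(content_spec, weight).detach()
    target_gram = gram(features(env_spec, weight)).detach()

    out = content_spec.clone().requires_grad_(True)
    opt = torch.optim.LBFGS([out])

    def closure():
        opt.zero_grad()
        feat = features(out, weight)
        loss = F.mse_loss(feat, target_content) \
             + style_weight * F.mse_loss(gram(feat), target_gram)
        loss.backward()
        return loss

    for _ in range(steps):
        opt.step(closure)
    # The optimized spectrogram still has to be inverted back to a
    # waveform (e.g., with Griffin-Lim) before it can augment a dataset.
    return out.detach()
```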
Related papers
- A Novel Score-CAM based Denoiser for Spectrographic Signature Extraction without Ground Truth [0.0]
This paper develops a novel Score-CAM based denoiser to extract an object's signature from noisy spectrographic data.
In particular, this paper proposes a novel generative adversarial network architecture for learning and producing spectrographic training data.
arXiv Detail & Related papers (2024-10-28T21:40:46Z)
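For orientation, the sketch below shows vanilla Score-CAM applied to a spectrogram classifier, with the resulting saliency map reused as a soft mask on the input; the model, the choice of convolutional layer, and the masking step are illustrative assumptions, and the paper's denoiser and GAN components are not reproduced here.

```python
# Minimal sketch of vanilla Score-CAM reused as a soft spectrogram mask.
# `model` and `acts` (activations of a chosen conv layer, e.g. captured
# with a forward hook) are placeholders for illustration only.
import torch
import torch.nn.functional as F

@torch.no_grad()
def score_cam_mask(model, acts, spec, target_class):
    # acts: (1, C, h, w) activation maps; spec: (1, 1, H, W) spectrogram
    H, W = spec.shape[-2:]
    maps = F.interpolate(acts, size=(H, W), mode="bilinear",
                         align_corners=False)
    weights = []
    for c in range(maps.shape[1]):
        m = maps[:, c:c + 1]
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)   # scale to [0, 1]
        logits = model(spec * m)                          # masked forward pass
        weights.append(F.softmax(logits, dim=1)[0, target_class])
    w = torch.stack(weights).view(1, -1, 1, 1)
    cam = F.relu((w * maps).sum(dim=1, keepdim=True))     # weighted combination
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return spec * cam   # attenuate bins the class score does not depend on
```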
- Effective Noise-aware Data Simulation for Domain-adaptive Speech Enhancement Leveraging Dynamic Stochastic Perturbation [25.410770364140856]
Cross-domain speech enhancement (SE) is often faced with severe challenges due to the scarcity of noise and background information in an unseen target domain.
This study puts forward a novel data simulation method to address this issue, leveraging noise-extractive techniques and generative adversarial networks (GANs).
We introduce the notion of dynamic perturbation, which can inject controlled perturbations into the noise embeddings during inference.
arXiv Detail & Related papers (2024-09-03T02:29:01Z)
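One minimal reading of the dynamic perturbation idea above is to add annealed Gaussian noise to the extracted noise embedding at inference time; the schedule and shapes below are assumptions rather than the paper's formulation.

```python
# Hypothetical "dynamic perturbation" of a noise embedding: Gaussian
# noise whose scale decays over inference steps. Schedule and embedding
# layout are assumptions.
import torch

def perturb_noise_embedding(z, step, total_steps, max_scale=0.1):
    # z: noise embedding extracted from target-domain audio, shape (B, D)
    scale = max_scale * (1.0 - step / total_steps)   # decays toward zero
    return z + scale * torch.randn_like(z)
```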
- ActiveRIR: Active Audio-Visual Exploration for Acoustic Environment Modeling [57.1025908604556]
An environment acoustic model represents how sound is transformed by the physical characteristics of an indoor environment.
We propose active acoustic sampling, a new task for efficiently building an environment acoustic model of an unmapped environment.
We introduce ActiveRIR, a reinforcement learning policy that leverages information from audio-visual sensor streams to guide agent navigation and determine optimal acoustic data sampling positions.
arXiv Detail & Related papers (2024-04-24T21:30:01Z)
- Real Acoustic Fields: An Audio-Visual Room Acoustics Dataset and Benchmark [65.79402756995084]
Real Acoustic Fields (RAF) is a new dataset that captures real acoustic room data from multiple modalities.
RAF is the first dataset to provide densely captured room acoustic data.
arXiv Detail & Related papers (2024-03-27T17:59:56Z)
- Self-Supervised Visual Acoustic Matching [63.492168778869726]
Acoustic matching aims to re-synthesize an audio clip to sound as if it were recorded in a target acoustic environment.
We propose a self-supervised approach to visual acoustic matching where training samples include only the target scene image and audio.
Our approach jointly learns to disentangle room acoustics and re-synthesize audio into the target environment, via a conditional GAN framework and a novel metric.
arXiv Detail & Related papers (2023-07-27T17:59:59Z)
- Discriminative Singular Spectrum Classifier with Applications on Bioacoustic Signal Recognition [67.4171845020675]
We present a bioacoustic signal classifier equipped with a discriminative mechanism to extract useful features for analysis and classification efficiently.
Unlike current bioacoustic recognition methods, which are task-oriented, the proposed model relies on transforming the input signals into vector subspaces.
The validity of the proposed method is verified using three challenging bioacoustic datasets containing anuran, bee, and mosquito species.
arXiv Detail & Related papers (2021-03-18T11:01:21Z)
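As a rough illustration of mapping a signal to a vector subspace, singular spectrum analysis stacks lagged windows of the signal into a trajectory (Hankel) matrix and keeps the leading left singular vectors as an orthonormal basis; the window length, rank, and the subspace distance used here are assumptions, not the paper's exact construction.

```python
# Sketch: signal -> trajectory matrix -> truncated SVD -> subspace basis.
# Window length, rank, and the principal-angle distance are assumptions.
import numpy as np

def signal_subspace(x, window=64, rank=8):
    # Trajectory (Hankel) matrix: each column is a lagged window of x.
    n_cols = len(x) - window + 1
    traj = np.stack([x[i:i + window] for i in range(n_cols)], axis=1)
    u, _, _ = np.linalg.svd(traj, full_matrices=False)
    return u[:, :rank]                        # (window, rank) orthonormal basis

def subspace_distance(u1, u2):
    # Projection distance from the principal angles between two subspaces;
    # a nearest-subspace rule over class exemplars could then classify.
    cosines = np.linalg.svd(u1.T @ u2, compute_uv=False)
    return np.sqrt(np.sum(1.0 - np.clip(cosines, 0.0, 1.0) ** 2))
```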
- Ensemble of Discriminators for Domain Adaptation in Multiple Sound Source 2D Localization [7.564344795030588]
This paper introduces an ensemble of discriminators that improves the accuracy of a domain adaptation technique for the localization of multiple sound sources.
Recording and labeling such datasets is very costly, especially because data needs to be diverse enough to cover different acoustic conditions.
arXiv Detail & Related papers (2020-12-10T09:17:29Z)
- Cross-domain Adaptation with Discrepancy Minimization for Text-independent Forensic Speaker Verification [61.54074498090374]
This study introduces a CRSS-Forensics audio dataset collected in multiple acoustic environments.
We pre-train a CNN-based network using the VoxCeleb data, followed by an approach which fine-tunes part of the high-level network layers with clean speech from CRSS-Forensics.
arXiv Detail & Related papers (2020-09-05T02:54:33Z)
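The adaptation strategy summarized above (pre-train on VoxCeleb, then fine-tune only part of the high-level layers on clean in-domain speech) can be sketched generically as freezing the early feature extractor; the layer names and optimizer settings below are placeholders, not the paper's configuration.

```python
# Sketch of partial fine-tuning: freeze early layers, train only the
# last block and classifier head. Layer-name prefixes are assumptions
# (e.g. a ResNet-style backbone), not the paper's architecture.
import torch.nn as nn
import torch.optim as optim

def prepare_for_finetuning(model: nn.Module,
                           trainable_prefixes=("layer4", "fc")):
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(trainable_prefixes)
    trainable = [p for p in model.parameters() if p.requires_grad]
    return optim.SGD(trainable, lr=1e-3, momentum=0.9)
```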
- Unsupervised Domain Adaptation for Acoustic Scene Classification Using Band-Wise Statistics Matching [69.24460241328521]
Machine learning algorithms can be negatively affected by mismatches between training (source) and test (target) data distributions.
We propose an unsupervised domain adaptation method that consists of aligning the first- and second-order sample statistics of each frequency band of target-domain acoustic scenes to the ones of the source-domain training dataset.
We show that the proposed method outperforms the state-of-the-art unsupervised methods found in the literature in terms of both source- and target-domain classification accuracy.
arXiv Detail & Related papers (2020-04-30T23:56:05Z)
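A minimal sketch of band-wise first- and second-order statistics matching, assuming spectrograms laid out as (frequency bands, time frames) and simple per-band mean/standard-deviation alignment; the paper's exact normalization may differ.

```python
# Sketch: standardize each frequency band of a target-domain spectrogram,
# then rescale it to the source-domain band statistics. Axis layout and
# the epsilon guard are assumptions.
import numpy as np

def match_band_statistics(target_spec, source_mean, source_std, eps=1e-8):
    # target_spec: (bands, frames); source_mean, source_std: (bands,)
    t_mean = target_spec.mean(axis=1, keepdims=True)
    t_std = target_spec.std(axis=1, keepdims=True)
    normalized = (target_spec - t_mean) / (t_std + eps)
    return normalized * source_std[:, None] + source_mean[:, None]
```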