Quantum-Inspired Genetic Algorithm for Robust Source Separation in Smart City Acoustics
- URL: http://arxiv.org/abs/2504.07345v1
- Date: Thu, 10 Apr 2025 00:05:35 GMT
- Title: Quantum-Inspired Genetic Algorithm for Robust Source Separation in Smart City Acoustics
- Authors: Minh K. Quan, Mayuri Wijayasundara, Sujeeva Setunge, Pubudu N. Pathirana
- Abstract summary: This paper introduces a novel Quantum-Inspired Genetic Algorithm (p-QIGA) for source separation. p-QIGA draws inspiration from quantum information theory to enhance acoustic scene analysis in smart cities. Experimental results show that the p-QIGA achieves accuracy comparable to state-of-the-art methods.
- Score: 1.3045945456375774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The cacophony of urban sounds presents a significant challenge for smart city applications that rely on accurate acoustic scene analysis. Effectively analyzing these complex soundscapes, often characterized by overlapping sound sources, diverse acoustic events, and unpredictable noise levels, requires precise source separation. This task becomes more complicated when only limited training data is available. This paper introduces a novel Quantum-Inspired Genetic Algorithm (p-QIGA) for source separation, drawing inspiration from quantum information theory to enhance acoustic scene analysis in smart cities. By leveraging quantum superposition for efficient solution space exploration and entanglement to handle correlated sources, p-QIGA achieves robust separation even with limited data. These quantum-inspired concepts are integrated into a genetic algorithm framework to optimize source separation parameters. The effectiveness of our approach is demonstrated on two datasets: the TAU Urban Acoustic Scenes 2020 Mobile dataset, representing typical urban soundscapes, and the Silent Cities dataset, capturing quieter urban environments during the COVID-19 pandemic. Experimental results show that the p-QIGA achieves accuracy comparable to state-of-the-art methods while exhibiting superior resilience to noise and limited training data, achieving up to 8.2 dB signal-to-distortion ratio (SDR) in noisy environments and outperforming baseline methods by up to 2 dB with only 10% of the training data. This research highlights the potential of p-QIGA to advance acoustic signal processing in smart cities, particularly for noise pollution monitoring and acoustic surveillance.
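The abstract describes the core mechanics of a quantum-inspired genetic algorithm: candidate solutions are kept in superposition-like probabilistic states, measured into concrete individuals, and updated toward the best solution found. The sketch below is a minimal, generic QIGA on a toy objective, not the paper's p-QIGA; the fitness function, rotation step size, and clamping bounds are all illustrative assumptions.

```python
import math
import random

# Minimal quantum-inspired GA sketch (generic QIGA, NOT the paper's p-QIGA).
# Each "qubit" holds an angle theta with P(bit = 1) = sin(theta)^2, so the
# population is represented probabilistically ("superposition"). Measuring
# yields concrete bitstrings; a rotation-gate-style update nudges each angle
# toward the best individual seen so far.

def fitness(bits):
    # Toy OneMax objective; the paper instead optimizes separation parameters.
    return sum(bits)

def measure(thetas):
    # Collapse the probabilistic state into a concrete individual.
    return [1 if random.random() < math.sin(t) ** 2 else 0 for t in thetas]

def qiga(n_bits=16, pop=20, gens=50, delta=0.05 * math.pi, seed=0):
    random.seed(seed)
    thetas = [math.pi / 4] * n_bits  # equal superposition: P(1) = 0.5
    best, best_fit = None, -1
    for _ in range(gens):
        for _ in range(pop):
            ind = measure(thetas)
            f = fitness(ind)
            if f > best_fit:
                best, best_fit = ind, f
        # Rotate each qubit toward the corresponding bit of the best individual,
        # clamping angles so probabilities stay in [0, 1].
        for i, b in enumerate(best):
            thetas[i] += delta if b == 1 else -delta
            thetas[i] = min(max(thetas[i], 0.0), math.pi / 2)
    return best, best_fit
```

The probabilistic representation lets a small population cover a large solution space, which is consistent with the paper's claim of robustness under limited training data; a full implementation would also need an entanglement-style coupling between correlated parameters, which this sketch omits.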
Related papers
- A Lightweight and Real-Time Binaural Speech Enhancement Model with Spatial Cues Preservation [19.384404014248762]
Binaural speech enhancement aims to improve the speech quality and intelligibility of noisy signals received by hearing devices. Existing methods often compromise between noise reduction (NR) capacity and spatial cue preservation (SCP) accuracy, and incur high computational demand in complex acoustic scenes. We present a learning-based lightweight complex convolutional network (LBCCN), which excels in NR by filtering low-frequency bands and keeping the rest.
arXiv Detail & Related papers (2024-09-19T03:52:50Z) - LoRaWAN Based Dynamic Noise Mapping with Machine Learning for Urban Noise Enforcement [8.010966370223985]
Static noise maps depicting long-term noise levels over wide areas are valuable urban planning assets for municipalities.
Non-traffic noise sources with transient behavior, which are a frequent subject of complaints, are usually ignored by static maps.
We propose here a dynamic noise mapping approach using the data collected via low-power wide-area network (LPWAN) based internet of things (IoT) infrastructure.
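Building a noise map from distributed IoT sensor readings requires aggregating sound pressure levels correctly: decibel values must be averaged in the energy domain, not arithmetically. The sketch below shows the standard equivalent continuous sound level (L_eq) computation; it is a generic illustration of the aggregation step, not code from the cited paper.

```python
import math

# Hedged sketch: aggregate per-sensor SPL samples (in dB) into an equivalent
# continuous sound level L_eq, the standard quantity behind noise maps.
# L_eq = 10 * log10( (1/N) * sum(10^(L_i / 10)) )

def leq(levels_db):
    """Energy-domain average of sound levels in dB."""
    energies = [10 ** (level / 10) for level in levels_db]
    return 10 * math.log10(sum(energies) / len(energies))
```

Because the average is taken over linear energies, a single loud transient dominates the result: averaging 50 dB and 70 dB gives about 67 dB, not 60 dB, which is exactly why transient non-traffic sources matter for dynamic maps.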
arXiv Detail & Related papers (2024-07-30T21:40:12Z) - AV-GS: Learning Material and Geometry Aware Priors for Novel View Acoustic Synthesis [62.33446681243413]
Novel view acoustic synthesis aims to render audio at any target viewpoint, given mono audio emitted by a sound source in a 3D scene. Existing methods have proposed NeRF-based implicit models to exploit visual cues as a condition for synthesizing audio. We propose a novel Audio-Visual Gaussian Splatting (AV-GS) model to characterize the entire scene environment. Experiments validate the superiority of our AV-GS over existing alternatives on the real-world RWAS and simulation-based SoundSpaces datasets.
arXiv Detail & Related papers (2024-06-13T08:34:12Z) - Real Acoustic Fields: An Audio-Visual Room Acoustics Dataset and Benchmark [65.79402756995084]
Real Acoustic Fields (RAF) is a new dataset that captures real acoustic room data from multiple modalities.
RAF is the first dataset to provide densely captured room acoustic data.
arXiv Detail & Related papers (2024-03-27T17:59:56Z) - Combating Bilateral Edge Noise for Robust Link Prediction [56.43882298843564]
We propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse.
Two instantiations, RGIB-SSL and RGIB-REP, are explored to leverage the merits of different methodologies.
Experiments on six datasets and three GNNs with diverse noisy scenarios verify the effectiveness of our RGIB instantiations.
arXiv Detail & Related papers (2023-11-02T12:47:49Z) - Neural Acoustic Context Field: Rendering Realistic Room Impulse Response With Neural Fields [61.07542274267568]
This letter proposes a novel Neural Acoustic Context Field approach, called NACF, to parameterize an audio scene.
Driven by the unique properties of RIR, we design a temporal correlation module and multi-scale energy decay criterion.
Experimental results show that NACF outperforms existing field-based methods by a notable margin.
arXiv Detail & Related papers (2023-09-27T19:50:50Z) - Self-Supervised Visual Acoustic Matching [63.492168778869726]
Acoustic matching aims to re-synthesize an audio clip to sound as if it were recorded in a target acoustic environment.
We propose a self-supervised approach to visual acoustic matching where training samples include only the target scene image and audio.
Our approach jointly learns to disentangle room acoustics and re-synthesize audio into the target environment, via a conditional GAN framework and a novel metric.
arXiv Detail & Related papers (2023-07-27T17:59:59Z) - Realistic Noise Synthesis with Diffusion Models [44.404059914652194]
Deep denoising models require extensive real-world training data, which is challenging to acquire. We propose a novel Realistic Noise Synthesis Diffusor (RNSD) method using diffusion models to address these challenges.
arXiv Detail & Related papers (2023-05-23T12:56:01Z) - Exploring the Noise Resilience of Successor Features and Predecessor Features Algorithms in One and Two-Dimensional Environments [0.0]
This study delves into the dynamics of Successor Feature (SF) and Predecessor Feature (PF) algorithms within noisy environments.
SF exhibited superior adaptability, maintaining robust performance across varying noise levels.
This research contributes to the bridging discourse between computational neuroscience and reinforcement learning.
arXiv Detail & Related papers (2023-04-14T02:06:22Z) - Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called the Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
We integrate four SOTA robust loss functions with our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in both noise tolerance and performance.
arXiv Detail & Related papers (2023-01-18T04:54:58Z) - Urban Rhapsody: Large-scale exploration of urban soundscapes [12.997538969557649]
Noise is one of the primary quality-of-life issues in urban environments.
Low-cost sensors can be deployed to monitor ambient noise levels at high temporal resolutions.
The amount of data they produce and the complexity of these data pose significant analytical challenges.
We propose Urban Rhapsody, a framework that combines state-of-the-art audio representation, machine learning, and visual analytics.
arXiv Detail & Related papers (2022-05-25T22:02:36Z) - Using deep learning to understand and mitigate the qubit noise environment [0.0]
We propose to address the challenge of extracting accurate noise spectra from time-dynamics measurements on qubits.
We demonstrate a neural network based methodology that allows for extraction of the noise spectrum associated with any qubit surrounded by an arbitrary bath.
Our results can be applied to a wide range of qubit platforms and provide a framework for improving qubit performance.
arXiv Detail & Related papers (2020-05-03T17:13:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.