Urban Space Insights Extraction using Acoustic Histogram Information
- URL: http://arxiv.org/abs/2012.05488v2
- Date: Mon, 14 Dec 2020 06:02:50 GMT
- Title: Urban Space Insights Extraction using Acoustic Histogram Information
- Authors: Nipun Wijerathne, Billy Pik Lik Lau, Benny Kai Kiat Ng, Chau Yuen
- Abstract summary: We study the implementation of low-cost analogue sound sensors to detect outdoor activities and estimate rainfall periods in an urban residential area.
Data from the analogue sound sensors is transmitted to the cloud every 5 minutes in histogram format; each histogram aggregates sound samples taken every 100 ms (10 Hz).
- Score: 13.808053718325628
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Urban data mining is a highly promising area that can enhance smart
city services and support sustainable development, particularly in tracking
urban residential activity. While existing human activity tracking systems have
demonstrated the capability to unveil hidden aspects of citizens' behavior, they
often come with a high implementation cost and require a large communication
bandwidth. In this paper, we study the implementation of low-cost analogue
sound sensors to detect outdoor activities and estimate rainfall periods in an
urban residential area. Data from the analogue sound sensors is transmitted to
the cloud every 5 minutes in histogram format; each histogram aggregates sound
samples taken every 100 ms (10 Hz). We then use wavelet transformation (WT) and
principal component analysis (PCA) to generate a more robust and consistent
feature set from the histogram. Finally, we perform unsupervised clustering and
examine the characteristics of each cluster to identify outdoor residential
activities. In addition, on-site validation was conducted to show the
effectiveness of our approach.
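The abstract outlines a pipeline of 5-minute sound histograms, wavelet transformation, PCA, and unsupervised clustering. The minimal sketch below shows one way such a pipeline could be assembled; the histogram bin edges, the db4 wavelet and decomposition level, the number of principal components, the cluster count, and the helper names (to_histogram, wavelet_features) are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch (not the authors' code): 10 Hz analogue sound samples are
# aggregated into a histogram every 5 minutes, then wavelet-transformed,
# reduced with PCA, and clustered without labels.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def to_histogram(samples_5min, bins=np.linspace(0.0, 1.0, 33)):
    """Aggregate one 5-minute window of 10 Hz sound-level samples
    (5 * 60 * 10 = 3000 readings) into a fixed-bin histogram."""
    counts, _ = np.histogram(samples_5min, bins=bins)
    return counts / counts.sum()           # normalise so windows are comparable

def wavelet_features(hist, wavelet="db4", level=2):
    """Wavelet-transform a histogram and keep only the approximation
    coefficients as a smoother, noise-robust representation."""
    coeffs = pywt.wavedec(hist, wavelet=wavelet, level=level)
    return coeffs[0]

# Example pipeline over one simulated day (288 five-minute windows).
rng = np.random.default_rng(0)
raw = rng.random((288, 3000))               # placeholder 10 Hz sound levels
hists = np.array([to_histogram(w) for w in raw])
feats = np.array([wavelet_features(h) for h in hists])

feats = PCA(n_components=3).fit_transform(feats)     # compact feature set
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)
print(labels[:12])                          # cluster id per 5-minute window
```

Keeping only the wavelet approximation coefficients before PCA is one plausible reading of the "more robust and consistent feature set" the abstract mentions; the paper's actual feature construction and choice of clustering algorithm may differ.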
Related papers
- Scaling Wearable Foundation Models [54.93979158708164]
We investigate the scaling properties of sensor foundation models across compute, data, and model size.
Using a dataset of up to 40 million hours of in-situ heart rate, heart rate variability, electrodermal activity, accelerometer, skin temperature, and altimeter per-minute data from over 165,000 people, we create LSM.
Our results establish scaling laws for LSM on tasks such as imputation and extrapolation, both across time and across sensor modalities.
arXiv Detail & Related papers (2024-10-17T15:08:21Z) - Identifying every building's function in large-scale urban areas with multi-modality remote-sensing data [5.18540804614798]
This study proposes a semi-supervised framework to identify every building's function in large-scale urban areas.
Optical images, building height, and nighttime-light data are collected to describe the morphological attributes of buildings.
Results are evaluated by 20,000 validation points and statistical survey reports from the government.
arXiv Detail & Related papers (2024-05-08T15:32:20Z) - ActiveRIR: Active Audio-Visual Exploration for Acoustic Environment Modeling [57.1025908604556]
An environment acoustic model represents how sound is transformed by the physical characteristics of an indoor environment.
We propose active acoustic sampling, a new task for efficiently building an environment acoustic model of an unmapped environment.
We introduce ActiveRIR, a reinforcement learning policy that leverages information from audio-visual sensor streams to guide agent navigation and determine optimal acoustic data sampling positions.
arXiv Detail & Related papers (2024-04-24T21:30:01Z) - Unified Data Management and Comprehensive Performance Evaluation for
Urban Spatial-Temporal Prediction [Experiment, Analysis & Benchmark] [78.05103666987655]
This work addresses challenges in accessing and utilizing diverse urban spatial-temporal datasets.
We introduce atomic files, a unified storage format designed for urban spatial-temporal big data, and validate its effectiveness on 40 diverse datasets.
We conduct extensive experiments using diverse models and datasets, establishing a performance leaderboard and identifying promising research directions.
arXiv Detail & Related papers (2023-08-24T16:20:00Z) - Urban Rhapsody: Large-scale exploration of urban soundscapes [12.997538969557649]
Noise is one of the primary quality-of-life issues in urban environments.
Low-cost sensors can be deployed to monitor ambient noise levels at high temporal resolutions.
The amount of data they produce and the complexity of these data pose significant analytical challenges.
We propose Urban Rhapsody, a framework that combines state-of-the-art audio representation, machine learning, and visual analytics.
arXiv Detail & Related papers (2022-05-25T22:02:36Z) - Benchmarking high-fidelity pedestrian tracking systems for research,
real-time monitoring and crowd control [55.41644538483948]
High-fidelity pedestrian tracking in real-life conditions has been an important tool in fundamental crowd dynamics research.
As this technology advances, it is becoming increasingly useful also in society.
To successfully employ pedestrian tracking techniques in research and technology, it is crucial to validate and benchmark them for accuracy.
We present and discuss a benchmark suite, towards an open standard in the community, for privacy-respectful pedestrian tracking techniques.
arXiv Detail & Related papers (2021-08-26T11:45:26Z) - Spatio-temporal-spectral-angular observation model that integrates
observations from UAV and mobile mapping vehicle for better urban mapping [10.670246699899023]
In a complex urban scene, observations from a single sensor contain voids and fail to describe urban objects in a comprehensive manner.
We propose a spatio-temporal-spectral-angular observation model to integrate observations from UAV and mobile mapping vehicle, realizing a joint, coordinated observation from both air and ground.
arXiv Detail & Related papers (2021-08-24T02:58:12Z) - Detection, Tracking, and Counting Meets Drones in Crowds: A Benchmark [97.07865343576361]
We construct a benchmark with a new large-scale drone-captured dataset, named DroneCrowd.
We annotate 20,800 person trajectories with 4.8 million heads and several video-level attributes.
We design the Space-Time Neighbor-Aware Network (STNNet) as a strong baseline to solve object detection, tracking and counting jointly in dense crowds.
arXiv Detail & Related papers (2021-05-06T04:46:14Z) - Energy Aware Deep Reinforcement Learning Scheduling for Sensors
Correlated in Time and Space [62.39318039798564]
We propose a scheduling mechanism capable of taking advantage of correlated information.
The proposed mechanism is capable of determining the frequency with which sensors should transmit their updates.
We show that our solution can significantly extend the sensors' lifetime.
arXiv Detail & Related papers (2020-11-19T09:53:27Z) - Deep Learning for Surface Wave Identification in Distributed Acoustic
Sensing Data [1.7237878022600697]
We present a highly scalable and efficient approach to process real, complex DAS data.
Deep supervised learning is used to identify "useful" coherent surface waves generated by anthropogenic activity.
Our method provides interpretable patterns describing the interaction of ground-based human activities with the buried sensors.
arXiv Detail & Related papers (2020-10-15T15:53:03Z) - Understanding Crowd Behaviors in a Social Event by Passive WiFi Sensing
and Data Mining [21.343209622186606]
We propose a comprehensive data analysis framework to extract three types of patterns related to crowd behaviors in a large social event.
First, trajectories of the mobile devices are extracted from probe requests to reveal the spatial patterns of the crowds' movement.
Next, k-means and k-shape clustering algorithms are applied to extract the crowds' temporal visiting patterns by day and location.
arXiv Detail & Related papers (2020-02-05T03:36:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.