A Dataset of Images of Public Streetlights with Operational Monitoring
using Computer Vision Techniques
- URL: http://arxiv.org/abs/2203.16915v2
- Date: Fri, 1 Apr 2022 14:30:59 GMT
- Title: A Dataset of Images of Public Streetlights with Operational Monitoring
using Computer Vision Techniques
- Authors: Ioannis Mavromatis and Aleksandar Stanoev and Pietro Carnelli and
Yichao Jin and Mahesh Sooriyabandara and Aftab Khan
- Abstract summary: The dataset consists of $\sim350\textrm{k}$ images, taken from 140 UMBRELLA nodes installed in the South Gloucestershire region in the UK.
The dataset can be used to train deep neural networks and generate pre-trained models for smart city CCTV applications, smart weather detection algorithms, or street infrastructure monitoring.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A dataset of street light images is presented. Our dataset consists of
$\sim350\textrm{k}$ images, taken from 140 UMBRELLA nodes installed in the
South Gloucestershire region in the UK. Each UMBRELLA node is installed on the
pole of a lamppost and is equipped with a Raspberry Pi Camera Module v1 facing
upwards towards the sky and lamppost light bulb. Each node collects an image at
hourly intervals, 24 hours a day. The data collection spans a period of six
months.
Each image taken is logged as a single entry in the dataset along with the
Global Positioning System (GPS) coordinates of the lamppost. All entries in the
dataset have been post-processed and labelled based on the operation of the
lamppost, i.e., whether the lamppost is switched ON or OFF. The dataset can be
used to train deep neural networks and generate pre-trained models providing
feature representations for smart city CCTV applications, smart weather
detection algorithms, or street infrastructure monitoring. The dataset can be
found at \url{https://doi.org/10.5281/zenodo.6046758}.
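The entry structure described in the abstract (one image per hour, tagged with the lamppost's GPS coordinates and a post-processed ON/OFF label) can be sketched as a simple typed record. This is an illustrative sketch only: the CSV column names, file paths, node identifier, and coordinate values below are assumptions for the example, not the dataset's actual schema on Zenodo.

```python
import csv
import io
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StreetlightEntry:
    """One dataset entry: an hourly image plus lamppost metadata."""
    image_path: str
    timestamp: datetime
    latitude: float   # GPS coordinate of the lamppost
    longitude: float
    is_on: bool       # post-processed label: lamppost switched ON or OFF

def load_entries(csv_text: str) -> list[StreetlightEntry]:
    """Parse a hypothetical CSV export of the dataset into typed records."""
    entries = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        entries.append(StreetlightEntry(
            image_path=row["image"],
            timestamp=datetime.fromisoformat(row["timestamp"]),
            latitude=float(row["lat"]),
            longitude=float(row["lon"]),
            is_on=row["label"] == "ON",
        ))
    return entries

# Two made-up rows mimicking one node's hourly captures:
sample = """image,timestamp,lat,lon,label
node042/2021-06-01T01:00:00.jpg,2021-06-01T01:00:00,51.5264,-2.4728,ON
node042/2021-06-01T13:00:00.jpg,2021-06-01T13:00:00,51.5264,-2.4728,OFF
"""
entries = load_entries(sample)
```

Records in this shape map directly onto a binary-classification training loop, with `is_on` as the target and the GPS fields available for per-lamppost grouping or geographic train/test splits.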
Related papers
- Comprehensive Dataset for Urban Streetlight Analysis [0.0]
This article includes a comprehensive collection of over 800 high-resolution streetlight images taken systematically from India's major streets.
Each image has been labelled and grouped into directories based on binary class labels, which indicate whether each streetlight is functional or not.
arXiv Detail & Related papers (2024-07-01T09:26:30Z)
- GeoCLIP: Clip-Inspired Alignment between Locations and Images for Effective Worldwide Geo-localization [61.10806364001535]
Worldwide Geo-localization aims to pinpoint the precise location of images taken anywhere on Earth.
Existing approaches divide the globe into discrete geographic cells, transforming the problem into a classification task.
We propose GeoCLIP, a novel CLIP-inspired Image-to-GPS retrieval approach that enforces alignment between the image and its corresponding GPS locations.
arXiv Detail & Related papers (2023-09-27T20:54:56Z)
- The LuViRA Dataset: Synchronized Vision, Radio, and Audio Sensors for Indoor Localization [41.58739817444644]
The dataset includes color images, corresponding depth maps, inertial measurement unit (IMU) readings, and the channel response between a 5G massive multiple-input multiple-output (MIMO) testbed and user equipment.
We synchronize these sensors to ensure that all data is recorded simultaneously.
The main aim of this dataset is to enable research on sensor fusion with the most commonly used sensors for localization tasks.
arXiv Detail & Related papers (2023-02-10T15:12:40Z)
- HPointLoc: Point-based Indoor Place Recognition using Synthetic RGB-D Images [58.720142291102135]
We present a novel dataset named HPointLoc, specially designed for exploring the capabilities of visual place recognition in indoor environments.
The dataset is based on the popular Habitat simulator, which can generate indoor scenes using both its own sensor data and open datasets.
arXiv Detail & Related papers (2022-12-30T12:20:56Z)
- CVLNet: Cross-View Semantic Correspondence Learning for Video-based Camera Localization [89.69214577915959]
This paper tackles the problem of Cross-view Video-based camera localization.
We propose estimating the query camera's relative displacement to a satellite image before similarity matching.
Experiments have demonstrated the effectiveness of video-based localization over single image-based localization.
arXiv Detail & Related papers (2022-08-07T07:35:17Z)
- Danish Airs and Grounds: A Dataset for Aerial-to-Street-Level Place Recognition and Localization [9.834635805575584]
We contribute the Danish Airs and Grounds dataset, a large collection of street-level and aerial images targeting such cases.
The dataset is larger and more diverse than current publicly available data, including more than 50 km of road in urban, suburban and rural areas.
We propose a map-to-image re-localization pipeline, that first estimates a dense 3D reconstruction from the aerial images and then matches query street-level images to street-level renderings of the 3D model.
arXiv Detail & Related papers (2022-02-03T19:58:09Z)
- A Machine-Learning-Ready Dataset Prepared from the Solar and Heliospheric Observatory Mission [0.0]
We present a Python tool to generate a standard dataset from solar images.
Our tool works with all image products from both the Solar and Heliospheric Observatory (SoHO) and Solar Dynamics Observatory (SDO) missions.
arXiv Detail & Related papers (2021-08-04T21:23:29Z)
- Spatially-Varying Outdoor Lighting Estimation from Intrinsics [66.04683041837784]
We present SOLID-Net, a neural network for spatially-varying outdoor lighting estimation.
We generate spatially-varying local lighting environment maps by combining a global sky environment map with warped image information.
Experiments on both synthetic and real datasets show that SOLID-Net significantly outperforms previous methods.
arXiv Detail & Related papers (2021-04-09T02:28:54Z)
- PixSet : An Opportunity for 3D Computer Vision to Go Beyond Point Clouds With a Full-Waveform LiDAR Dataset [0.11726720776908521]
Leddar PixSet is a new publicly available dataset (dataset.leddartech.com) for autonomous driving research and development.
The PixSet dataset contains approximately 29k frames from 97 sequences recorded in high-density urban areas.
arXiv Detail & Related papers (2021-02-24T01:13:17Z)
- Speak2Label: Using Domain Knowledge for Creating a Large Scale Driver Gaze Zone Estimation Dataset [55.391532084304494]
The Driver Gaze in the Wild dataset contains 586 recordings, captured at different times of the day including evenings, from 338 subjects with an age range of 18-63 years.
arXiv Detail & Related papers (2020-04-13T14:47:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.