Comprehensive Dataset for Urban Streetlight Analysis
- URL: http://arxiv.org/abs/2407.01117v1
- Date: Mon, 1 Jul 2024 09:26:30 GMT
- Title: Comprehensive Dataset for Urban Streetlight Analysis
- Authors: Eliza Femi Sherley S, Sanjay T, Shri Kaanth P, Jeffrey Samuel S
- Abstract summary: This article includes a comprehensive collection of over 800 high-resolution streetlight images taken systematically from India's major streets.
Each image has been labelled and grouped into directories based on binary class labels, which indicate whether each streetlight is functional or not.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article includes a comprehensive collection of over 800 high-resolution streetlight images taken systematically from India's major streets, primarily in the Chennai region. The images were collected following standardized methods to ensure uniformity and quality. Each image has been labelled and grouped into directories based on binary class labels, which indicate whether each streetlight is functional or not. This organized dataset is intended to make it easier to train and evaluate deep neural networks, allowing for the creation of pre-trained models that have robust feature representations. Such models have several potential uses, such as improving smart city surveillance systems, automating street infrastructure monitoring, and increasing urban management efficiency. The availability of this dataset is intended to inspire future research and development in computer vision and smart city technologies, supporting innovation and practical solutions to urban infrastructure concerns. The dataset can be accessed at https://github.com/Team16Project/Street-Light-Dataset/.
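Because the images are grouped into directories by binary class labels, a directory-aware image loader can read the labels straight from the folder structure. Below is a minimal sketch using PyTorch's torchvision; the local clone path and the exact class folder names are assumptions for illustration, not details confirmed by the dataset's documentation.

```python
# Minimal sketch: load a directory-per-class image dataset, as described in the abstract.
# The path "Street-Light-Dataset" and the class folder names are hypothetical.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Resize and normalise with ImageNet statistics so a pre-trained backbone could be fine-tuned.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageFolder infers the binary labels from the two class sub-directories.
dataset = datasets.ImageFolder("Street-Light-Dataset", transform=preprocess)  # hypothetical local path
loader = DataLoader(dataset, batch_size=32, shuffle=True)

print(dataset.classes)           # e.g. ['functional', 'non_functional'] if those are the folder names
images, labels = next(iter(loader))
print(images.shape, labels[:8])  # (32, 3, 224, 224) image batch with 0/1 labels
```

A backbone such as an ImageNet-pre-trained ResNet could then be fine-tuned on these batches to obtain the robust feature representations the abstract mentions.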
Related papers
- BuildingView: Constructing Urban Building Exteriors Databases with Street View Imagery and Multimodal Large Language Models [1.0937094979510213]
Building Exteriors are increasingly important in urban analytics, driven by advancements in Street View Imagery and its integration with urban research.
We propose BuildingView, a novel approach that integrates high-resolution visual data from Google Street View with spatial information from OpenStreetMap via the Overpass API.
This research improves the accuracy of urban building exterior data, identifies key sustainability and design indicators, and develops a framework for their extraction and categorization.
arXiv Detail & Related papers (2024-09-29T03:00:16Z) - RoBus: A Multimodal Dataset for Controllable Road Networks and Building Layouts Generation [4.322143509436427]
We introduce a multimodal dataset with evaluation metrics for controllable generation of Road networks and Building layouts (RoBus).
RoBus is the first and largest open-source dataset in city generation so far.
We analyze the RoBus dataset statistically and validate its effectiveness against existing road network and building layout generation methods.
We design new baselines that incorporate urban characteristics, such as road orientation and building density, in the process of generating road networks and building layouts.
arXiv Detail & Related papers (2024-07-10T16:55:01Z) - MatrixCity: A Large-scale City Dataset for City-scale Neural Rendering and Beyond [69.37319723095746]
We build a large-scale, comprehensive, and high-quality synthetic dataset for city-scale neural rendering research.
We develop a pipeline to easily collect aerial and street city views, accompanied by ground-truth camera poses and a range of additional data modalities.
The resulting pilot dataset, MatrixCity, contains 67k aerial images and 452k street images from two city maps of total size 28 km².
arXiv Detail & Related papers (2023-09-28T16:06:02Z) - Building3D: An Urban-Scale Dataset and Benchmarks for Learning Roof Structures from Point Clouds [4.38301148531795]
Existing datasets for 3D modeling mainly focus on common objects such as furniture or cars.
We present an urban-scale dataset consisting of more than 160 thousand buildings, along with corresponding point clouds, mesh, and wireframe models, covering 16 cities in Estonia over about 998 km².
Experimental results indicate that Building3D poses challenges of high intra-class variance, data imbalance, and large-scale noise.
arXiv Detail & Related papers (2023-07-21T21:38:57Z) - OmniCity: Omnipotent City Understanding with Multi-level and Multi-view Images [72.4144257192959]
The paper presents OmniCity, a new dataset for omnipotent city understanding from multi-level and multi-view images.
The dataset contains over 100K pixel-wise annotated images that are well-aligned and collected from 25K geo-locations in New York City.
With the new OmniCity dataset, we provide benchmarks for a variety of tasks including building footprint extraction, height estimation, and building plane/instance/fine-grained segmentation.
arXiv Detail & Related papers (2022-08-01T15:19:25Z) - A Dataset of Images of Public Streetlights with Operational Monitoring using Computer Vision Techniques [56.838577982762956]
The dataset consists of approximately 350k images, taken from 140 UMBRELLA nodes installed in the South Gloucestershire region of the UK.
The dataset can be used to train deep neural networks and generate pre-trained models for smart city CCTV applications, smart weather detection algorithms, or street infrastructure monitoring.
arXiv Detail & Related papers (2022-03-31T09:36:07Z) - SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds [52.624157840253204]
We introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km².
Each point in the dataset has been labelled with fine-grained semantic annotations, making it three times the size of the previously largest photogrammetric point cloud dataset.
arXiv Detail & Related papers (2022-01-12T14:48:11Z) - GANs for Urban Design [0.0]
The topic investigated in this paper is the application of Generative Adversarial Networks to the design of an urban block.
The research presents a flexible model able to adapt to the morphological characteristics of a city.
arXiv Detail & Related papers (2021-05-04T19:50:24Z) - Lighting the Darkness in the Deep Learning Era [118.35081853500411]
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination.
Recent advances in this area are dominated by deep learning-based solutions.
We provide a comprehensive survey to cover various aspects ranging from algorithm taxonomy to unsolved open issues.
arXiv Detail & Related papers (2021-04-21T19:12:19Z) - Stereo Matching by Self-supervision of Multiscopic Vision [65.38359887232025]
We propose a new self-supervised framework for stereo matching utilizing multiple images captured at aligned camera positions.
A cross photometric loss, an uncertainty-aware mutual-supervision loss, and a new smoothness loss are introduced to optimize the network.
Our model obtains better disparity maps than previous unsupervised methods on the KITTI dataset.
arXiv Detail & Related papers (2021-04-09T02:58:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.