Unsupervised Vehicle Counting via Multiple Camera Domain Adaptation
- URL: http://arxiv.org/abs/2004.09251v2
- Date: Sun, 13 Sep 2020 17:34:22 GMT
- Title: Unsupervised Vehicle Counting via Multiple Camera Domain Adaptation
- Authors: Luca Ciampi and Carlos Santiago and Joao Paulo Costeira and Claudio Gennaro and Giuseppe Amato
- Abstract summary: Monitoring vehicle flows in cities is crucial to improve the urban environment and quality of life of citizens.
Current technologies for vehicle counting in images hinge on large quantities of annotated data, preventing their scalability to city-scale as new cameras are added to the system.
We propose and discuss a new methodology to design image-based vehicle density estimators with few labeled data via multiple camera domain adaptations.
- Score: 9.730985797769764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monitoring vehicle flows in cities is crucial to improve the urban
environment and quality of life of citizens. Images are the best sensing
modality to perceive and assess the flow of vehicles in large areas. Current
technologies for vehicle counting in images hinge on large quantities of
annotated data, preventing their scalability to city-scale as new cameras are
added to the system. This is a recurrent problem when dealing with physical
systems and a key research area in Machine Learning and AI. We propose and
discuss a new methodology to design image-based vehicle density estimators with
few labeled data via multiple camera domain adaptations.
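
The abstract does not detail the estimator itself, so the following is only a minimal sketch, assuming one common way to realize density-based counting with multi-camera domain adaptation: a convolutional regressor that predicts a per-pixel vehicle density map (whose sum is the count), trained with a counting loss on the few labeled images, plus a camera-domain classifier behind a gradient-reversal layer that pushes the shared encoder toward camera-invariant features. The backbone, loss weighting, and the names GradReverse/CountingDA are illustrative assumptions, not the authors' exact design.

```python
# Hedged sketch (assumption): density-map vehicle counting with a
# domain-adversarial term shared across cameras. All sizes and weights are
# illustrative, not the architecture of the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated gradient backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class CountingDA(nn.Module):
    """Shared encoder + density head (counting) + domain head (camera classifier)."""

    def __init__(self, num_cameras: int):
        super().__init__()
        self.encoder = nn.Sequential(            # toy stand-in for a VGG/CSRNet-style backbone
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.density_head = nn.Conv2d(64, 1, 1)   # per-pixel density; its sum is the count
        self.domain_head = nn.Sequential(         # predicts which camera produced the frame
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_cameras),
        )

    def forward(self, x, lambd: float = 1.0):
        feats = self.encoder(x)
        density = F.relu(self.density_head(feats))                    # non-negative density map
        domain_logits = self.domain_head(GradReverse.apply(feats, lambd))
        return density, domain_logits


def training_step(model, labeled, unlabeled, optimizer, lambd=0.1):
    """Counting loss on the few labeled frames; domain-confusion loss on all frames."""
    imgs_l, gt_density, dom_l = labeled    # gt_density assumed at the head's output resolution
    imgs_u, dom_u = unlabeled              # unlabeled frames from newly added cameras
    pred_l, dlog_l = model(imgs_l, lambd)
    _, dlog_u = model(imgs_u, lambd)
    count_loss = F.mse_loss(pred_l, gt_density)
    domain_loss = F.cross_entropy(torch.cat([dlog_l, dlog_u]),
                                  torch.cat([dom_l, dom_u]))
    loss = count_loss + lambd * domain_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this scheme, frames from newly added cameras require no annotations: they enter only the domain-confusion term, so labeling effort does not grow with the number of cameras, and at inference the vehicle count for a frame is simply the sum of its predicted density map.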
Related papers
- Low-Light Image Enhancement Framework for Improved Object Detection in Fisheye Lens Datasets [4.170227455727819]
This study addresses the evolving challenges in urban traffic monitoring systems based on fisheye lens cameras.
Fisheye lenses provide wide and omnidirectional coverage in a single frame, making them a transformative solution.
Motivated by these challenges, this study proposes a novel approach that combines a transformer-based image enhancement framework with an ensemble learning technique.
arXiv Detail & Related papers (2024-04-15T18:32:52Z)
- SKoPe3D: A Synthetic Dataset for Vehicle Keypoint Perception in 3D from Traffic Monitoring Cameras [26.457695296042903]
We propose SKoPe3D, a unique synthetic vehicle keypoint dataset from a roadside perspective.
SKoPe3D contains over 150k vehicle instances and 4.9 million keypoints.
Our experiments highlight the dataset's applicability and the potential for knowledge transfer between synthetic and real-world data.
arXiv Detail & Related papers (2023-09-04T02:57:30Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work surveys the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Mixed Traffic Control and Coordination from Pixels [18.37701232116777]
Previous methods for traffic control have proven futile in alleviating current congestion levels.
This gives rise to mixed traffic control, where robot vehicles regulate human-driven vehicles through reinforcement learning (RL).
In this work, we show robot vehicles using image observations can achieve competitive performance to using precise information on environments.
arXiv Detail & Related papers (2023-02-17T22:40:07Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- Scalable and Real-time Multi-Camera Vehicle Detection, Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z)
- Turning Traffic Monitoring Cameras into Intelligent Sensors for Traffic Density Estimation [9.096163152559054]
This paper proposes a framework for estimating traffic density using uncalibrated traffic monitoring cameras with 4L characteristics (Low frame rate, Low resolution, Lack of annotated data, and Located in complex road environments).
The proposed framework consists of two major components: camera calibration and vehicle detection.
The results show that the Mean Absolute Error (MAE) in camera calibration is less than 0.2 meters out of 6 meters, and the accuracy of vehicle detection under various conditions is approximately 90%.
arXiv Detail & Related papers (2021-10-29T15:39:06Z)
- Evaluating Computer Vision Techniques for Urban Mobility on Large-Scale, Unconstrained Roads [25.29906312974705]
This paper proposes a simple mobile imaging setup to address several common problems in road safety at scale.
We use recent computer vision techniques to identify possible irregularities on roads.
We also demonstrate the mobile imaging solution's applicability to spot traffic violations.
arXiv Detail & Related papers (2021-09-11T09:07:56Z)
- An Experimental Urban Case Study with Various Data Sources and a Model for Traffic Estimation [65.28133251370055]
We organize an experimental campaign with video measurement in an area within the urban network of Zurich, Switzerland.
We focus on capturing the traffic state in terms of traffic flow and travel times, drawing on measurements from established thermal cameras.
We propose a simple yet efficient Multiple Linear Regression (MLR) model to estimate travel times by fusing various data sources; a minimal sketch of such an MLR estimator appears after this list.
arXiv Detail & Related papers (2021-08-02T08:13:57Z)
- SODA10M: Towards Large-Scale Object Detection Benchmark for Autonomous Driving [94.11868795445798]
We release a Large-Scale Object Detection benchmark for Autonomous driving, named as SODA10M, containing 10 million unlabeled images and 20K images labeled with 6 representative object categories.
To improve diversity, the images are collected at one frame every ten seconds across 32 different cities, under varying weather conditions, periods, and location scenes.
We provide extensive experiments and deep analyses of existing supervised state-of-the-art detection models, popular self-supervised and semi-supervised approaches, and some insights about how to develop future models.
arXiv Detail & Related papers (2021-06-21T13:55:57Z)
- Fine-Grained Vehicle Perception via 3D Part-Guided Visual Data Augmentation [77.60050239225086]
We propose an effective training data generation process by fitting a 3D car model with dynamic parts to vehicles in real images.
Our approach is fully automatic without any human interaction.
We present a multi-task network for VUS parsing and a multi-stream network for VHI parsing.
arXiv Detail & Related papers (2020-12-15T03:03:38Z)
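
As noted in the travel-time entry above, the sketch below illustrates a Multiple Linear Regression estimator fusing several traffic data sources. The feature set, units, and synthetic data are hypothetical placeholders for illustration, not the inputs used in that study.

```python
# Hedged sketch (assumption): Multiple Linear Regression fusing several traffic
# data sources to estimate travel times. Features, units, and data are synthetic
# placeholders, not the sources used in the cited study.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 200
# Hypothetical fused features per time interval:
# thermal-camera flow (veh/h), loop-detector occupancy (%), signal cycle length (s).
X = np.column_stack([
    rng.uniform(200, 1200, n),
    rng.uniform(5, 60, n),
    rng.uniform(60, 120, n),
])
# Synthetic ground-truth travel time (s), for illustration only.
y = 40 + 0.02 * X[:, 0] + 1.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 5, n)

model = LinearRegression().fit(X[:150], y[:150])   # fit on the first 150 intervals
pred = model.predict(X[150:])                      # estimate travel times on the rest
print("coefficients:", model.coef_)
print("MAE (s):", mean_absolute_error(y[150:], pred))
```

The fitted coefficients indicate how each fused source contributes to the estimated travel time.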