Vehicle Perception from Satellite
- URL: http://arxiv.org/abs/2402.00703v1
- Date: Thu, 1 Feb 2024 15:59:16 GMT
- Title: Vehicle Perception from Satellite
- Authors: Bin Zhao, Pengfei Han, and Xuelong Li
- Abstract summary: The dataset is constructed based on 12 satellite videos and 14 synthetic videos recorded from GTA-V.
It supports several tasks, including tiny object detection, counting and density estimation.
In total, 128,801 vehicles are annotated, and the number of vehicles per image varies from 0 to 101.
- Score: 54.07157185000604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Satellites are now capable of capturing high-resolution videos, which
makes vehicle perception from satellite possible. Compared with street
surveillance cameras, drive recorders, and other equipment, satellite videos
provide a much broader, city-scale view, so the global dynamic scene of
traffic can be captured and displayed. Traffic monitoring from satellite is a
new task with great potential applications, including traffic jam prediction,
path planning, vehicle dispatching, etc. In practice, limited by the
resolution and the viewpoint, the captured vehicles are very tiny (a few
pixels) and move slowly. Worse still, the satellites must be in Low Earth
Orbit (LEO) to capture such high-resolution videos, so the background is also
moving. Under these circumstances, traffic monitoring from the satellite view
is an extremely challenging task. To attract more researchers to this field,
we build a large-scale benchmark for traffic monitoring from satellite. It
supports several tasks, including tiny object detection, counting, and density
estimation. The dataset is constructed from 12 satellite videos and 14
synthetic videos recorded from GTA-V. They are separated into 408 video clips,
which contain 7,336 real satellite images and 1,960 synthetic images. In
total, 128,801 vehicles are annotated, and the number of vehicles per image
varies from 0 to 101. Several classic and state-of-the-art computer vision
approaches are evaluated on the dataset to compare their performance, analyze
the challenges of this task, and discuss future prospects. The dataset is
available at:
https://github.com/Chenxi1510/Vehicle-Perception-from-Satellite-Videos.
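The counting and density estimation tasks mentioned in the abstract are commonly evaluated against density maps built from per-vehicle point annotations. The paper does not spell out its exact protocol here, so the snippet below is only a minimal sketch of the standard fixed-kernel Gaussian approach; the function name make_density_map and the sigma value are illustrative assumptions, not part of the dataset's tooling.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_density_map(points, height, width, sigma=2.0):
    """Turn per-vehicle point annotations (x, y) into a density map.

    The map integrates (sums) to the number of annotated vehicles, so
    counting reduces to summing a predicted density map. `sigma` is an
    assumed fixed Gaussian bandwidth; the benchmark may use a different
    or adaptive kernel.
    """
    density = np.zeros((height, width), dtype=np.float32)
    for x, y in points:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < height and 0 <= xi < width:
            density[yi, xi] += 1.0
    return gaussian_filter(density, sigma=sigma)

# Example: a 128x128 frame with three annotated vehicles.
pts = [(10.3, 20.1), (45.0, 45.0), (100.7, 60.2)]
dmap = make_density_map(pts, height=128, width=128)
print(dmap.sum())  # ~3.0: the integral recovers the vehicle count
```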
Related papers
- Vehicle Vectors and Traffic Patterns from Planet Imagery [4.013337799051293]
We show that both static and moving cars can be identified reliably in high-resolution SkySat imagery.
We are able to estimate the speed and heading of moving vehicles by leveraging the inter-band displacement (or "rainbow" effect) of moving objects.
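As a rough illustration of the inter-band ("rainbow") displacement idea in the summary above, the sketch below converts a measured band-to-band pixel offset into speed and heading. The ground sampling distance, the inter-band delay, and the function name are placeholder assumptions, not figures or code from the paper.

```python
import math

def speed_and_heading(dx_px, dy_px, gsd_m=0.5, band_delay_s=0.2):
    """Estimate vehicle speed and heading from inter-band displacement.

    dx_px, dy_px : displacement (pixels) of the vehicle between two
                   spectral bands, measured east/north.
    gsd_m        : assumed ground sampling distance, metres per pixel.
    band_delay_s : assumed capture delay between the two bands, seconds.
    """
    dx_m, dy_m = dx_px * gsd_m, dy_px * gsd_m
    speed_mps = math.hypot(dx_m, dy_m) / band_delay_s
    heading_deg = math.degrees(math.atan2(dx_m, dy_m)) % 360  # 0 deg = north
    return speed_mps, heading_deg

# Example: ~1.5 px displacement in each axis over a 0.2 s inter-band delay.
speed, heading = speed_and_heading(dx_px=1.5, dy_px=1.5)
print(f"{speed:.1f} m/s, heading {heading:.0f} deg")
```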
arXiv Detail & Related papers (2024-06-10T14:35:59Z)
- The Interstate-24 3D Dataset: a new benchmark for 3D multi-camera vehicle tracking [4.799822253865053]
This work presents a novel video dataset recorded from overlapping highway traffic cameras along an urban interstate, enabling multi-camera 3D object tracking in a traffic monitoring context.
Data is released from 3 scenes containing video from at least 16 cameras each, totaling 57 minutes in length.
877,000 3D bounding boxes and corresponding object tracklets are fully and accurately annotated for each camera field of view and are combined into a spatially and temporally continuous set of vehicle trajectories for each scene.
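As a loose illustration of how per-camera tracklets can be merged into a spatially and temporally continuous trajectory, the dataclasses below sketch one possible nested layout. The class and field names are assumptions for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Box3D:
    """One 3D bounding-box observation (hypothetical field layout)."""
    timestamp: float   # seconds since the start of the scene
    x: float           # box centre in a shared roadway coordinate frame
    y: float
    z: float
    length: float
    width: float
    height: float
    heading: float     # yaw in radians

@dataclass
class Tracklet:
    """Observations of one vehicle from a single camera's field of view."""
    camera_id: int
    boxes: List[Box3D] = field(default_factory=list)

@dataclass
class Trajectory:
    """All of a vehicle's tracklets, merged into one continuous track."""
    vehicle_id: int
    tracklets: List[Tracklet] = field(default_factory=list)

    def time_ordered_boxes(self) -> List[Box3D]:
        # Flatten per-camera observations into one temporally ordered sequence.
        return sorted((b for t in self.tracklets for b in t.boxes),
                      key=lambda b: b.timestamp)
```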
arXiv Detail & Related papers (2023-08-28T18:43:33Z)
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- VPAIR -- Aerial Visual Place Recognition and Localization in Large-scale Outdoor Environments [49.82314641876602]
We present a new dataset named VPAIR.
The dataset was recorded on board a light aircraft flying at an altitude of more than 300 meters above ground.
The dataset covers a trajectory more than one hundred kilometers long over various types of challenging landscapes.
arXiv Detail & Related papers (2022-05-23T18:50:08Z)
- Scalable and Real-time Multi-Camera Vehicle Detection, Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z)
- Deep Vehicle Detection in Satellite Video [0.0]
Vehicle detection is perhaps impossible in single EO satellite images due to the tininess of vehicles (4 pixels) and their similarity to the background.
A new compact $3 \times 3$ neural network model is proposed, which omits pooling layers and uses leaky ReLUs.
Empirical results on two new annotated satellite videos reconfirm the applicability of this approach for vehicle detection.
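The pooling-free 3x3 convolution idea in the summary above can be sketched as a small fully convolutional stack that preserves spatial resolution, so a vehicle only a few pixels wide is never downsampled away. This is a guess at the spirit of such a network under assumed channel widths and depth, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

# A small fully convolutional score head: only 3x3 convolutions with leaky
# ReLUs and no pooling, so the output keeps the input resolution.
# Channel widths and depth are assumptions, not the paper's configuration.
tiny_vehicle_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.LeakyReLU(0.1),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.LeakyReLU(0.1),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.LeakyReLU(0.1),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),  # per-pixel vehicle score map
)

frames = torch.randn(1, 3, 256, 256)  # one RGB satellite frame
scores = tiny_vehicle_net(frames)     # same 256x256 spatial size as the input
print(scores.shape)                   # torch.Size([1, 1, 256, 256])
```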
arXiv Detail & Related papers (2022-04-14T08:54:44Z)
- Detecting and Tracking Small and Dense Moving Objects in Satellite Videos: A Benchmark [30.078513715446196]
We build a large-scale satellite video dataset with rich annotations for the task of moving object detection and tracking.
This dataset is collected by the Jilin-1 satellite constellation.
We establish the first public benchmark for moving object detection and tracking in satellite videos.
arXiv Detail & Related papers (2021-11-25T08:01:41Z)
- Detection, Tracking, and Counting Meets Drones in Crowds: A Benchmark [97.07865343576361]
We construct a benchmark with a new drone-captured large-scale dataset, named DroneCrowd.
We annotate 20,800 people trajectories with 4.8 million heads and several video-level attributes.
We design the Space-Time Neighbor-Aware Network (STNNet) as a strong baseline to solve object detection, tracking and counting jointly in dense crowds.
arXiv Detail & Related papers (2021-05-06T04:46:14Z)
- On Learning Vehicle Detection in Satellite Video [0.0]
Vehicle detection in aerial and satellite images is still challenging due to their tiny appearance in pixels compared to the overall size of remote sensing imagery.
This work proposes to apply recent work on deep learning for wide-area motion imagery (WAMI) on satellite video.
arXiv Detail & Related papers (2020-01-29T15:35:16Z)
- Detection and Tracking Meet Drones Challenge [131.31749447313197]
This paper presents a review of object detection and tracking datasets and benchmarks, and discusses the challenges of collecting large-scale drone-based object detection and tracking datasets with manual annotations.
We describe our VisDrone dataset, which is captured over various urban/suburban areas of 14 different cities across China from North to South.
We provide a detailed analysis of the current state of the field of large-scale object detection and tracking on drones, and conclude the challenge as well as propose future directions.
arXiv Detail & Related papers (2020-01-16T00:11:56Z)