Spatio-temporal-spectral-angular observation model that integrates
observations from UAV and mobile mapping vehicle for better urban mapping
- URL: http://arxiv.org/abs/2109.00900v1
- Date: Tue, 24 Aug 2021 02:58:12 GMT
- Title: Spatio-temporal-spectral-angular observation model that integrates
observations from UAV and mobile mapping vehicle for better urban mapping
- Authors: Zhenfeng Shao, Gui Cheng, Deren Li, Xiao Huang, Zhipeng Lu, Jian Liu
- Abstract summary: In a complex urban scene, observation from a single sensor leads to voids in observations, failing to describe urban objects in a comprehensive manner.
We propose a spatio-temporal-spectral-angular observation model to integrate observations from UAV and mobile mapping vehicle, realizing a joint, coordinated observation from both air and ground.
- Score: 10.670246699899023
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a complex urban scene, observation from a single sensor unavoidably
leaves voids in the collected data, failing to describe urban objects in a
comprehensive manner. In this paper, we propose a spatio-temporal-spectral-angular
observation model to integrate observations from UAV and mobile mapping vehicle
platforms, realizing a joint, coordinated observation from both air and ground.
and ground. We develop a multi-source remote sensing data acquisition system to
effectively acquire multi-angle data of complex urban scenes. Multi-source data
fusion solves the missing data problem caused by occlusion and achieves
accurate, rapid, and complete collection of holographic spatial and temporal
information in complex urban scenes. We carried out an experiment in Baisha
Town, Chongqing, China and obtained multi-sensor, multi-angle data from the UAV
and the mobile mapping vehicle. We first extracted a point cloud from the UAV
data and then integrated it with the mobile mapping vehicle point cloud. The
integrated result combines the characteristics of both the UAV and the mobile
mapping vehicle point clouds, confirming the practicability of the proposed
joint data acquisition platform and the effectiveness of the
spatio-temporal-spectral-angular observation model. Compared with observation
from the UAV or the mobile mapping vehicle alone, the integrated system
provides an effective data acquisition solution for comprehensive urban
monitoring.
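The integration step described in the abstract (extract a point cloud from the UAV data, then merge it with the mobile mapping vehicle point cloud) can be illustrated with a minimal registration-and-merge sketch. The snippet below is a hedged illustration, not the authors' pipeline: it assumes both clouds are already available as files in a shared projected coordinate system (the file names, voxel size, and correspondence threshold are hypothetical), uses the open-source Open3D library, and refines an identity initial alignment with point-to-point ICP before concatenating the two clouds.

```python
# Hedged sketch of the air-ground point cloud integration step.
# Assumptions (not from the paper): clouds are stored as PLY files, are roughly
# georeferenced already, and a rigid ICP refinement is sufficient. File names
# and numeric thresholds are illustrative only.
import numpy as np
import open3d as o3d

def integrate_air_ground(uav_path: str, mmv_path: str,
                         voxel_size: float = 0.2,
                         max_corr_dist: float = 1.0) -> o3d.geometry.PointCloud:
    """Align the mobile-mapping-vehicle cloud to the UAV cloud and merge them."""
    uav = o3d.io.read_point_cloud(uav_path)   # aerial cloud (roofs, viewed from above)
    mmv = o3d.io.read_point_cloud(mmv_path)   # ground-level cloud (street-side facades)

    # Downsample so ICP stays tractable on city-scale data.
    uav_ds = uav.voxel_down_sample(voxel_size)
    mmv_ds = mmv.voxel_down_sample(voxel_size)

    # Point-to-point ICP refines an initial guess (identity here, i.e. the
    # clouds are assumed to be roughly aligned by georeferencing).
    result = o3d.pipelines.registration.registration_icp(
        mmv_ds, uav_ds, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    # Apply the estimated rigid transform to the full-resolution ground cloud
    # and concatenate; the union fills occlusion gaps of either platform.
    mmv.transform(result.transformation)
    return uav + mmv

if __name__ == "__main__":
    fused = integrate_air_ground("uav_cloud.ply", "mmv_cloud.ply")
    o3d.io.write_point_cloud("integrated_cloud.ply", fused)
```

In practice the initial transform would come from the platforms' GNSS/IMU georeferencing rather than an identity matrix, and point-to-plane or feature-based registration may suit facade-dominated street scenes better; the sketch only shows the overall shape of the merge step.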
Related papers
- UAV (Unmanned Aerial Vehicles): Diverse Applications of UAV Datasets in Segmentation, Classification, Detection, and Tracking [0.0]
Unmanned Aerial Vehicles (UAVs) have revolutionized the process of gathering and analyzing data in diverse research domains.
UAV datasets consist of various types of data, such as satellite imagery, images captured by drones, and videos.
These datasets play a crucial role in disaster damage assessment, aerial surveillance, object recognition, and tracking.
arXiv Detail & Related papers (2024-09-05T04:47:36Z) - UAV-Based Human Body Detector Selection and Fusion for Geolocated Saliency Map Generation [0.2499907423888049]
The problem of reliably detecting and geolocating objects of different classes in soft real-time is essential in many application areas, such as Search and Rescue performed using Unmanned Aerial Vehicles (UAVs).
This research addresses the complementary problems of system contextual vision-based detector selection, allocation, and execution.
The detection results are fused into maps of salient locations using a novel sensor model for vision-based detections that accounts for both positive and negative observations.
arXiv Detail & Related papers (2024-08-29T13:00:37Z) - Automatic UAV-based Airport Pavement Inspection Using Mixed Real and
Virtual Scenarios [3.0874677990361246]
We propose a vision-based approach to automatically identify pavement distress using images captured by UAVs.
The proposed method is based on Deep Learning (DL) to segment defects in the image.
We demonstrate that the use of a mixed dataset composed of synthetic and real training images yields better results when testing the training models in real application scenarios.
arXiv Detail & Related papers (2024-01-11T16:30:07Z) - Voila-A: Aligning Vision-Language Models with User's Gaze Attention [56.755993500556734]
We introduce gaze information as a proxy for human attention to guide Vision-Language Models (VLMs).
We propose a novel approach, Voila-A, for gaze alignment to enhance the interpretability and effectiveness of these models in real-world applications.
arXiv Detail & Related papers (2023-12-22T17:34:01Z) - Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z) - Towards Scale Consistent Monocular Visual Odometry by Learning from the
Virtual World [83.36195426897768]
We propose VRVO, a novel framework for retrieving the absolute scale from virtual data.
We first train a scale-aware disparity network using both monocular real images and stereo virtual data.
The resulting scale-consistent disparities are then integrated with a direct VO system.
arXiv Detail & Related papers (2022-03-11T01:51:54Z) - Vision-Based UAV Self-Positioning in Low-Altitude Urban Environments [20.69412701553767]
Unmanned Aerial Vehicles (UAVs) rely on satellite systems for stable positioning.
In such situations, vision-based techniques can serve as an alternative, ensuring the self-positioning capability of UAVs.
This paper presents a new dataset, DenseUAV, which is the first publicly available dataset designed for the UAV self-positioning task.
arXiv Detail & Related papers (2022-01-23T07:18:55Z) - Large-scale Autonomous Flight with Real-time Semantic SLAM under Dense
Forest Canopy [48.51396198176273]
We propose an integrated system that can perform large-scale autonomous flights and real-time semantic mapping in challenging under-canopy environments.
We detect and model tree trunks and ground planes from LiDAR data, which are associated across scans and used to constrain robot poses as well as tree trunk models.
A drift-compensation mechanism is designed to minimize the odometry drift using semantic SLAM outputs in real time, while maintaining planner optimality and controller stability.
arXiv Detail & Related papers (2021-09-14T07:24:53Z) - LiveMap: Real-Time Dynamic Map in Automotive Edge Computing [14.195521569220448]
LiveMap is a real-time dynamic map that detects, matches, and tracks objects on the road with crowdsourced data from connected vehicles at sub-second latency.
We develop the control plane of LiveMap that allows adaptive offloading of vehicle computations.
We implement LiveMap on a small-scale testbed and develop a large-scale network simulator.
arXiv Detail & Related papers (2020-12-16T15:00:49Z) - Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z) - SoDA: Multi-Object Tracking with Soft Data Association [75.39833486073597]
Multi-object tracking (MOT) is a prerequisite for the safe deployment of self-driving cars.
We propose a novel approach to MOT that uses attention to compute track embeddings that encode dependencies between observed objects.
arXiv Detail & Related papers (2020-08-18T03:40:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.