SAMA-VTOL: A new unmanned aircraft system for remotely sensed data
collection
- URL: http://arxiv.org/abs/2011.11007v1
- Date: Sun, 22 Nov 2020 12:55:16 GMT
- Title: SAMA-VTOL: A new unmanned aircraft system for remotely sensed data
collection
- Authors: Mohammad R. Bayanlou, Mehdi Khoshboresh-Masouleh
- Abstract summary: The capability of SAMA-VTOL is investigated for generating orthophotos.
The Pix4Dmapper software was used to orient the images, produce point clouds, create a digital surface model and generate an orthophoto mosaic.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, unmanned aircraft systems (UASs) have been used frequently in
many different photogrammetric applications, such as building damage monitoring,
archaeological mapping and vegetation monitoring. In this paper, a new
state-of-the-art vertical take-off and landing fixed-wing UAS, called SAMA-VTOL,
is proposed for robust photogrammetry missions. In this study, the capability of
SAMA-VTOL is investigated for generating orthophotos. The major stages include
design, construction and an experimental scenario. First, a brief description of
the design and build is given. Next, an experiment was conducted to generate an
accurate orthophoto with minimal ground control point requirements. The
processing step includes automatic aerial triangulation with camera calibration
and model generation. In this regard, the Pix4Dmapper software was used to
orient the images, produce point clouds, create a digital surface model and
generate an orthophoto mosaic. Experimental results based on the test
area covering 26.3 hectares indicate that our SAMA-VTOL performs well in the
orthophoto mosaic task.
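As a rough illustration of the survey-planning arithmetic behind such a mission, the sketch below computes ground sampling distance (GSD) and the per-image ground footprint from flight height and camera geometry; the numeric values are hypothetical placeholders, not the actual SAMA-VTOL payload specification.

```python
# Minimal flight-planning sketch (hypothetical camera and altitude values;
# NOT the actual SAMA-VTOL payload specification).

def ground_sampling_distance(altitude_m, focal_mm, pixel_size_um):
    """GSD in metres per pixel: (flight height * pixel pitch) / focal length."""
    return altitude_m * (pixel_size_um * 1e-6) / (focal_mm * 1e-3)

def image_footprint(gsd_m, width_px, height_px):
    """Ground extent covered by a single image, in metres."""
    return gsd_m * width_px, gsd_m * height_px

if __name__ == "__main__":
    # Example values for a small-format mapping camera (placeholders).
    gsd = ground_sampling_distance(altitude_m=120.0, focal_mm=8.8, pixel_size_um=2.4)
    w_m, h_m = image_footprint(gsd, width_px=5472, height_px=3648)
    area_ha = w_m * h_m / 10_000
    print(f"GSD ~ {gsd * 100:.1f} cm/px, footprint ~ {w_m:.0f} m x {h_m:.0f} m "
          f"({area_ha:.2f} ha per image)")
```

Dividing the 26.3-hectare test area by such a per-image footprint gives only a lower bound on the number of images, since forward and side overlap multiply the count considerably.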
Related papers
- Evaluation of Flight Parameters in UAV-based 3D Reconstruction for Rooftop Infrastructure Assessment [0.08192907805418585]
Rooftop 3D reconstruction using UAV-based photogrammetry offers a promising solution for infrastructure assessment.
Existing methods often require high percentages of image overlap and extended flight times to ensure model accuracy when using autonomous flight paths.
This study systematically evaluates key flight parameters, ground sampling distance (GSD) and image overlap, to optimize the 3D reconstruction of complex rooftop infrastructure.
arXiv Detail & Related papers (2025-04-02T19:43:20Z) - SatVision-TOA: A Geospatial Foundation Model for Coarse-Resolution All-Sky Remote Sensing Imagery [8.096413986108601]
We introduce SatVision-TOA, a novel foundation model pre-trained on 14-band MODIS L1B Top-Of-Atmosphere (TOA) radiance imagery.
The SatVision-TOA model is pre-trained using a Masked-Image-Modeling (MIM) framework and the SwinV2 architecture.
Results show that SatVision-TOA achieves superior performance over baseline methods on downstream tasks.
arXiv Detail & Related papers (2024-11-26T00:08:00Z) - Very High-Resolution Bridge Deformation Monitoring Using UAV-based Photogrammetry [0.0]
In this contribution, we address the question of the suitability of UAV-based monitoring for structural health monitoring (SHM).
A research reinforced concrete bridge can be exposed to a predefined load via ground anchors.
Very high-resolution image blocks have been captured before, during, and after the application of controlled loads.
Dense image point clouds were computed to evaluate the performance of surface-based data acquisition.
We show that by employing the introduced UAV-based monitoring approach, a full area-wide quantification of deformation is possible in contrast to classical point or profile measurements.
arXiv Detail & Related papers (2024-10-09T08:17:03Z) - XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z) - VFMM3D: Releasing the Potential of Image by Vision Foundation Model for Monocular 3D Object Detection [80.62052650370416]
Monocular 3D object detection holds significant importance across various applications, including autonomous driving and robotics.
In this paper, we present VFMM3D, an innovative framework that leverages the capabilities of Vision Foundation Models (VFMs) to accurately transform single-view images into LiDAR point cloud representations.
arXiv Detail & Related papers (2024-04-15T03:12:12Z) - On the Generation of a Synthetic Event-Based Vision Dataset for
Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z) - Real Time Incremental Image Mosaicking Without Use of Any Camera
Parameter [1.2891210250935146]
This paper proposes a UAV-based system for real-time creation of incremental mosaics.
Inspired by previous approaches, the mosaicking process consists of extracting features from images, matching similar key points between images, estimating a homography matrix to warp and align images, and blending images to obtain better-looking mosaics (a minimal OpenCV sketch of this generic pipeline is given after the list below).
arXiv Detail & Related papers (2022-12-05T14:28:54Z) - A benchmark dataset for deep learning-based airplane detection: HRPlanes [3.5297361401370044]
We create a novel airplane detection dataset called High Resolution Planes (HRPlanes) using images from Google Earth (GE).
HRPlanes includes GE images of several different airports across the world to represent a variety of landscape, seasonal and satellite geometry conditions obtained from different satellites.
Our preliminary results show that the proposed dataset can be a valuable data source and benchmark dataset for future applications.
arXiv Detail & Related papers (2022-04-22T23:49:44Z) - Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge
TPU [58.720142291102135]
In this paper, we propose pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z) - Rethinking Drone-Based Search and Rescue with Aerial Person Detection [79.76669658740902]
The visual inspection of aerial drone footage is an integral part of land search and rescue (SAR) operations today.
We propose a novel deep learning algorithm to automate this aerial person detection (APD) task.
We present the novel Aerial Inspection RetinaNet (AIR) algorithm as the combination of these contributions.
arXiv Detail & Related papers (2021-11-17T21:48:31Z) - Planetary UAV localization based on Multi-modal Registration with
Pre-existing Digital Terrain Model [0.5156484100374058]
We propose a multi-modal registration based SLAM algorithm, which estimates the location of a planetary UAV using a nadir-view camera on the UAV.
To overcome the scale and appearance differences between on-board UAV images and the pre-installed digital terrain model, a theoretical model is proposed to prove that topographic features of the UAV image and the DEM can be correlated in the frequency domain via the cross power spectrum (a minimal phase-correlation sketch of this frequency-domain idea is given after the list below).
To test the robustness and effectiveness of the proposed localization algorithm, a new cross-source drone-based localization dataset for planetary exploration is proposed.
arXiv Detail & Related papers (2021-06-24T02:54:01Z) - OpenREALM: Real-time Mapping for Unmanned Aerial Vehicles [62.997667081978825]
OpenREALM is a real-time mapping framework for Unmanned Aerial Vehicles (UAVs).
Different modes of operation allow OpenREALM to perform simple stitching assuming an approximately planar ground.
In all modes incremental progress of the resulting map can be viewed live by an operator on the ground.
arXiv Detail & Related papers (2020-09-22T12:28:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.