Through-Foliage Tracking with Airborne Optical Sectioning
- URL: http://arxiv.org/abs/2111.06959v1
- Date: Fri, 12 Nov 2021 21:54:25 GMT
- Title: Through-Foliage Tracking with Airborne Optical Sectioning
- Authors: Rakesh John Amala Arokia Nathan, Indrajit Kurmi, David C. Schedl and
Oliver Bimber
- Abstract summary: We present an initial light-weight and drone-operated 1D camera array that supports parallel synthetic aperture aerial imaging.
We demonstrate that these two contributions can lead to the detection and tracking of moving people through densely occluding forest.
- Score: 1.8352113484137622
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Detecting and tracking moving targets through foliage is difficult, and for
many cases even impossible in regular aerial images and videos. We present an
initial light-weight and drone-operated 1D camera array that supports parallel
synthetic aperture aerial imaging. Our main finding is that color anomaly
detection benefits significantly from image integration when compared to
conventional single images or video frames (on average 97% vs. 42% in precision
in our field experiments). We demonstrate, that these two contributions can
lead to the detection and tracking of moving people through densely occluding
forest
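
The abstract names two ingredients: integrating many registered aerial views into a synthetic aperture (integral) image, and running color anomaly detection on that integral rather than on single frames. The following minimal Python sketch shows how such a pipeline could be wired together, assuming the views are already registered to a common focal (ground) plane; the function names, the plain RX-style color detector, and the percentile threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def integrate_registered_frames(frames):
    """Average a stack of registered single-camera frames (each H x W x 3).

    Averaging pre-registered views is the core of synthetic aperture
    integration: occluders (leaves, branches) project to different pixels
    in each view and are suppressed, while the focused ground plane and
    the people on it reinforce.
    """
    stack = np.stack([f.astype(np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)

def rx_color_anomaly(image):
    """Per-pixel RX-style score on RGB vectors.

    Returns the squared Mahalanobis distance of every pixel color from the
    image's global color distribution; high scores flag color anomalies
    such as clothing against foliage.
    """
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float64)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(c)  # regularized covariance
    centered = pixels - mean
    scores = np.einsum("ij,jk,ik->i", centered, np.linalg.inv(cov), centered)
    return scores.reshape(h, w)

# Hypothetical usage: detect on the integral image instead of a single frame.
# frames = [load_registered_view(i) for i in range(n_views)]  # loader not shown
# anomaly_map = rx_color_anomaly(integrate_registered_frames(frames))
# detections = anomaly_map > np.percentile(anomaly_map, 99.5)
```

The 97% vs. 42% precision gap reported in the abstract is between running a color anomaly detector on integral images and running it on single frames: integration suppresses the occluder clutter that otherwise dominates the color statistics.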
Related papers
- Multiview Aerial Visual Recognition (MAVREC): Can Multi-view Improve Aerial Visual Perception? [57.77643186237265]
We present Multiview Aerial Visual RECognition or MAVREC, a video dataset where we record synchronized scenes from different perspectives.
MAVREC consists of around 2.5 hours of industry-standard 2.7K resolution video sequences, more than 0.5 million frames, and 1.1 million annotated bounding boxes.
This makes MAVREC the largest ground and aerial-view dataset, and the fourth largest among all drone-based datasets.
arXiv Detail & Related papers (2023-12-07T18:59:14Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system effectively exploits the signals of the light-weight ToF sensor and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- Synthetic Aperture Anomaly Imaging [2.9443230571766854]
We show that integrating detected anomalies is even more effective than detecting anomalies in integral images (the two orderings are contrasted in the sketch after this list).
We present a real-time application that makes our findings practically available for blue-light organizations and others using commercial drone platforms.
arXiv Detail & Related papers (2023-04-26T14:34:43Z)
- Towards Transformer-based Homogenization of Satellite Imagery for Landsat-8 and Sentinel-2 [1.4699455652461728]
Landsat-8 (NASA) and Sentinel-2 (ESA) are two prominent multi-spectral imaging satellite projects that provide publicly available data.
This work provides a first glance at the possibility of using a transformer-based model to reduce the spectral and spatial differences between observations from both satellite projects.
arXiv Detail & Related papers (2022-10-14T09:13:34Z)
- Inverse Airborne Optical Sectioning [4.640835690336653]
Inverse Airborne Optical Sectioning (IAOS) is an optical analogy to Inverse Synthetic Aperture Radar (ISAR).
Moving targets, such as walking people, that are heavily occluded by vegetation can be made visible and tracked with a stationary optical sensor.
arXiv Detail & Related papers (2022-07-27T07:57:24Z)
- Combined Person Classification with Airborne Optical Sectioning [1.8352113484137622]
Fully autonomous drones have been demonstrated to find lost or injured persons under strongly occluding forest canopy.
Airborne Optical Sectioning (AOS), a novel synthetic aperture imaging technique, together with deep-learning-based classification, enables high detection rates under realistic search-and-rescue conditions.
We demonstrate that false detections can be significantly suppressed and true detections boosted by combining classifications from multiple AOS integral images rather than relying on a single integral image.
arXiv Detail & Related papers (2021-06-18T11:56:17Z)
- Search and Rescue with Airborne Optical Sectioning [7.133136338850781]
We show that automated person detection can be significantly improved by combining multi-perspective images before classification.
Findings lay the foundation for effective future search and rescue technologies that can be applied in combination with autonomous or manned aircraft.
arXiv Detail & Related papers (2020-09-18T13:40:19Z)
- Real-Time Drone Detection and Tracking With Visible, Thermal and Acoustic Sensors [66.4525391417921]
A thermal infrared camera is shown to be a feasible solution to the drone detection task.
The detector performance as a function of the sensor-to-target distance is also investigated.
A novel video dataset containing 650 annotated infrared and visible videos of drones, birds, airplanes and helicopters is also presented.
arXiv Detail & Related papers (2020-07-14T23:06:42Z)
- From two rolling shutters to one global shutter [57.431998188805665]
We explore a surprisingly simple camera configuration that makes it possible to undo the rolling shutter distortion.
Such a setup is easy and cheap to build and it possesses the geometric constraints needed to correct rolling shutter distortion.
We derive equations that describe the underlying geometry for general and special motions and present an efficient method for finding their solutions.
arXiv Detail & Related papers (2020-06-02T22:18:43Z)
- Multi-Drone based Single Object Tracking with Agent Sharing Network [74.8198920355117]
The Multi-Drone Single Object Tracking dataset consists of 92 groups of video clips with 113,918 high-resolution frames taken by two drones and 63 groups of video clips with 145,875 high-resolution frames taken by three drones.
An agent sharing network (ASNet) is proposed, based on self-supervised template sharing and view-aware fusion of the target from multiple drones.
arXiv Detail & Related papers (2020-03-16T03:27:04Z)
- Active Perception with A Monocular Camera for Multiscopic Vision [50.370074098619185]
We design a multiscopic vision system that utilizes a low-cost monocular RGB camera to acquire accurate depth estimation for robotic applications.
Unlike multi-view stereo with images captured at unconstrained camera poses, the proposed system actively controls a robot arm with a mounted camera to capture a sequence of images in horizontally or vertically aligned positions with the same parallax.
arXiv Detail & Related papers (2020-01-22T08:46:45Z)
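
The Synthetic Aperture Anomaly Imaging entry above compares two orderings of the same two steps: detecting anomalies per view and then integrating the anomaly maps, versus integrating the views first and detecting anomalies once. Below is a hedged sketch of both variants, reusing a per-image detector such as the rx_color_anomaly function from the earlier sketch; the names and structure are assumptions, not the paper's code.

```python
import numpy as np

def detect_then_integrate(frames, detector):
    """Run the anomaly detector on every registered view, then average the maps."""
    maps = [detector(f) for f in frames]
    return np.mean(np.stack(maps, axis=0), axis=0)

def integrate_then_detect(frames, detector):
    """Average the registered views into an integral image, then detect once."""
    integral = np.mean(np.stack([f.astype(np.float64) for f in frames], axis=0), axis=0)
    return detector(integral)

# According to the entry above, the detect-then-integrate ordering is reported
# to be the more effective of the two; both variants assume the views are
# registered to a common focal plane, as in Airborne Optical Sectioning.
```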
This list is automatically generated from the titles and abstracts of the papers on this site.