Computer Vision for Carriers: PATRIOT
- URL: http://arxiv.org/abs/2311.15914v1
- Date: Mon, 27 Nov 2023 15:23:25 GMT
- Title: Computer Vision for Carriers: PATRIOT
- Authors: Ari Goodman, Gurpreet Singh, James Hing, Ryan O'Shea
- Abstract summary: PATRIOT is a prototype system which takes existing camera feeds, calculates aircraft poses, and updates a virtual Ouija board interface with the current status of the assets.
Software was tested with synthetic and real-world data and was able to accurately extract the pose of assets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deck tracking performed on carriers currently involves a team of sailors
manually identifying aircraft and updating a digital user interface called the
Ouija Board. Improvements to the deck tracking process would increase Sortie
Generation Rates, so applying automation is seen as a critical method for
improving the process. However, the requirements on a
carrier ship do not allow for the installation of hardware-based location
sensing technologies like Global Positioning System (GPS) sensors. PATRIOT
(Panoramic Asset Tracking of Real-Time Information for the Ouija Tabletop) is a
research effort and proposed solution to performing deck tracking with passive
sensing and without the need for GPS sensors. PATRIOT is a prototype system
which takes existing camera feeds, calculates aircraft poses, and updates a
virtual Ouija board interface with the current status of the assets. PATRIOT
would allow for faster, more accurate, and less laborious asset tracking for
aircraft, people, and support equipment. PATRIOT is anticipated to benefit the
warfighter by reducing cognitive workload, reducing manning requirements,
collecting data to improve logistics, and enabling an automation gateway for
future efforts to improve efficiency and safety. The authors have developed and
tested algorithms to estimate asset poses in real time, including OpenPifPaf,
High-Resolution Network (HRNet), HigherHRNet (HHRNet), Faster R-CNN, and an
in-house encoder-decoder network. The software was tested
with synthetic and real-world data and was able to accurately extract the pose
of assets. Fusion, tracking, and real-world generality are planned to be
improved to ensure a successful transition to the fleet.
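The snippet below is a minimal sketch of the camera-to-Ouija-board data flow described in the abstract. The pose estimator is treated as a black box (the paper evaluates OpenPifPaf, HRNet, HigherHRNet, Faster R-CNN, and an in-house encoder-decoder network); the `estimate_keypoints`, `project_to_deck`, and `board.update_asset` names are hypothetical placeholders, and the homography-based projection to deck coordinates is an assumption for illustration, not the paper's stated method.

```python
# Sketch only: map detected aircraft keypoints from a fixed deck camera into
# top-down deck coordinates and push them to a virtual Ouija board.
# All interfaces below are hypothetical; PATRIOT's actual APIs are not public.
import cv2
import numpy as np

# Homography from image pixels to deck coordinates, assumed pre-calibrated
# per camera. The identity matrix here is only a placeholder value.
H_IMAGE_TO_DECK = np.eye(3, dtype=np.float32)

def project_to_deck(points_xy: np.ndarray) -> np.ndarray:
    """Apply the calibrated homography to an (N, 2) array of image points."""
    pts = points_xy.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(pts, H_IMAGE_TO_DECK).reshape(-1, 2)

def track_frame(frame, pose_model, board):
    """One loop step: detect keypoints, project to the deck, update the board."""
    for det in pose_model.estimate_keypoints(frame):     # hypothetical API
        nose, tail = det["nose_xy"], det["tail_xy"]      # two reference keypoints
        deck_pts = project_to_deck(np.array([nose, tail]))
        centre = deck_pts.mean(axis=0)
        dx, dy = deck_pts[0] - deck_pts[1]
        heading = np.degrees(np.arctan2(dy, dx))         # nose-to-tail orientation
        board.update_asset(det["asset_id"], x=centre[0], y=centre[1], heading=heading)
```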
Related papers
- A Cross-Scene Benchmark for Open-World Drone Active Tracking [54.235808061746525]
Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations.
We propose a unified cross-scene cross-domain benchmark for open-world drone active tracking called DAT.
We also propose a reinforcement learning-based drone tracking method called R-VAT.
arXiv Detail & Related papers (2024-12-01T09:37:46Z)
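For context on the active-tracking setting above, the sketch below shows the generic rollout loop a learned tracker such as R-VAT operates in: the agent receives an image observation, the policy outputs a motion command, and the environment rewards keeping the target in view. The `env` and `policy` interfaces are hypothetical, gym-style placeholders, not the DAT benchmark's API.

```python
# Sketch of a gym-style active-tracking rollout; all interfaces are hypothetical.
def rollout(env, policy, max_steps=500):
    """Run one tracking episode and return the cumulative reward."""
    obs = env.reset()                 # RGB observation from the drone camera
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy.act(obs)      # e.g. forward velocity and yaw rate
        obs, reward, done, info = env.step(action)
        total_reward += reward        # reward is high when the target stays centred
        if done:                      # target lost or episode finished
            break
    return total_reward
```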
- LIFT OFF: LoRaWAN Installation and Fiducial Tracking Operations for the Flightline of the Future [0.0]
LIFT OFF successfully provided a real-time updating map of all tracked assets, using GPS sensors for people and support equipment and visual fiducials for aircraft.
Future follow-on work is anticipated to apply the technology to other environments including carriers and amphibious assault ships.
arXiv Detail & Related papers (2023-11-27T15:22:17Z)
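LIFT OFF tracks aircraft with visual fiducials; as a minimal illustration of that idea, the snippet below detects fiducials with OpenCV's ArUco module (the 4.7+ `ArucoDetector` interface). The 4x4 marker dictionary and the image source are assumptions; the paper does not state which marker family it uses.

```python
# Illustrative only: detect ArUco fiducials in a single camera frame.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("flightline_frame.png")           # placeholder image path
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _rejected = detector.detectMarkers(gray)

if ids is not None:
    for marker_id, quad in zip(ids.flatten(), corners):
        centre = quad.reshape(4, 2).mean(axis=0)      # pixel centre of the marker
        print(f"fiducial {marker_id} at {centre}")
```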
- MSight: An Edge-Cloud Infrastructure-based Perception System for Connected Automated Vehicles [58.461077944514564]
This paper presents MSight, a cutting-edge roadside perception system specifically designed for automated vehicles.
MSight offers real-time vehicle detection, localization, tracking, and short-term trajectory prediction.
Evaluations underscore the system's capability to maintain lane-level accuracy with minimal latency.
arXiv Detail & Related papers (2023-10-08T21:32:30Z)
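MSight's short-term trajectory predictor is not detailed in the summary above; purely as a stand-in illustration, the sketch below extrapolates a track with a constant-velocity model, a common baseline in roadside perception pipelines.

```python
# Constant-velocity extrapolation of a tracked vehicle; illustration only.
import numpy as np

def predict_trajectory(track_xy: np.ndarray, dt: float, horizon_steps: int) -> np.ndarray:
    """track_xy: (T, 2) past positions sampled every dt seconds.
    Returns (horizon_steps, 2) future positions under constant velocity."""
    velocity = (track_xy[-1] - track_xy[-2]) / dt        # last observed velocity
    steps = np.arange(1, horizon_steps + 1).reshape(-1, 1)
    return track_xy[-1] + steps * velocity * dt

# Example: a vehicle moving east at ~10 m/s, predicted 2 s ahead at 10 Hz.
history = np.array([[0.0, 0.0], [1.0, 0.05], [2.0, 0.1]])
print(predict_trajectory(history, dt=0.1, horizon_steps=20))
```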
- VBSF-TLD: Validation-Based Approach for Soft Computing-Inspired Transfer Learning in Drone Detection [0.0]
This paper presents a transfer-based drone detection scheme, which forms an integral part of a computer vision-based module.
By harnessing the knowledge of pre-trained models from a related domain, transfer learning enables improved results even with limited training data.
Notably, the scheme's effectiveness is highlighted by its IOU-based validation results.
arXiv Detail & Related papers (2023-06-11T22:30:23Z)
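The IOU-based validation mentioned above compares predicted and ground-truth boxes by intersection over union; a standard implementation is sketched below, with the box format assumed to be [x1, y1, x2, y2].

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes in [x1, y1, x2, y2] form."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

# A detection is typically accepted when IoU with ground truth exceeds a threshold such as 0.5.
print(iou([0, 0, 10, 10], [5, 5, 15, 15]))   # 25 / 175 ≈ 0.143
```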
- Siamese Object Tracking for Unmanned Aerial Vehicle: A Review and Comprehensive Analysis [15.10348491862546]
Unmanned aerial vehicle (UAV)-based visual object tracking has enabled a wide range of applications.
Siamese networks shine in visual object tracking with their promising balance of accuracy, robustness, and speed.
arXiv Detail & Related papers (2022-05-09T13:53:34Z)
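The core operation in SiamFC-style trackers surveyed above is cross-correlating template features against search-region features; a minimal PyTorch sketch of that operation follows. The backbone is a toy stand-in, not a network from the review.

```python
# Minimal Siamese cross-correlation: the template embedding acts as a convolution
# kernel slid over the search-region embedding. Backbone is illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(                 # toy feature extractor
    nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
)

template = torch.randn(1, 3, 127, 127)    # target patch from the first frame
search = torch.randn(1, 3, 255, 255)      # search region in the current frame

z = backbone(template)                    # (1, 64, hz, wz)
x = backbone(search)                      # (1, 64, hx, wx)
response = F.conv2d(x, z)                 # (1, 1, hx-hz+1, wx-wz+1) similarity map
peak = torch.nonzero(response[0, 0] == response.max())[0]
print("peak response at", peak.tolist())  # peak location indicates the target position
```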
- Scalable and Real-time Multi-Camera Vehicle Detection, Re-Identification, and Tracking [58.95210121654722]
We propose a real-time city-scale multi-camera vehicle tracking system that handles real-world, low-resolution CCTV instead of idealized and curated video streams.
Our method is ranked among the top five performers on the public leaderboard.
arXiv Detail & Related papers (2022-04-15T12:47:01Z)
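Cross-camera re-identification in systems like the one above typically matches appearance embeddings between cameras; the sketch below pairs tracks by cosine similarity with the Hungarian algorithm. The embedding dimension and similarity threshold are illustrative assumptions, not values from the paper.

```python
# Match vehicle tracks across two cameras by appearance-embedding similarity.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_tracks(emb_cam_a: np.ndarray, emb_cam_b: np.ndarray, min_sim: float = 0.6):
    """emb_cam_*: (N, D) L2-normalised embeddings. Returns a list of (i, j) matches."""
    sim = emb_cam_a @ emb_cam_b.T                      # cosine similarity matrix
    rows, cols = linear_sum_assignment(-sim)           # maximise total similarity
    return [(i, j) for i, j in zip(rows, cols) if sim[i, j] >= min_sim]

a = np.random.randn(4, 128); a /= np.linalg.norm(a, axis=1, keepdims=True)
b = np.random.randn(5, 128); b /= np.linalg.norm(b, axis=1, keepdims=True)
print(match_tracks(a, b))
```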
- Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge TPU [58.720142291102135]
In this paper, we propose pose estimation software that exploits neural network architectures.
We show how low-power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z)
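A common pattern in satellite pose estimation, and one reasonable reading of the summary above (the paper's exact pipeline is not shown here), is to regress 2D keypoints with a network and recover the 6-DoF pose with a PnP solver. The snippet sketches that second step with OpenCV's solvePnP; the keypoint coordinates and camera intrinsics are placeholder values.

```python
# Recover rotation and translation from known 3D keypoints on a spacecraft model
# and their 2D detections in the image. All numeric values are placeholders.
import cv2
import numpy as np

object_pts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.0, 0.5, 0.0],
                       [0.0, 0.0, 0.5], [0.5, 0.5, 0.0], [0.5, 0.0, 0.5]], dtype=np.float32)
image_pts = np.array([[320.0, 240.0], [400.0, 238.0], [322.0, 170.0],
                      [318.0, 300.0], [398.0, 172.0], [402.0, 298.0]], dtype=np.float32)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)              # rotation matrix of the target
    print("translation:", tvec.ravel())
```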
- ADAPT: An Open-Source sUAS Payload for Real-Time Disaster Prediction and Response with AI [55.41644538483948]
Small unmanned aircraft systems (sUAS) are becoming prominent components of many humanitarian assistance and disaster response operations.
We have developed the free and open-source ADAPT multi-mission payload for deploying real-time AI and computer vision onboard a sUAS.
We demonstrate the example mission of real-time, in-flight ice segmentation to monitor river ice state and provide timely predictions of catastrophic flooding events.
arXiv Detail & Related papers (2022-01-25T14:51:19Z)
- Autonomous Aerial Robot for High-Speed Search and Intercept Applications [86.72321289033562]
A fully-autonomous aerial robot for high-speed object grasping has been proposed.
As an additional sub-task, our system is able to autonomously pierce balloons located on poles close to the surface.
Our approach has been validated in a challenging international competition and has shown outstanding results.
arXiv Detail & Related papers (2021-12-10T11:49:51Z)
- Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z)
- High-Speed Robot Navigation using Predicted Occupancy Maps [0.0]
We study algorithmic approaches that allow the robot to predict spaces extending beyond the sensor horizon for robust planning at high speeds.
We accomplish this using a generative neural network trained from real-world data without requiring human annotated labels.
We extend our existing control algorithms to support leveraging the predicted spaces to improve collision-free planning and navigation at high speeds.
arXiv Detail & Related papers (2020-12-22T16:25:12Z)
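The summary above describes a generative network that extends an observed occupancy map beyond the sensor horizon. The sketch below is a tiny encoder-decoder of that general shape, mapping a partially observed grid to a completed one; the architecture and grid size are assumptions for illustration, not the paper's model.

```python
# Toy encoder-decoder that maps a partially observed occupancy grid (1 x 64 x 64,
# unknown cells set to 0.5) to a predicted full grid. Sizes are illustrative only.
import torch
import torch.nn as nn

class OccupancyPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),            # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),           # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),# 32 -> 64
        )

    def forward(self, grid):
        return self.decoder(self.encoder(grid))

model = OccupancyPredictor()
observed = torch.full((1, 1, 64, 64), 0.5)   # fully unobserved input for the demo
predicted = model(observed)                  # occupancy probabilities in [0, 1]
print(predicted.shape)                       # torch.Size([1, 1, 64, 64])
```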
- AutoSOS: Towards Multi-UAV Systems Supporting Maritime Search and Rescue with Lightweight AI and Edge Computing [27.15999421608932]
This paper presents the research directions of the AutoSOS project, where we work on the development of an autonomous multi-robot search and rescue assistance platform.
The platform is meant to perform reconnaissance missions for initial assessment of the environment using novel adaptive deep learning algorithms.
When drones find potential objects, they will send their sensor data to the vessel to verify the findings with increased accuracy.
arXiv Detail & Related papers (2020-05-07T12:22:15Z)