Image and AIS Data Fusion Technique for Maritime Computer Vision
Applications
- URL: http://arxiv.org/abs/2312.05270v1
- Date: Thu, 7 Dec 2023 20:54:49 GMT
- Title: Image and AIS Data Fusion Technique for Maritime Computer Vision
Applications
- Authors: Emre Gülsoylu, Paul Koch, Mert Yıldız, Manfred Constapel and
André Peter Kelm
- Abstract summary: We develop a technique that fuses Automatic Identification System (AIS) data with vessels detected in images to create datasets.
Our approach associates detected ships to their corresponding AIS messages by estimating distance and azimuth.
This technique is useful for creating datasets for waterway traffic management, encounter detection, and surveillance.
- Score: 1.482087972733629
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning object detection methods, like YOLOv5, are effective in
identifying maritime vessels but often lack detailed information important for
practical applications. In this paper, we addressed this problem by developing
a technique that fuses Automatic Identification System (AIS) data with vessels
detected in images to create datasets. This fusion enriches ship images with
vessel-related data, such as type, size, speed, and direction. Our approach
associates detected ships to their corresponding AIS messages by estimating
distance and azimuth using a homography-based method suitable for both fixed
and periodically panning cameras. This technique is useful for creating
datasets for waterway traffic management, encounter detection, and
surveillance. We introduce a novel dataset comprising images taken in
various weather conditions and their corresponding AIS messages. This dataset
offers a stable baseline for refining vessel detection algorithms and
trajectory prediction models. To assess our method's performance, we manually
annotated a portion of this dataset. The results show an overall
association accuracy of 74.76%, with the association accuracy for fixed
cameras reaching 85.06%. This demonstrates the potential of our approach in
creating datasets for vessel detection, pose estimation and auto-labelling
pipelines.
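As a rough illustration of the homography-based association idea described in the abstract (not the authors' implementation), the sketch below maps each detection's bottom-centre pixel onto the ground plane with a hypothetical homography `H`, derives range and azimuth relative to the camera, and greedily matches the detection to the nearest AIS position; all function names, frames, and thresholds are illustrative assumptions:

```python
import math
import numpy as np

def image_to_ground(H, u, v):
    """Project an image point to ground-plane coordinates via homography H."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w  # metres east/north of the camera (assumed frame)

def associate(detections, ais_targets, H, max_dist=50.0):
    """Greedily match each detected bbox to the closest AIS target.

    detections: list of (u, v) bottom-centre pixels of vessel bboxes.
    ais_targets: list of (mmsi, east_m, north_m) positions already
                 converted into the camera's local frame (hypothetical input).
    Returns {pixel: (mmsi, range_m, azimuth_deg)}.
    """
    matches = {}
    for u, v in detections:
        e, n = image_to_ground(H, u, v)
        det_rng = math.hypot(e, n)
        det_azi = math.degrees(math.atan2(e, n)) % 360.0  # clockwise from north
        best, best_cost = None, max_dist
        for mmsi, ae, an in ais_targets:
            cost = math.hypot(ae - e, an - n)  # planar distance to AIS fix
            if cost < best_cost:
                best, best_cost = mmsi, cost
        if best is not None:
            matches[(u, v)] = (best, det_rng, det_azi)
    return matches
```

In practice `H` would be calibrated per camera pose (hence the paper's handling of periodically panning cameras), and a one-to-one assignment (e.g. Hungarian matching) would replace the greedy loop.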
Related papers
- Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for
Advanced Object Detection [55.2480439325792]
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (mAP) of approximately 45.7%, a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
arXiv Detail & Related papers (2023-11-21T19:49:13Z)
- Improving Online Lane Graph Extraction by Object-Lane Clustering [106.71926896061686]
We propose an architecture and loss formulation to improve the accuracy of local lane graph estimates.
The proposed method learns to assign the objects to centerlines by considering the centerlines as cluster centers.
We show that our method can achieve significant performance improvements by using the outputs of existing 3D object detection methods.
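The object-to-centerline assignment summarised above can be sketched as a nearest-cluster-centre rule, with each centerline polyline acting as a cluster centre; this is a minimal geometric stand-in, not the paper's learned assignment:

```python
import numpy as np

def point_to_segment(p, a, b):
    """Euclidean distance from point p to the line segment a-b."""
    ap, ab = p - a, b - a
    t = np.clip(np.dot(ap, ab) / (np.dot(ab, ab) + 1e-9), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def assign_objects(objects, centerlines):
    """Assign each object (x, y) to the index of the nearest centerline.

    objects: list of 2-D points. centerlines: list of polylines, each a
    list of 2-D vertices. Returns one centerline index per object.
    """
    labels = []
    for p in map(np.asarray, objects):
        dists = [min(point_to_segment(p, np.asarray(cl[i]), np.asarray(cl[i + 1]))
                     for i in range(len(cl) - 1))
                 for cl in centerlines]
        labels.append(int(np.argmin(dists)))
    return labels
```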
arXiv Detail & Related papers (2023-07-20T15:21:28Z)
- Unlocking the Use of Raw Multispectral Earth Observation Imagery for Onboard Artificial Intelligence [3.3810628880631226]
This work presents a novel methodology to automate the creation of datasets for the detection of target events.
The presented approach first processes the raw data by applying a pipeline consisting of spatial band registration and georeferencing.
It detects the target events by leveraging event-specific state-of-the-art algorithms on the Level-1C products.
We apply the proposed methodology to realize THRawS (Thermal Hotspots in Raw Sentinel-2 data), the first dataset of Sentinel-2 raw data containing warm thermal hotspots.
arXiv Detail & Related papers (2023-05-12T09:54:21Z)
- Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset from the harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The focus of this paper is not only to capture both temporal and spatial data diversities but also to present the impact of harsh conditions on captured data.
arXiv Detail & Related papers (2023-04-27T20:21:18Z)
- Asynchronous Trajectory Matching-Based Multimodal Maritime Data Fusion for Vessel Traffic Surveillance in Inland Waterways [12.7548343467665]
The automatic identification system (AIS) and video cameras have been widely exploited for vessel traffic surveillance in inland waterways.
We propose a deep learning-enabled asynchronous trajectory matching method (named DeepSORVF) to fuse the AIS-based vessel information with the corresponding visual targets.
In addition, by combining the AIS- and video-based movement features, we also present a prior knowledge-driven anti-occlusion method to yield accurate and robust vessel tracking results.
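Asynchronous AIS/video trajectory matching of the kind this summary describes can be sketched as follows: AIS fixes arrive at sparse timestamps, so they are interpolated to the video frame times and the AIS track with the smallest mean distance to the visual track wins. This is a simplified stand-in, not DeepSORVF itself, and assumes both tracks are already in a shared metric frame:

```python
import numpy as np

def interp_track(times, track, query_times):
    """Linearly interpolate a timestamped (x, y) trajectory at query times."""
    t = np.asarray(times, float)
    xy = np.asarray(track, float)
    qx = np.interp(query_times, t, xy[:, 0])
    qy = np.interp(query_times, t, xy[:, 1])
    return np.stack([qx, qy], axis=1)

def match_tracks(vis_times, vis_track, ais_tracks, max_mean_dist=30.0):
    """Pick the AIS track closest, on average, to the visual track.

    ais_tracks: {mmsi: (timestamps, positions)} with sparse AIS fixes.
    Returns (best_mmsi_or_None, mean_distance).
    """
    vis = np.asarray(vis_track, float)
    best, best_cost = None, max_mean_dist
    for mmsi, (t, xy) in ais_tracks.items():
        pred = interp_track(t, xy, vis_times)          # AIS resampled to frame times
        cost = float(np.mean(np.linalg.norm(pred - vis, axis=1)))
        if cost < best_cost:
            best, best_cost = mmsi, cost
    return best, best_cost
```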
arXiv Detail & Related papers (2023-02-22T11:00:34Z)
- SimuShips -- A High Resolution Simulation Dataset for Ship Detection with Precise Annotations [0.0]
State-of-the-art obstacle detection algorithms are based on convolutional neural networks (CNNs).
SimuShips is a publicly available simulation-based dataset for maritime environments.
arXiv Detail & Related papers (2022-09-22T07:33:31Z)
- A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z)
- Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection [58.81316192862618]
Two critical sensors for 3D perception in autonomous driving are the camera and the LiDAR.
Fusing these two modalities can significantly boost the performance of 3D perception models.
We benchmark the state-of-the-art fusion methods for the first time.
arXiv Detail & Related papers (2022-05-30T09:35:37Z)
- Automated System for Ship Detection from Medium Resolution Satellite Optical Imagery [3.190574537106449]
We present a ship detection pipeline for low-cost medium resolution satellite optical imagery obtained from ESA Sentinel-2 and Planet Labs Dove constellations.
This optical satellite imagery is readily available for any place on Earth and underutilized in the maritime domain, compared to existing solutions based on synthetic-aperture radar (SAR) imagery.
arXiv Detail & Related papers (2021-04-28T15:06:18Z)
- Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise existing in Radar measurements is one of the main key reasons that prevents one from applying the existing fusion methods.
The experiments are conducted on the nuScenes dataset, which is one of the first datasets which features Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
- Semantic sensor fusion: from camera to sparse lidar information [7.489722641968593]
This paper presents an approach to fuse different sensory information, Light Detection and Ranging (lidar) scans and camera images.
The transference of semantic information between the labelled image and the lidar point cloud is performed in four steps.
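The core of such camera-to-lidar label transfer is projecting each lidar point into the segmented image and reading off the pixel's class. The sketch below shows only that projection step under a standard pinhole model, with a hypothetical intrinsic matrix `K` and points already transformed into the camera frame (the paper's four-step procedure is not reproduced here):

```python
import numpy as np

def label_lidar_points(points_cam, K, seg_mask):
    """Transfer per-pixel semantic labels to lidar points.

    points_cam: (N, 3) lidar points already in the camera frame (z forward).
    K: 3x3 pinhole intrinsic matrix. seg_mask: (H, W) integer label image.
    Returns an (N,) label array; -1 for points behind the camera or
    projecting outside the image.
    """
    pts = np.asarray(points_cam, float)
    labels = np.full(len(pts), -1, dtype=int)
    h, w = seg_mask.shape
    for i, (x, y, z) in enumerate(pts):
        if z <= 0:
            continue  # behind the camera plane, cannot be seen
        u, v, _ = K @ np.array([x / z, y / z, 1.0])  # perspective projection
        u, v = int(round(u)), int(round(v))
        if 0 <= u < w and 0 <= v < h:
            labels[i] = seg_mask[v, u]  # read class at the projected pixel
    return labels
```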
arXiv Detail & Related papers (2020-03-04T03:09:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.