LaRS: A Diverse Panoptic Maritime Obstacle Detection Dataset and
Benchmark
- URL: http://arxiv.org/abs/2308.09618v1
- Date: Fri, 18 Aug 2023 15:21:15 GMT
- Title: LaRS: A Diverse Panoptic Maritime Obstacle Detection Dataset and
Benchmark
- Authors: Lojze Žust, Janez Perš, Matej Kristan
- Abstract summary: We present the first maritime panoptic obstacle detection benchmark LaRS, featuring scenes from Lakes, Rivers and Seas.
LaRS is composed of over 4000 per-pixel labeled key frames with nine preceding frames to allow utilization of the temporal texture.
We report the results of 27 semantic and panoptic segmentation methods, along with several performance insights and future research directions.
- Score: 9.864996020621701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The progress in maritime obstacle detection is hindered by the lack of a
diverse dataset that adequately captures the complexity of general maritime
environments. We present the first maritime panoptic obstacle detection
benchmark LaRS, featuring scenes from Lakes, Rivers and Seas. Our major
contribution is the new dataset, which boasts the largest diversity in
recording locations, scene types, obstacle classes, and acquisition conditions
among the related datasets. LaRS is composed of over 4000 per-pixel labeled key
frames with nine preceding frames to allow utilization of the temporal texture,
amounting to over 40k frames. Each key frame is annotated with 8 thing classes, 3
stuff classes and 19 global scene attributes. We report the results of 27 semantic
and panoptic segmentation methods, along with several performance insights and
future research directions. To enable objective evaluation, we have implemented
an online evaluation server. The LaRS dataset, evaluation toolkit and benchmark
are publicly available at: https://lojzezust.github.io/lars-dataset
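Since LaRS is a panoptic benchmark, segmentation results of this kind are commonly summarized with the panoptic quality (PQ) metric. The following is a minimal illustrative Python sketch of per-class PQ under the usual convention that a predicted segment matches a ground-truth segment when their IoU exceeds 0.5; it is not the official LaRS evaluation toolkit, and all function and variable names are placeholders.

def iou(a, b):
    # Intersection over union of two sets of pixel indices.
    inter = len(a & b)
    union = len(a | b)
    return inter / union if union else 0.0

def panoptic_quality(gt_segments, pred_segments):
    # gt_segments / pred_segments: {segment_id: set of pixel indices} for one class.
    # PQ = (sum of IoUs over matched pairs) / (TP + 0.5*FP + 0.5*FN).
    matched_pred, iou_sum, tp = set(), 0.0, 0
    for g_id, g_px in gt_segments.items():
        for p_id, p_px in pred_segments.items():
            if p_id in matched_pred:
                continue
            score = iou(g_px, p_px)
            if score > 0.5:  # IoU > 0.5 makes the match unique
                matched_pred.add(p_id)
                iou_sum += score
                tp += 1
                break
    fp = len(pred_segments) - len(matched_pred)  # unmatched predictions
    fn = len(gt_segments) - tp                   # unmatched ground-truth segments
    denom = tp + 0.5 * fp + 0.5 * fn
    return iou_sum / denom if denom else 0.0

# Toy example: one true positive (IoU = 0.75), one false positive, one false negative.
gt = {1: {0, 1, 2, 3}, 2: {10, 11}}
pred = {7: {0, 1, 2}, 8: {20, 21}}
print(panoptic_quality(gt, pred))  # 0.75 / (1 + 0.5 + 0.5) = 0.375

For the exact matching rules, class definitions and any maritime-specific adjustments, the evaluation toolkit linked above is the authoritative reference.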
Related papers
- Indiscernible Object Counting in Underwater Scenes [91.86044762367945]
Indiscernible object counting aims to count objects that are visually blended into their surroundings.
We present a large-scale dataset IOCfish5K which contains a total of 5,637 high-resolution images and 659,024 annotated center points.
arXiv Detail & Related papers (2023-04-23T15:09:02Z) - Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation
for autonomous vehicles [63.20765930558542]
3D semantic data are useful for core perception tasks such as obstacle detection and ego-vehicle localization.
We propose a new dataset, Navya 3D Segmentation (Navya3DSeg), with a diverse label space corresponding to a large-scale, production-grade operational domain.
It contains 23 labeled sequences and 25 supplementary sequences without labels, designed to explore self-supervised and semi-supervised semantic segmentation benchmarks on point clouds.
arXiv Detail & Related papers (2023-02-16T13:41:19Z) - MOSE: A New Dataset for Video Object Segmentation in Complex Scenes [106.64327718262764]
Video object segmentation (VOS) aims at segmenting a particular object throughout the entire video clip sequence.
The state-of-the-art VOS methods have achieved excellent performance (e.g., 90+% J&F) on existing datasets.
We collect a new VOS dataset called coMplex video Object SEgmentation (MOSE) to study the tracking and segmentation of objects in complex environments.
arXiv Detail & Related papers (2023-02-03T17:20:03Z) - AVisT: A Benchmark for Visual Object Tracking in Adverse Visibility [125.77396380698639]
AVisT is a benchmark for visual tracking in diverse scenarios with adverse visibility.
AVisT comprises 120 challenging sequences with 80k annotated frames, spanning 18 diverse scenarios.
We benchmark 17 popular and recent trackers on AVisT with detailed analysis of their tracking performance across attributes.
arXiv Detail & Related papers (2022-08-14T17:49:37Z) - KOLOMVERSE: Korea open large-scale image dataset for object detection in the maritime universe [0.5732204366512352]
We present KOLOMVERSE, an open large-scale image dataset for object detection in the maritime domain by KRISO.
We collected 5,845 hours of video data captured from 21 territorial waters of South Korea.
The dataset has images of 3840×2160 pixels and to our knowledge, it is by far the largest publicly available dataset for object detection in the maritime domain.
arXiv Detail & Related papers (2022-06-20T16:45:12Z) - Visible-Thermal UAV Tracking: A Large-Scale Benchmark and New Baseline [80.13652104204691]
In this paper, we construct a large-scale benchmark with high diversity for visible-thermal UAV tracking (VTUAV)
We provide a coarse-to-fine attribute annotation, where frame-level attributes are provided to exploit the potential of challenge-specific trackers.
In addition, we design a new RGB-T baseline, named Hierarchical Multi-modal Fusion Tracker (HMFT), which fuses RGB-T data in various levels.
arXiv Detail & Related papers (2022-04-08T15:22:33Z) - The Marine Debris Dataset for Forward-Looking Sonar Semantic
Segmentation [5.1627181881873945]
This paper presents a novel dataset for marine debris segmentation collected using a Forward Looking Sonar (FLS)
The objects used to produce this dataset include typical household marine debris and distractor marine objects.
The performance of state-of-the-art semantic segmentation architectures with a variety of encoders has been analyzed on this dataset.
arXiv Detail & Related papers (2021-08-15T19:29:23Z) - ABOShips -- An Inshore and Offshore Maritime Vessel Detection Dataset
with Precise Annotations [0.17205106391379021]
The scarcity of domain-specific datasets also affects inshore and offshore maritime vessel detection.
We collected a dataset of images of maritime vessels taking into account different factors.
Vessel instances (including 9 types of vessels), seamarks and miscellaneous floaters were precisely annotated.
We evaluated the out-of-the-box performance of four prevalent object detection algorithms.
arXiv Detail & Related papers (2021-02-11T07:05:33Z) - A Realistic Fish-Habitat Dataset to Evaluate Algorithms for Underwater
Visual Analysis [2.6476746128312194]
We present DeepFish as a benchmark suite with a large-scale dataset to train and test methods for several computer vision tasks.
The dataset consists of approximately 40 thousand images collected underwater from 20 habitats in the marine environments of tropical Australia.
Our experiments provide an in-depth analysis of the dataset characteristics, and the performance evaluation of several state-of-the-art approaches.
arXiv Detail & Related papers (2020-08-28T12:20:59Z) - TAO: A Large-Scale Benchmark for Tracking Any Object [95.87310116010185]
The Tracking Any Object (TAO) dataset consists of 2,907 high-resolution videos, captured in diverse environments, which are half a minute long on average.
We ask annotators to label objects that move at any point in the video, and give names to them post factum.
Our vocabulary is both significantly larger and qualitatively different from existing tracking datasets.
arXiv Detail & Related papers (2020-05-20T21:07:28Z)