Are we ready for beyond-application high-volume data? The Reeds robot
perception benchmark dataset
- URL: http://arxiv.org/abs/2109.08250v1
- Date: Thu, 16 Sep 2021 23:21:42 GMT
- Title: Are we ready for beyond-application high-volume data? The Reeds robot
perception benchmark dataset
- Authors: Ola Benderius and Christian Berger and Krister Blanch
- Abstract summary: This paper presents a dataset, called Reeds, for research on robot perception algorithms.
The dataset aims to provide demanding benchmark opportunities for algorithms, rather than providing an environment for testing application-specific solutions.
- Score: 3.781421673607643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a dataset, called Reeds, for research on robot perception
algorithms. The dataset aims to provide demanding benchmark opportunities for
algorithms, rather than providing an environment for testing
application-specific solutions. A boat was selected as a logging platform in
order to provide highly dynamic kinematics. The sensor package includes six
high-performance vision sensors, two long-range lidars, radar, as well as GNSS
and an IMU. The spatiotemporal resolution of the sensors was maximized in order to
provide large variations and flexibility in the data, offering evaluation at a
large number of different resolution presets based on the resolution found in
other datasets. Reeds also provides a means for fair and reproducible comparison
of algorithms by running all evaluations on a common server backend. As the
dataset contains massive-scale data, the evaluation principle also serves as a
way to avoid moving data unnecessarily.
It was also found that naive evaluation of algorithms, where each evaluation
is computed sequentially, was not practical, as fetching and decoding every
frame separately for each algorithm would not scale. Instead, each frame is
decoded only once and then fed to all algorithms in parallel, including
GPU-based algorithms.
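The decode-once principle described above can be sketched as follows. This is a minimal illustration, not the Reeds backend's actual API: `decode`, `frames`, and the algorithm callables are all placeholder names, and a thread pool stands in for whatever scheduling the real server uses.

```python
# Sketch of decode-once evaluation: each frame is fetched and decoded a
# single time, then handed to every algorithm concurrently, instead of
# re-decoding the frame once per algorithm.
from concurrent.futures import ThreadPoolExecutor

def evaluate_all(frames, algorithms, decode):
    """Run every algorithm on every frame, decoding each frame only once.

    frames      -- iterable of raw (encoded) frames
    algorithms  -- dict mapping algorithm name to a callable on decoded frames
    decode      -- function performing the expensive fetch-and-decode step
    """
    results = {name: [] for name in algorithms}
    with ThreadPoolExecutor(max_workers=len(algorithms)) as pool:
        for frame in frames:
            decoded = decode(frame)  # the single decode per frame
            # dispatch the shared decoded frame to all algorithms in parallel
            futures = {name: pool.submit(algo, decoded)
                       for name, algo in algorithms.items()}
            for name, fut in futures.items():
                results[name].append(fut.result())
    return results
```

In this sketch the decoded frame is shared by reference, so the per-frame cost is one decode plus the algorithms' own compute, which is the scaling behavior the abstract describes.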
Related papers
- A Novel Adaptive Fine-Tuning Algorithm for Multimodal Models: Self-Optimizing Classification and Selection of High-Quality Datasets in Remote Sensing [46.603157010223505]
We propose an adaptive fine-tuning algorithm for multimodal large models.
We train the model on two 3090 GPUs using one-third of the GeoChat multimodal remote sensing dataset.
The model achieved scores of 89.86 and 77.19 on the UCMerced and AID evaluation datasets.
arXiv Detail & Related papers (2024-09-20T09:19:46Z) - V-DETR: DETR with Vertex Relative Position Encoding for 3D Object
Detection [73.37781484123536]
We introduce a highly performant 3D object detector for point clouds using the DETR framework.
To address the limitation, we introduce a novel 3D Vertex Relative Position Encoding (3DV-RPE) method.
We show exceptional results on the challenging ScanNetV2 benchmark.
arXiv Detail & Related papers (2023-08-08T17:14:14Z) - Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
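RFAD's random feature approximation targets the NNGP kernel specifically, whose construction is not reproduced here. As a generic illustration of the random-feature idea it builds on, here is the classic random Fourier features map for the RBF kernel; the function name and parameters are illustrative, not from the paper:

```python
import numpy as np

def random_fourier_features(X, n_features=500, gamma=1.0, seed=0):
    """Map X (n x d) into a feature space where inner products approximate
    the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).

    Working with these explicit features replaces an n x n kernel matrix
    with an n x n_features one, which is the source of the speedup that
    random-feature methods offer over exact kernel computations.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # frequencies sampled from the kernel's spectral density (Gaussian)
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

The approximation error shrinks roughly as 1/sqrt(n_features), so the feature count trades accuracy against compute.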
arXiv Detail & Related papers (2022-10-21T15:56:13Z) - Towards Automated Imbalanced Learning with Deep Hierarchical
Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
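AutoSMOTE itself learns its sampling decisions with hierarchical reinforcement learning, which is beyond a short sketch; the underlying SMOTE-style interpolation step it builds on can be illustrated as follows (function name and parameters are illustrative):

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating between a
    randomly chosen minority sample and one of its k nearest minority
    neighbours -- the classic SMOTE idea that AutoSMOTE's learned policy
    replaces with optimized choices.
    """
    rng = np.random.default_rng(seed)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))       # base minority sample
        j = neighbours[i, rng.integers(k)] # one of its k nearest neighbours
        lam = rng.random()                 # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)
```

Because each synthetic point is a convex combination of two real minority samples, it stays inside the minority class's convex hull.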
arXiv Detail & Related papers (2022-08-26T04:28:01Z) - StreaMRAK a Streaming Multi-Resolution Adaptive Kernel Algorithm [60.61943386819384]
Existing implementations of KRR require that all the data is stored in the main memory.
We propose StreaMRAK - a streaming version of KRR.
We present a showcase study on two synthetic problems and the prediction of the trajectory of a double pendulum.
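As context for the memory bottleneck StreaMRAK addresses, a plain (non-streaming) KRR fit must materialize the full n x n kernel matrix. A minimal sketch of that baseline, with an RBF kernel and illustrative parameter names (StreaMRAK's own streaming scheme is not reproduced here):

```python
import numpy as np

def kernel_ridge_fit(X, y, gamma=1.0, lam=1e-6):
    """Plain kernel ridge regression: solve (K + lam * I) alpha = y.

    The full n x n kernel matrix K must be held in main memory, which is
    the O(n^2) requirement that a streaming KRR variant avoids.
    """
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    """Predict at X_new using the fitted dual coefficients alpha."""
    d = (np.sum(X_new ** 2, axis=1)[:, None]
         + np.sum(X_train ** 2, axis=1)[None, :]
         - 2.0 * X_new @ X_train.T)
    return np.exp(-gamma * d) @ alpha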
arXiv Detail & Related papers (2021-08-23T21:03:09Z) - A Dataset And Benchmark Of Underwater Object Detection For Robot Picking [28.971646640023284]
We introduce a dataset, Detecting Underwater Objects (DUO), and a corresponding benchmark, based on the collection and re-annotation of all relevant datasets.
DUO contains a collection of diverse underwater images with more rational annotations.
The corresponding benchmark provides indicators of both efficiency and accuracy of SOTAs for academic research and industrial applications.
arXiv Detail & Related papers (2021-06-10T11:56:19Z) - Does it work outside this benchmark? Introducing the Rigid Depth
Constructor tool, depth validation dataset construction in rigid scenes for
the masses [1.294486861344922]
We present a protocol to construct your own depth validation dataset for navigation.
RDC, for Rigid Depth Constructor, aims at being more accessible and cheaper than existing techniques.
We also develop a test suite to get insightful information from the evaluated algorithm.
arXiv Detail & Related papers (2021-03-29T22:01:24Z) - Consensus Based Multi-Layer Perceptrons for Edge Computing [0.0]
Novel algorithms are required to learn from rich distributed data.
We present consensus based multi-layer perceptrons for resource-constrained devices.
arXiv Detail & Related papers (2021-02-09T18:39:46Z) - DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a 75.1% top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z) - A Benchmark for Point Clouds Registration Algorithms [6.667628085623009]
Point clouds registration is a fundamental step of many point clouds processing pipelines.
Most algorithms are tested on data that are collected ad-hoc and not shared with the research community.
This work aims at encouraging authors to use a public and shared benchmark, instead of data collected ad-hoc.
arXiv Detail & Related papers (2020-03-28T17:02:26Z) - NWPU-Crowd: A Large-Scale Benchmark for Crowd Counting and Localization [101.13851473792334]
We construct a large-scale congested crowd counting and localization dataset, NWPU-Crowd, consisting of 5,109 images, in a total of 2,133,375 annotated heads with points and boxes.
Compared with other real-world datasets, it contains various illumination scenes and has the largest density range (0 to 20,033).
We describe the data characteristics, evaluate the performance of some mainstream state-of-the-art (SOTA) methods, and analyze the new problems that arise on the new data.
arXiv Detail & Related papers (2020-01-10T09:26:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.