AnimalTrack: A Large-scale Benchmark for Multi-Animal Tracking in the
Wild
- URL: http://arxiv.org/abs/2205.00158v1
- Date: Sat, 30 Apr 2022 04:23:59 GMT
- Title: AnimalTrack: A Large-scale Benchmark for Multi-Animal Tracking in the
Wild
- Authors: Libo Zhang, Junyuan Gao, Zhen Xiao, Heng Fan
- Abstract summary: We introduce AnimalTrack, a large-scale benchmark for multi-animal tracking in the wild.
AnimalTrack consists of 58 sequences from a diverse selection of 10 common animal categories.
We extensively evaluate 14 state-of-the-art representative trackers.
- Score: 26.794672185860538
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-animal tracking (MAT), a multi-object tracking (MOT) problem, is
crucial for animal motion and behavior analysis and has many important
applications in fields such as biology, ecology, and animal conservation.
Despite its importance, MAT remains largely under-explored compared to other MOT
problems such as multi-human tracking, due to the scarcity of large-scale
benchmarks. To address this problem, we introduce AnimalTrack, a large-scale
benchmark for multi-animal tracking in the wild. Specifically, AnimalTrack
consists of 58 sequences from a diverse selection of 10 common animal
categories. On average, each sequence comprises 33 target objects for
tracking. In order to ensure high quality, every frame in AnimalTrack is
manually labeled with careful inspection and refinement. To the best of our knowledge,
AnimalTrack is the first benchmark dedicated to multi-animal tracking. In
addition, to understand how existing MOT algorithms perform on AnimalTrack and
provide baselines for future comparison, we extensively evaluate 14
state-of-the-art representative trackers. The evaluation results demonstrate
that, not surprisingly, most of these trackers degrade due to the
differences between pedestrians and animals in various aspects (e.g., pose,
motion, and appearance), and more effort is needed to improve multi-animal
tracking. We hope that AnimalTrack, together with our evaluation and analysis, will
foster further progress on multi-animal tracking. The dataset, evaluation, and
our analysis will be made available upon acceptance.
Related papers
- TrackMe:A Simple and Effective Multiple Object Tracking Annotation Tool [5.102727104196738]
Recent state-of-the-art tracking methods are founded on deep learning architectures for object detection, appearance feature extraction and track association.
To apply these methods to animals, large datasets of different types under multiple conditions need to be created.
In this work, we renovate the well-known tool LabelMe so that users with or without an in-depth computer-science background can annotate data with less effort.
arXiv Detail & Related papers (2024-10-20T21:57:25Z) - APTv2: Benchmarking Animal Pose Estimation and Tracking with a
Large-scale Dataset and Beyond [27.50166679588048]
APTv2 is the pioneering large-scale benchmark for animal pose estimation and tracking.
It comprises 2,749 video clips filtered and collected from 30 distinct animal species.
We provide high-quality keypoint and tracking annotations for a total of 84,611 animal instances.
arXiv Detail & Related papers (2023-12-25T04:49:49Z) - Iterative Scale-Up ExpansionIoU and Deep Features Association for
Multi-Object Tracking in Sports [26.33239898091364]
We propose a novel online and robust multi-object tracking approach named deep ExpansionIoU (Deep-EIoU) for sports scenarios.
Unlike conventional methods, we abandon the use of the Kalman filter and leverage the iterative scale-up ExpansionIoU and deep features for robust tracking in sports scenarios.
Our proposed method demonstrates remarkable effectiveness in tracking irregular motion objects, achieving a score of 77.2% on the SportsMOT dataset and 85.4% on the SoccerNet-Tracking dataset.
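The core idea behind ExpansionIoU can be illustrated with a minimal sketch: enlarge both bounding boxes before computing IoU, so that fast-moving objects whose detections no longer overlap between frames can still be associated. Box format `(x1, y1, x2, y2)` and the expansion factor `e` here are illustrative assumptions, not the paper's exact parameterization.

```python
def expand_box(box, e=0.7):
    # Grow each side of an (x1, y1, x2, y2) box by e times its size.
    # The factor 0.7 is a hypothetical value chosen for illustration.
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    return (x1 - e * w, y1 - e * h, x2 + e * w, y2 + e * h)

def iou(a, b):
    # Standard intersection-over-union of two axis-aligned boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def expansion_iou(a, b, e=0.7):
    # IoU on expanded boxes: two detections of a fast-moving object that
    # no longer overlap can still match because their enlarged boxes do.
    return iou(expand_box(a, e), expand_box(b, e))

# Two non-overlapping boxes: plain IoU is 0, expanded IoU is positive.
a, b = (0, 0, 10, 10), (12, 0, 22, 10)
print(iou(a, b))            # 0.0
print(expansion_iou(a, b))  # > 0, so association is still possible
```

In a tracker, this expanded overlap would replace the motion-model (e.g., Kalman-predicted) IoU term in the detection-to-track cost matrix.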
arXiv Detail & Related papers (2023-06-22T17:47:08Z) - OmniTracker: Unifying Object Tracking by Tracking-with-Detection [119.51012668709502]
OmniTracker is presented to resolve all the tracking tasks with a fully shared network architecture, model weights, and inference pipeline.
Experiments on 7 tracking datasets, including LaSOT, TrackingNet, DAVIS16-17, MOT17, MOTS20, and YTVIS19, demonstrate that OmniTracker achieves on-par or even better results than both task-specific and unified tracking models.
arXiv Detail & Related papers (2023-03-21T17:59:57Z) - APT-36K: A Large-scale Benchmark for Animal Pose Estimation and Tracking [77.87449881852062]
APT-36K is the first large-scale benchmark for animal pose estimation and tracking.
It consists of 2,400 video clips collected and filtered from 30 animal species with 15 frames for each video, resulting in 36,000 frames in total.
We benchmark several representative models on the following three tracks: (1) supervised animal pose estimation on a single frame under intra- and inter-domain transfer learning settings, (2) inter-species domain generalization test for unseen animals, and (3) animal pose estimation with animal tracking.
arXiv Detail & Related papers (2022-06-12T07:18:36Z) - Single Object Tracking Research: A Survey [44.24280758718638]
This paper presents the rationale and representative works of the two most popular tracking frameworks of the past ten years.
We present some deep learning based tracking methods categorized by different network structures.
We also introduce some classical strategies for handling the challenges in tracking problem.
arXiv Detail & Related papers (2022-04-25T02:59:15Z) - DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse
Motion [56.1428110894411]
We propose a large-scale dataset for multi-human tracking, where humans have similar appearance, diverse motion and extreme articulation.
As the dataset contains mostly group dancing videos, we name it "DanceTrack".
We benchmark several state-of-the-art trackers on our dataset and observe a significant performance drop on DanceTrack when compared against existing benchmarks.
arXiv Detail & Related papers (2021-11-29T16:49:06Z) - Track to Detect and Segment: An Online Multi-Object Tracker [81.15608245513208]
TraDeS is an online joint detection and tracking model, exploiting tracking clues to assist detection end-to-end.
TraDeS infers object tracking offset by a cost volume, which is used to propagate previous object features.
arXiv Detail & Related papers (2021-03-16T02:34:06Z) - MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking [72.76685780516371]
We present MOTChallenge, a benchmark for single-camera Multiple Object Tracking (MOT).
The benchmark focuses on multiple people tracking, since pedestrians are by far the most studied objects in the tracking community.
We provide a categorization of state-of-the-art trackers and a broad error analysis.
arXiv Detail & Related papers (2020-10-15T06:52:16Z) - TAO: A Large-Scale Benchmark for Tracking Any Object [95.87310116010185]
The Tracking Any Object (TAO) dataset consists of 2,907 high-resolution videos, captured in diverse environments, which are half a minute long on average.
We ask annotators to label objects that move at any point in the video, and give names to them post factum.
Our vocabulary is both significantly larger and qualitatively different from existing tracking datasets.
arXiv Detail & Related papers (2020-05-20T21:07:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.