Multi-Object Tracking and Segmentation via Neural Message Passing
- URL: http://arxiv.org/abs/2207.07454v1
- Date: Fri, 15 Jul 2022 13:03:47 GMT
- Title: Multi-Object Tracking and Segmentation via Neural Message Passing
- Authors: Guillem Braso, Orcun Cetintas, Laura Leal-Taixe
- Abstract summary: Graphs offer a natural way to formulate Multiple Object Tracking (MOT) and Multiple Object Tracking and Segmentation (MOTS).
We exploit the classical network flow formulation of MOT to define a fully differentiable framework based on Message Passing Networks (MPNs).
We achieve state-of-the-art results for both tracking and segmentation on several publicly available datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graphs offer a natural way to formulate Multiple Object Tracking (MOT) and
Multiple Object Tracking and Segmentation (MOTS) within the
tracking-by-detection paradigm. However, they also introduce a major challenge
for learning methods, as defining a model that can operate on such a structured
domain is not trivial. In this work, we exploit the classical network flow
formulation of MOT to define a fully differentiable framework based on Message
Passing Networks (MPNs). By operating directly on the graph domain, our method
can reason globally over an entire set of detections and exploit contextual
features. It then jointly predicts both final solutions for the data
association problem and segmentation masks for all objects in the scene while
exploiting synergies between the two tasks. We achieve state-of-the-art results
for both tracking and segmentation on several publicly available datasets. Our
code is available at github.com/ocetintas/MPNTrackSeg.
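Since the abstract describes the method only in words, here is a minimal, self-contained sketch of the core idea: detections become graph nodes, association hypotheses become edges (as in the classical network flow formulation), and a few rounds of neural message passing produce per-edge association scores. This is an illustrative sketch in PyTorch, not the authors' implementation (which is at github.com/ocetintas/MPNTrackSeg); all names (`build_edges`, `MessagePassingStep`), dimensions, and the aggregation scheme are assumptions, and the segmentation mask head is omitted.

```python
import torch
import torch.nn as nn


def build_edges(times, max_frame_gap=5):
    """Connect detections across frames within a small time window.

    Nodes are detections; each edge is a data-association hypothesis,
    mirroring the classical network flow view of MOT.
    """
    src, dst = [], []
    for i, t_i in enumerate(times):
        for j, t_j in enumerate(times):
            if 0 < t_j - t_i <= max_frame_gap:  # forward-in-time edges only
                src.append(i)
                dst.append(j)
    return torch.tensor(src), torch.tensor(dst)


class MessagePassingStep(nn.Module):
    """One round of edge and node updates (dimensions are illustrative)."""

    def __init__(self, node_dim=32, edge_dim=16):
        super().__init__()
        # Edge update: combine both endpoint embeddings with the edge state.
        self.edge_mlp = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, edge_dim), nn.ReLU())
        # Node update: fold aggregated incoming messages back into the node.
        self.node_mlp = nn.Sequential(nn.Linear(node_dim + edge_dim, node_dim), nn.ReLU())

    def forward(self, h, e, src, dst):
        # Each edge sees its two endpoints plus its own previous state.
        e = self.edge_mlp(torch.cat([h[src], h[dst], e], dim=-1))
        # Mean-aggregate incoming edge messages per node.
        agg = torch.zeros(h.size(0), e.size(-1)).index_add_(0, dst, e)
        cnt = torch.zeros(h.size(0)).index_add_(0, dst, torch.ones(dst.size(0))).clamp(min=1)
        h = self.node_mlp(torch.cat([h, agg / cnt.unsqueeze(-1)], dim=-1))
        return h, e


# Toy usage: six detections over three frames with random embeddings.
times = [0, 0, 1, 1, 2, 2]
src, dst = build_edges(times)
h = torch.randn(6, 32)            # node features, e.g. CNN appearance embeddings
e = torch.randn(src.size(0), 16)  # edge features, e.g. relative geometry and time gaps
step = MessagePassingStep()
for _ in range(4):                # several rounds let context flow across the whole graph
    h, e = step(h, e, src, dst)
scores = torch.sigmoid(nn.Linear(16, 1)(e))  # per-edge association probability (untrained head, for illustration)
```

Thresholding or rounding the final edge scores under flow constraints (each detection matched to at most one predecessor and one successor) would then yield trajectories; in the joint MOTS setting the paper describes, a mask head operating on the node embeddings would additionally predict a segmentation mask per object.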
Related papers
- Matching Anything by Segmenting Anything [109.2507425045143]
We propose MASA, a novel method for robust instance association learning.
MASA learns instance-level correspondence through exhaustive data transformations.
We show that MASA achieves even better performance than state-of-the-art methods trained with fully annotated in-domain video sequences.
arXiv Detail & Related papers (2024-06-06T16:20:07Z)
- End-to-end Tracking with a Multi-query Transformer [96.13468602635082]
Multiple-object tracking (MOT) is a challenging task that requires simultaneous reasoning about location, appearance, and identity of the objects in the scene over time.
Our aim in this paper is to move beyond tracking-by-detection approaches, towards class-agnostic tracking that also performs well for unknown object classes.
arXiv Detail & Related papers (2022-10-26T10:19:37Z)
- BURST: A Benchmark for Unifying Object Recognition, Segmentation and Tracking in Video [58.71785546245467]
Multiple existing benchmarks involve tracking and segmenting objects in video.
There is little interaction between them due to the use of disparate benchmark datasets and metrics.
We propose BURST, a dataset which contains thousands of diverse videos with high-quality object masks.
All tasks are evaluated using the same data and comparable metrics, which enables researchers to consider them in unison.
arXiv Detail & Related papers (2022-09-25T01:27:35Z)
- Unified Transformer Tracker for Object Tracking [58.65901124158068]
We present the Unified Transformer Tracker (UTT) to address tracking problems in different scenarios with one paradigm.
A track transformer is developed in our UTT to track the target in both Single Object Tracking (SOT) and Multiple Object Tracking (MOT).
arXiv Detail & Related papers (2022-03-29T01:38:49Z)
- Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation [95.74244714914052]
Multiple object tracking and segmentation requires detecting, tracking, and segmenting objects belonging to a set of given classes.
We propose Prototypical Cross-Attention Network (PCAN), capable of leveraging rich spatio-temporal information online.
PCAN outperforms current video instance tracking and segmentation competition winners on the YouTube-VIS and BDD100K datasets.
arXiv Detail & Related papers (2021-06-22T17:57:24Z)
- Target-Aware Object Discovery and Association for Unsupervised Video Multi-Object Segmentation [79.6596425920849]
This paper addresses the task of unsupervised video multi-object segmentation.
We introduce a novel approach for more accurate and efficient spatio-temporal segmentation.
We evaluate the proposed approach on DAVIS 2017 and YouTube-VIS, and the results demonstrate that it outperforms state-of-the-art methods in both segmentation accuracy and inference speed.
arXiv Detail & Related papers (2021-04-10T14:39:44Z)
- Global Correlation Network: End-to-End Joint Multi-Object Detection and Tracking [2.749204052800622]
We present a novel network, the Global Correlation Network (GCNet), that realizes joint multi-object detection and tracking in an end-to-end way.
GCNet introduces a global correlation layer that regresses the absolute size and coordinates of bounding boxes instead of predicting offsets.
GCNet's detection-and-tracking pipeline is conceptually simple: it requires no non-maximum suppression, data association, or other complicated tracking strategies.
arXiv Detail & Related papers (2021-03-23T13:16:42Z)
- End-to-End Multi-Object Tracking with Global Response Map [23.755882375664875]
We present a completely end-to-end approach that takes an image sequence/video as input and directly outputs the located and tracked objects of the learned types.
Specifically, with our multi-object representation strategy, a global response map can be accurately generated over frames.
Experimental results on the MOT16 and MOT17 benchmarks show that our proposed online tracker achieves state-of-the-art performance on several tracking metrics.
arXiv Detail & Related papers (2020-07-13T12:30:49Z)
- Joint Object Detection and Multi-Object Tracking with Graph Neural Networks [32.1359455541169]
We propose a new joint MOT approach based on Graph Neural Networks (GNNs).
We demonstrate the effectiveness of our GNN-based joint MOT approach, achieving state-of-the-art performance on both detection and MOT tasks.
arXiv Detail & Related papers (2020-06-23T17:07:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.