Exploring Modulated Detection Transformer as a Tool for Action
Recognition in Videos
- URL: http://arxiv.org/abs/2209.10126v1
- Date: Wed, 21 Sep 2022 05:19:39 GMT
- Title: Exploring Modulated Detection Transformer as a Tool for Action
Recognition in Videos
- Authors: Tomás Crisol, Joel Ermantraut, Adrián Rostagno, Santiago L. Aggio,
Javier Iparraguirre
- Abstract summary: Modulated Detection Transformer (MDETR) is an end-to-end multi-modal understanding model.
We show that it is possible to use a multi-modal model to tackle a task that it was not designed for.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, transformer architectures have grown in
popularity. The Modulated Detection Transformer (MDETR) is an end-to-end
multi-modal understanding model that performs tasks such as phrase grounding,
referring expression comprehension, referring expression segmentation, and
visual question answering. One remarkable aspect of the model is its capacity
to infer over classes it was not previously trained on. In this work we
explore the use of MDETR on a new task, action detection, without any
task-specific training. We obtain quantitative results on the Atomic Visual
Actions dataset. Although the model does not achieve the best performance on
the task, we believe the result is an interesting finding: it shows that a
multi-modal model can tackle a task it was not designed for. Finally, we
believe this line of research may lead to the generalization of MDETR to
additional downstream tasks.
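To make the idea concrete, below is a minimal sketch of zero-shot action detection with MDETR: a single video frame is queried with a free-form action phrase, and the detected boxes are kept where the model is confident. The torch.hub entry point and the two-stage forward follow the public MDETR demo; the image path, phrase, and threshold are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torchvision.transforms as T
from PIL import Image

# ImageNet-style preprocessing used by the MDETR demo.
transform = T.Compose([
    T.Resize(800),
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Pretrained MDETR checkpoint via the official repository's torch.hub config.
model, postprocessor = torch.hub.load(
    "ashkamath/mdetr:main", "mdetr_efficientnetB5",
    pretrained=True, return_postprocessor=True,
)
model.eval()

frame = Image.open("frame.jpg").convert("RGB")  # hypothetical AVA frame
img = transform(frame).unsqueeze(0)
caption = "a person standing"  # free-form action phrase (assumption)

with torch.no_grad():
    # MDETR first encodes image + text, then decodes detection queries.
    memory_cache = model(img, [caption], encode_and_save=True)
    outputs = model(img, [caption], encode_and_save=False,
                    memory_cache=memory_cache)

# The last logit column is the "no object" class; keep confident queries.
probs = 1 - outputs["pred_logits"].softmax(-1)[0, :, -1]
keep = probs > 0.7  # illustrative threshold
boxes = outputs["pred_boxes"][0, keep]  # normalized (cx, cy, w, h)
print(f"{int(keep.sum())} box(es) matched '{caption}'")
```

Repeating this per frame, with one phrase per AVA action class, yields per-class detections that can be scored against the dataset's ground-truth boxes.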
Related papers
- MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining [73.81862342673894]
Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks.
However, transferring the pretrained models to downstream tasks may encounter a task discrepancy, because pretraining is formulated as an image classification or object discrimination task.
We conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection.
Our models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection.
arXiv Detail & Related papers (2024-03-20T09:17:22Z)
- Contrastive Learning for Multi-Object Tracking with Transformers [79.61791059432558]
We show how DETR can be turned into a MOT model by employing an instance-level contrastive loss (a sketch of this idea follows the list below).
Our training scheme learns object appearances while preserving detection capabilities and with little overhead.
Its performance surpasses the previous state-of-the-art by +2.6 mMOTA on the challenging BDD100K dataset.
arXiv Detail & Related papers (2023-11-14T10:07:52Z)
- Pre-train, Adapt and Detect: Multi-Task Adapter Tuning for Camouflaged Object Detection [38.5505943598037]
We propose a novel 'pre-train, adapt and detect' paradigm to detect camouflaged objects.
By introducing a large pre-trained model, abundant knowledge learned from massive multi-modal data can be directly transferred to COD.
Our method outperforms existing state-of-the-art COD models by large margins.
arXiv Detail & Related papers (2023-07-20T08:25:38Z)
- SSMTL++: Revisiting Self-Supervised Multi-Task Learning for Video Anomaly Detection [108.57862846523858]
We revisit the self-supervised multi-task learning framework, proposing several updates to the original method.
We modernize the 3D convolutional backbone by introducing multi-head self-attention modules.
In our attempt to further improve the model, we study additional self-supervised learning tasks, such as predicting segmentation maps.
arXiv Detail & Related papers (2022-07-16T19:25:41Z)
- Scaling Novel Object Detection with Weakly Supervised Detection Transformers [21.219817483091166]
We propose the Weakly Supervised Detection Transformer, which enables efficient knowledge transfer from a large-scale pretraining dataset to weakly supervised object detection (WSOD) finetuning.
Our experiments show that our approach outperforms previous state-of-the-art models on large-scale novel object detection datasets.
arXiv Detail & Related papers (2022-07-11T21:45:54Z)
- MulT: An End-to-End Multitask Learning Transformer [66.52419626048115]
We propose an end-to-end Multitask Learning Transformer framework, named MulT, to simultaneously learn multiple high-level vision tasks.
Our framework encodes the input image into a shared representation and makes predictions for each vision task using task-specific transformer-based decoder heads.
arXiv Detail & Related papers (2022-05-17T13:03:18Z)
- Multitask Adaptation by Retrospective Exploration with Learned World Models [77.34726150561087]
We propose a meta-learned addressing model, RAMa, that provides a model-based reinforcement learning (MBRL) agent with training samples drawn from task-agnostic storage.
The model is trained to maximize the agent's expected performance by selecting promising trajectories that solved prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
- Visual Saliency Transformer [127.33678448761599]
We develop a novel unified model based on a pure transformer, the Visual Saliency Transformer (VST), for both RGB and RGB-D salient object detection (SOD).
It takes image patches as inputs and leverages the transformer to propagate global contexts among image patches.
Experimental results show that our model outperforms existing state-of-the-art results on both RGB and RGB-D SOD benchmark datasets.
arXiv Detail & Related papers (2021-04-25T08:24:06Z)
- MM-FSOD: Meta and metric integrated few-shot object detection [14.631208179789583]
We present an effective object detection framework (MM-FSOD) that integrates metric learning and meta-learning.
Our model is a class-agnostic detection model that can accurately recognize new categories that do not appear in the training samples.
arXiv Detail & Related papers (2020-12-30T14:02:52Z)
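As referenced in the Contrastive Learning for Multi-Object Tracking entry above, an instance-level contrastive loss pulls embeddings of the same object instance in two frames together while treating all other instances as negatives. The sketch below is a generic symmetric InfoNCE formulation of that idea; the shapes, temperature, and one-to-one pairing are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(emb_a: torch.Tensor,
                              emb_b: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """emb_a, emb_b: (N, D) embeddings of the same N instances in two frames;
    row i of emb_a and row i of emb_b belong to the same object (assumption)."""
    a = F.normalize(emb_a, dim=1)
    b = F.normalize(emb_b, dim=1)
    logits = a @ b.t() / temperature              # (N, N) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Symmetric InfoNCE: match each instance a->b and b->a.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example: 8 tracked instances with 256-dim decoder embeddings.
loss = instance_contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```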