Egocentric RGB+Depth Action Recognition in Industry-Like Settings
- URL: http://arxiv.org/abs/2309.13962v1
- Date: Mon, 25 Sep 2023 08:56:22 GMT
- Title: Egocentric RGB+Depth Action Recognition in Industry-Like Settings
- Authors: Jyoti Kini, Sarah Fleischer, Ishan Dave, Mubarak Shah
- Abstract summary: Our work focuses on recognizing actions from egocentric RGB and Depth modalities in an industry-like environment.
Our framework is based on the 3D Video SWIN Transformer to encode both RGB and Depth modalities effectively.
Our method also secured first place at the multimodal action recognition challenge at ICIAP 2023.
- Score: 50.38638300332429
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Action recognition from an egocentric viewpoint is a crucial perception task
in robotics and enables a wide range of human-robot interactions. While most
computer vision approaches prioritize the RGB camera, the Depth modality -
which can further amplify the subtleties of actions from an egocentric
perspective - remains underexplored. Our work focuses on recognizing actions
from egocentric RGB and Depth modalities in an industry-like environment. To
study this problem, we consider the recent MECCANO dataset, which provides a
wide range of assembling actions. Our framework is based on the 3D Video SWIN
Transformer to encode both RGB and Depth modalities effectively. To address the
inherent skewness in real-world multimodal action occurrences, we propose a
training strategy using an exponentially decaying variant of the focal loss
modulating factor. Additionally, to leverage the information in both RGB and
Depth modalities, we opt for late fusion to combine the predictions from each
modality. We thoroughly evaluate our method on the action recognition task of
the MECCANO dataset, and it significantly outperforms the prior work. Notably,
our method also secured first place at the multimodal action recognition
challenge at ICIAP 2023.
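The two training ingredients described in the abstract, a focal loss whose modulating factor decays exponentially during training and late fusion of the per-modality predictions, can be sketched as follows. This is a minimal PyTorch illustration under assumed hyperparameters (the initial gamma, the decay rate, and equal fusion weights are illustrative choices), not the authors' released implementation, and the exact decay schedule is an assumption.

```python
# Hypothetical sketch: focal loss with an exponentially decaying modulating
# factor, plus score-level (late) fusion of RGB and Depth predictions.
# Hyperparameters and the decay schedule are illustrative assumptions.
import math
import torch
import torch.nn.functional as F


def focal_loss_decayed(logits, targets, epoch, gamma0=2.0, decay=0.1):
    """Cross-entropy scaled by (1 - p_t)^gamma, with gamma decayed per epoch."""
    gamma = gamma0 * math.exp(-decay * epoch)          # assumed exponential decay
    log_probs = F.log_softmax(logits, dim=1)
    log_p_t = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    p_t = log_p_t.exp()
    return (-((1.0 - p_t) ** gamma) * log_p_t).mean()  # focal modulating factor


def late_fusion(rgb_logits, depth_logits, w_rgb=0.5):
    """Combine per-modality predictions at the score level (late fusion)."""
    rgb_p = F.softmax(rgb_logits, dim=1)
    depth_p = F.softmax(depth_logits, dim=1)
    return w_rgb * rgb_p + (1.0 - w_rgb) * depth_p


# Example: logits from two video encoders (e.g. one 3D Video Swin backbone per
# modality) for a batch of 4 clips and 61 action classes (as in MECCANO).
rgb_logits = torch.randn(4, 61)
depth_logits = torch.randn(4, 61)
targets = torch.randint(0, 61, (4,))
loss = focal_loss_decayed(rgb_logits, targets, epoch=5)
fused = late_fusion(rgb_logits, depth_logits)
predictions = fused.argmax(dim=1)
```

In this sketch each modality would be trained with the decayed focal loss independently, and only the softmax scores are combined at inference, matching the late-fusion strategy described in the abstract.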
Related papers
- Towards Global Localization using Multi-Modal Object-Instance Re-Identification [23.764646800085977]
We propose a novel re-identification transformer architecture that integrates multimodal RGB and depth information.
We demonstrate improvements in ReID across scenes that are cluttered or have varying illumination conditions.
We also develop a ReID-based localization framework that enables accurate camera localization and pose identification across different viewpoints.
arXiv Detail & Related papers (2024-09-18T14:15:10Z)
- Confidence-Aware RGB-D Face Recognition via Virtual Depth Synthesis [48.59382455101753]
2D face recognition encounters challenges in unconstrained environments due to varying illumination, occlusion, and pose.
Recent studies focus on RGB-D face recognition to improve robustness by incorporating depth information.
In this work, we first construct a diverse depth dataset generated by 3D Morphable Models for depth model pre-training.
Then, we propose a domain-independent pre-training framework that utilizes readily available pre-trained RGB and depth models to separately perform face recognition without needing additional paired data for retraining.
arXiv Detail & Related papers (2024-03-11T09:12:24Z)
- A Strong Transfer Baseline for RGB-D Fusion in Vision Transformers [0.0]
We propose a recipe for transferring pretrained ViTs in RGB-D domains for single-view 3D object recognition.
We show that our adapted ViTs score up to 95.1% top-1 accuracy on the Washington RGB-D Objects dataset, achieving new state-of-the-art results on this benchmark.
arXiv Detail & Related papers (2022-10-03T12:08:09Z)
- Dual Swin-Transformer based Mutual Interactive Network for RGB-D Salient Object Detection [67.33924278729903]
In this work, we propose a Dual Swin-Transformer based Mutual Interactive Network.
We adopt the Swin-Transformer as the feature extractor for both the RGB and depth modalities to model long-range dependencies in visual inputs.
Comprehensive experiments on five standard RGB-D SOD benchmark datasets demonstrate the superiority of the proposed DTMINet method.
arXiv Detail & Related papers (2022-06-07T08:35:41Z)
- Cross-modality Discrepant Interaction Network for RGB-D Salient Object Detection [78.47767202232298]
We propose a novel Cross-modality Discrepant Interaction Network (CDINet) for RGB-D SOD.
Two components are designed to implement effective cross-modality interaction.
Our network outperforms 15 state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-08-04T11:24:42Z)
- Learning Selective Mutual Attention and Contrast for RGB-D Saliency Detection [145.4919781325014]
How to effectively fuse cross-modal information is the key problem for RGB-D salient object detection.
Many models use a feature fusion strategy but are limited by low-order point-to-point fusion methods.
We propose a novel mutual attention model that fuses attention and contexts from different modalities (see the generic sketch after this entry).
arXiv Detail & Related papers (2020-10-12T08:50:10Z)
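The mutual-attention idea in the entry above can be illustrated with a generic cross-modal attention block in which each modality queries the other. This is a simplified sketch of the general pattern, not the selective mutual-attention design of the cited paper, and all shapes and dimensions are illustrative assumptions.

```python
# Generic cross-modal (mutual) attention sketch: RGB features attend to depth
# features and vice versa. Dimensions are illustrative assumptions only.
import torch
import torch.nn as nn


class MutualAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.rgb_to_depth = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.depth_to_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb, depth):
        # rgb, depth: (batch, tokens, dim) feature sequences from each encoder
        rgb_enh, _ = self.rgb_to_depth(query=rgb, key=depth, value=depth)
        depth_enh, _ = self.depth_to_rgb(query=depth, key=rgb, value=rgb)
        # Residual fusion: each stream is enriched with the other modality
        return rgb + rgb_enh, depth + depth_enh


rgb_feats = torch.randn(2, 196, 256)    # e.g. 14x14 spatial tokens per image
depth_feats = torch.randn(2, 196, 256)
fused_rgb, fused_depth = MutualAttention()(rgb_feats, depth_feats)
```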
- Skeleton Focused Human Activity Recognition in RGB Video [11.521107108725188]
We propose a multimodal feature fusion model that utilizes both skeleton and RGB modalities to infer human activity.
The model can be trained either individually or uniformly with the back-propagation algorithm in an end-to-end manner.
arXiv Detail & Related papers (2020-04-29T06:40:42Z)
- Towards High Performance Human Keypoint Detection [87.1034745775229]
We find that context information plays an important role in reasoning about human body configuration and invisible keypoints.
Inspired by this, we propose a cascaded context mixer (CCM) that efficiently integrates spatial and channel context information.
To maximize CCM's representation capability, we develop a hard-negative person detection mining strategy and a joint-training strategy.
We present several sub-pixel refinement techniques for postprocessing keypoint predictions to improve detection accuracy (a standard variant is sketched after this entry).
arXiv Detail & Related papers (2020-02-03T02:24:51Z)
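The sub-pixel refinement mentioned in the entry above can be illustrated with the common quarter-offset heuristic for decoding heatmap keypoints: the integer argmax location is shifted a fraction of a pixel toward its higher-valued neighbor. This is a standard post-processing trick shown for context, not necessarily one of the specific techniques proposed in the cited paper.

```python
# Standard sub-pixel refinement for heatmap keypoints: shift the integer
# argmax a quarter pixel toward the larger neighboring activation.
# Illustrative only; the cited paper proposes its own refinement techniques.
import numpy as np


def decode_keypoint(heatmap, offset=0.25):
    """Return an (x, y) keypoint with sub-pixel refinement from a 2D heatmap."""
    h, w = heatmap.shape
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    fx, fy = float(x), float(y)
    if 0 < x < w - 1:
        fx += offset * np.sign(heatmap[y, x + 1] - heatmap[y, x - 1])
    if 0 < y < h - 1:
        fy += offset * np.sign(heatmap[y + 1, x] - heatmap[y - 1, x])
    return fx, fy


heatmap = np.zeros((64, 48))
heatmap[30, 20] = 1.0
heatmap[30, 21] = 0.6              # stronger right neighbor pulls the estimate
print(decode_keypoint(heatmap))    # prints roughly (20.25, 30.0)
```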
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.