MFEViT: A Robust Lightweight Transformer-based Network for Multimodal
2D+3D Facial Expression Recognition
- URL: http://arxiv.org/abs/2109.13086v1
- Date: Mon, 20 Sep 2021 17:19:39 GMT
- Title: MFEViT: A Robust Lightweight Transformer-based Network for Multimodal
2D+3D Facial Expression Recognition
- Authors: Hanting Li, Mingzhe Sui, Zhaoqing Zhu, Feng Zhao
- Abstract summary: Vision transformer (ViT) has been widely applied in many areas due to its self-attention mechanism.
We propose a robust lightweight pure transformer-based network for multimodal 2D+3D FER, namely MFEViT.
Our MFEViT outperforms state-of-the-art approaches with an accuracy of 90.83% on BU-3DFE and 90.28% on Bosphorus.
- Score: 1.7448845398590227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The vision transformer (ViT) has been widely applied in many areas
thanks to its self-attention mechanism, which provides a global receptive field
from the very first layer. It even surpasses CNNs on some vision tasks. However,
applying vision transformers to 2D+3D facial expression recognition (FER) raises
an issue: ViT training requires massive amounts of data, yet the number of
samples in public 2D+3D FER datasets is far from sufficient. How to utilize a
ViT pre-trained on RGB images to handle 2D+3D data therefore becomes a
challenge. To solve this problem, we propose a robust lightweight pure
transformer-based network for multimodal 2D+3D FER, namely MFEViT. To narrow the
gap between RGB and multimodal data, we devise an alternative fusion strategy,
which replaces each of the three channels of an RGB image in turn with the
depth-map channel and fuses the results before feeding them into the
transformer encoder. Moreover, the designed sample-filtering module adds several
subclasses for each expression and moves the noisy samples to their
corresponding subclasses, thus eliminating their disturbance on the network
during training. Extensive experiments demonstrate that our MFEViT outperforms
state-of-the-art approaches with an accuracy of 90.83% on BU-3DFE and 90.28% on
Bosphorus. In addition, the proposed MFEViT is a lightweight model, requiring
far fewer parameters than multi-branch CNNs. To the best of our knowledge, this
is the first work to introduce the vision transformer into multimodal 2D+3D FER.
The source code of MFEViT will be made publicly available online.
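
The alternative fusion strategy described in the abstract can be illustrated
with a short sketch. The following is a minimal, hypothetical PyTorch rendering
(not the authors' released code): each RGB channel is replaced in turn by the
depth map, the three depth-substituted images are embedded with a shared ViT
patch-embedding layer, and the resulting token embeddings are fused by averaging
before they enter the transformer encoder. The averaging operator and the shared
patch_embed module are assumptions.

import torch
import torch.nn as nn

class AlternativeFusion(nn.Module):
    """Builds three depth-substituted variants of an RGB image and fuses their
    token embeddings. Hypothetical module: the fusion operator (mean) and the
    shared patch-embedding layer are assumptions, not the paper's exact design."""

    def __init__(self, patch_embed: nn.Module):
        super().__init__()
        self.patch_embed = patch_embed  # e.g. the patch embedding of an RGB-pretrained ViT

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # rgb: (B, 3, H, W) texture image; depth: (B, 1, H, W) aligned depth map
        variants = []
        for c in range(3):
            x = rgb.clone()
            x[:, c:c + 1] = depth                 # replace one RGB channel with depth
            variants.append(self.patch_embed(x))  # (B, N, D) token embeddings
        # Fuse the three multimodal variants before the transformer encoder
        return torch.stack(variants, dim=0).mean(dim=0)

Because every variant keeps two of the original RGB channels, the input
statistics stay close to those the RGB-pretrained ViT was trained on, which is
the gap-narrowing motivation the abstract gives for the fusion step.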
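
The sample-filtering module is only described at a high level, so the sketch
below guesses at one plausible realization under stated assumptions: the
classification head reserves a few extra subclass slots per expression, training
targets for low-confidence (presumed noisy) samples are moved to a subclass of
their labelled expression, and inference folds subclass scores back into the
parent expression. The number of subclasses, the confidence-threshold criterion,
and all identifiers here are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SampleFilteringHead(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int,
                 subclasses_per_class: int = 2, conf_threshold: float = 0.5):
        super().__init__()
        self.num_classes = num_classes
        self.k = 1 + subclasses_per_class      # main class slot + subclass slots
        self.conf_threshold = conf_threshold   # assumed noise criterion
        self.fc = nn.Linear(feat_dim, num_classes * self.k)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.fc(feats)                  # (B, C * k) logits

    def training_targets(self, logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Keep confident samples on the main slot of their expression; move
        # low-confidence (presumed noisy) samples to that expression's first subclass.
        probs = F.softmax(logits, dim=1)
        main_idx = labels * self.k             # index of the clean-class slot
        main_conf = probs.gather(1, main_idx.unsqueeze(1)).squeeze(1)
        noisy = main_conf < self.conf_threshold
        return torch.where(noisy, main_idx + 1, main_idx)

    def predict(self, logits: torch.Tensor) -> torch.Tensor:
        # Fold subclass logits back into their parent expression at test time.
        b = logits.size(0)
        return logits.view(b, self.num_classes, self.k).max(dim=2).values.argmax(dim=1)

A training step might then compute
F.cross_entropy(logits, head.training_targets(logits.detach(), labels)), so that
presumed-noisy samples push gradient toward their subclass slot instead of
disturbing the clean expression logit, in the spirit of the module described
above.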
Related papers
- EmbodiedSAM: Online Segment Any 3D Thing in Real Time [61.2321497708998]
Embodied tasks require the agent to fully understand 3D scenes simultaneously with its exploration.
An online, real-time, fine-grained and highly-generalized 3D perception model is desperately needed.
arXiv Detail & Related papers (2024-08-21T17:57:06Z)
- Rethinking Vision Transformer and Masked Autoencoder in Multimodal Face Anti-Spoofing [19.142582966452935]
We investigate three key factors (i.e., inputs, pre-training, and finetuning) in ViT for multimodal FAS with RGB, Infrared (IR), and Depth.
We propose the modality-asymmetric masked autoencoder (M$2$A$2$E) for multimodal FAS self-supervised pre-training without costly annotated labels.
arXiv Detail & Related papers (2023-02-11T17:02:34Z)
- RangeViT: Towards Vision Transformers for 3D Semantic Segmentation in Autonomous Driving [80.14669385741202]
Vision transformers (ViTs) have achieved state-of-the-art results in many image-based benchmarks.
ViTs are notoriously hard to train and require a lot of training data to learn powerful representations.
We show that our method, called RangeViT, outperforms existing projection-based methods on nuScenes and Semantic KITTI.
arXiv Detail & Related papers (2023-01-24T18:50:48Z)
- MVTN: Learning Multi-View Transformations for 3D Understanding [60.15214023270087]
We introduce the Multi-View Transformation Network (MVTN), which uses differentiable rendering to determine optimal view-points for 3D shape recognition.
MVTN can be trained end-to-end with any multi-view network for 3D shape recognition.
Our approach demonstrates state-of-the-art performance in 3D classification and shape retrieval on several benchmarks.
arXiv Detail & Related papers (2022-12-27T12:09:16Z)
- A Strong Transfer Baseline for RGB-D Fusion in Vision Transformers [0.0]
We propose a recipe for transferring pretrained ViTs in RGB-D domains for single-view 3D object recognition.
We show that our adapted ViTs score up to 95.1% top-1 accuracy on Washington, achieving new state-of-the-art results on this benchmark.
arXiv Detail & Related papers (2022-10-03T12:08:09Z)
- Pix4Point: Image Pretrained Standard Transformers for 3D Point Cloud Understanding [62.502694656615496]
We present Progressive Point Patch Embedding and a new point cloud Transformer model, PViT.
PViT shares the same backbone as the standard Transformer but is shown to be less data-hungry, enabling Transformers to achieve performance comparable to the state of the art.
We formulate a simple yet effective pipeline dubbed "Pix4Point" that allows harnessing Transformers pretrained in the image domain to enhance downstream point cloud understanding.
arXiv Detail & Related papers (2022-08-25T17:59:29Z)
- CAT-Det: Contrastively Augmented Transformer for Multi-modal 3D Object Detection [32.06145370498289]
We propose the Contrastively Augmented Transformer for multi-modal 3D object Detection (CAT-Det).
CAT-Det adopts a two-stream structure consisting of a Pointformer (PT) branch and an Imageformer (IT) branch, along with a Cross-Modal Transformer (CMT) module.
We propose an effective One-way Multi-modal Data Augmentation (OMDA) approach via hierarchical contrastive learning at both the point and object levels.
arXiv Detail & Related papers (2022-04-01T10:07:25Z)
- Multimodal Fusion Transformer for Remote Sensing Image Classification [35.57881383390397]
Vision transformers (ViTs) have been trending in image classification tasks due to their promising performance when compared to convolutional neural networks (CNNs).
To achieve satisfactory performance, close to that of CNNs, transformers need fewer parameters.
We introduce a new multimodal fusion transformer (MFT) network which comprises a multihead cross patch attention (mCrossPA) for HSI land-cover classification.
arXiv Detail & Related papers (2022-03-31T11:18:41Z)
- Coarse-to-Fine Vision Transformer [83.45020063642235]
We propose a coarse-to-fine vision transformer (CF-ViT) to relieve computational burden while retaining performance.
Our proposed CF-ViT is motivated by two important observations in modern ViT models.
Our CF-ViT reduces the FLOPs of LV-ViT by 53% while also achieving 2.01x throughput.
arXiv Detail & Related papers (2022-03-08T02:57:49Z)
- TerViT: An Efficient Ternary Vision Transformer [21.348788407233265]
Vision transformers (ViTs) have demonstrated great potential in various visual tasks, but suffer from high computational and memory costs when deployed on resource-constrained devices.
We introduce a ternary vision transformer (TerViT) to ternarize the weights in ViTs, a task challenged by the large loss-surface gap between real-valued and ternary parameters.
arXiv Detail & Related papers (2022-01-20T08:29:19Z)
- Vision Transformer with Progressive Sampling [73.60630716500154]
We propose an iterative and progressive sampling strategy to locate discriminative regions.
When trained from scratch on ImageNet, PS-ViT performs 3.8% higher than the vanilla ViT in terms of top-1 accuracy.
arXiv Detail & Related papers (2021-08-03T18:04:31Z)