CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers
- URL: http://arxiv.org/abs/2203.04838v5
- Date: Fri, 24 Nov 2023 16:29:19 GMT
- Title: CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers
- Authors: Jiaming Zhang, Huayao Liu, Kailun Yang, Xinxin Hu, Ruiping Liu, Rainer Stiefelhagen
- Abstract summary: We propose a unified fusion framework, CMX, for RGB-X semantic segmentation.
We use a Cross-Modal Feature Rectification Module (CM-FRM) to calibrate bi-modal features.
We unify five modalities complementary to RGB, i.e., depth, thermal, polarization, event, and LiDAR.
- Score: 36.49497394304525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scene understanding based on image segmentation is a crucial component of
autonomous vehicles. Pixel-wise semantic segmentation of RGB images can be
advanced by exploiting complementary features from the supplementary modality
(X-modality). However, covering a wide variety of sensors with a
modality-agnostic model remains an unresolved problem due to variations in
sensor characteristics among different modalities. Unlike previous
modality-specific methods, in this work, we propose a unified fusion framework,
CMX, for RGB-X semantic segmentation. To generalize well across different
modalities, which often carry complementary cues as well as uncertainties, a
unified cross-modal interaction is crucial for modality fusion. Specifically, we design
a Cross-Modal Feature Rectification Module (CM-FRM) to calibrate bi-modal
features by leveraging the features from one modality to rectify the features
of the other modality. With rectified feature pairs, we deploy a Feature Fusion
Module (FFM) to perform sufficient exchange of long-range contexts before
mixing. To verify CMX, for the first time, we unify five modalities
complementary to RGB, i.e., depth, thermal, polarization, event, and LiDAR.
Extensive experiments show that CMX generalizes well to diverse multi-modal
fusion, achieving state-of-the-art performances on five RGB-Depth benchmarks,
as well as RGB-Thermal, RGB-Polarization, and RGB-LiDAR datasets. Besides, to
investigate the generalizability to dense-sparse data fusion, we establish an
RGB-Event semantic segmentation benchmark based on the EventScape dataset, on
which CMX sets the new state-of-the-art. The source code of CMX is publicly
available at https://github.com/huaaaliu/RGBX_Semantic_Segmentation.
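Below is a minimal PyTorch sketch of the cross-modal rectification idea described in the abstract: each modality's features are recalibrated with channel and spatial weights derived from the concatenated bi-modal features, with a residual connection, before fusion. The layer sizes, the exact weighting scheme, and the class name `CrossModalRectification` are illustrative assumptions rather than the authors' CM-FRM; refer to the linked repository for the official implementation.

```python
# Illustrative sketch only: channel- and spatial-wise cross-modal recalibration.
# Architectural details here are assumptions, not the paper's exact CM-FRM.
import torch
import torch.nn as nn


class CrossModalRectification(nn.Module):
    """Rectifies each modality's features using cues from both modalities."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel-wise weights predicted from the concatenated bi-modal features.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial weights predicted from the concatenated bi-modal features.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, rgb: torch.Tensor, x_mod: torch.Tensor):
        both = torch.cat([rgb, x_mod], dim=1)
        w_ch_rgb, w_ch_x = self.channel_mlp(both).chunk(2, dim=1)   # (B, C, 1, 1) each
        w_sp_rgb, w_sp_x = self.spatial_conv(both).chunk(2, dim=1)  # (B, 1, H, W) each
        # Residual rectification: each stream is corrected by weights that also
        # depend on the other modality.
        rgb_out = rgb + rgb * w_ch_rgb + rgb * w_sp_rgb
        x_out = x_mod + x_mod * w_ch_x + x_mod * w_sp_x
        return rgb_out, x_out


if __name__ == "__main__":
    frm = CrossModalRectification(channels=64)
    rgb_feat = torch.randn(2, 64, 32, 32)    # RGB feature map
    depth_feat = torch.randn(2, 64, 32, 32)  # X-modality (e.g., depth) feature map
    rgb_rect, depth_rect = frm(rgb_feat, depth_feat)
    print(rgb_rect.shape, depth_rect.shape)  # both torch.Size([2, 64, 32, 32])
```

In CMX, rectified feature pairs of this kind are then passed to a Feature Fusion Module (FFM) that exchanges long-range context between the two streams before mixing them into a single representation.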
Related papers
- SSFam: Scribble Supervised Salient Object Detection Family [13.369217449092524]
Scribble supervised salient object detection (SSSOD) constructs segmentation ability of attractive objects from surroundings under the supervision of sparse scribble labels.
For the better segmentation, depth and thermal infrared modalities serve as the supplement to RGB images in the complex scenes.
Our model demonstrates the remarkable performance among combinations of different modalities and refreshes the highest level of scribble supervised methods.
arXiv Detail & Related papers (2024-09-07T13:07:59Z) - Channel and Spatial Relation-Propagation Network for RGB-Thermal
Semantic Segmentation [10.344060599932185]
RGB-Thermal (RGB-T) semantic segmentation has shown great potential in handling low-light conditions.
The key to RGB-T semantic segmentation is to effectively leverage the complementary nature of RGB and thermal images.
arXiv Detail & Related papers (2023-08-24T03:43:47Z) - Residual Spatial Fusion Network for RGB-Thermal Semantic Segmentation [19.41334573257174]
Traditional methods mostly use RGB images, which are heavily affected by lighting conditions, e.g., darkness.
Recent studies show thermal images are robust to the night scenario as a compensating modality for segmentation.
This work proposes a Residual Spatial Fusion Network (RSFNet) for RGB-T semantic segmentation.
arXiv Detail & Related papers (2023-06-17T14:28:08Z) - Dual Swin-Transformer based Mutual Interactive Network for RGB-D Salient
Object Detection [67.33924278729903]
In this work, we propose Dual Swin-Transformer based Mutual Interactive Network.
We adopt Swin-Transformer as the feature extractor for both RGB and depth modality to model the long-range dependencies in visual inputs.
Comprehensive experiments on five standard RGB-D SOD benchmark datasets demonstrate the superiority of the proposed DTMINet method.
arXiv Detail & Related papers (2022-06-07T08:35:41Z) - Transformer-based Network for RGB-D Saliency Detection [82.6665619584628]
Key to RGB-D saliency detection is to fully mine and fuse information at multiple scales across the two modalities.
We show that the transformer is a uniform operation that proves highly effective for both feature fusion and feature enhancement.
Our proposed network performs favorably against state-of-the-art RGB-D saliency detection methods.
arXiv Detail & Related papers (2021-12-01T15:53:58Z) - Self-Supervised Representation Learning for RGB-D Salient Object
Detection [93.17479956795862]
We use Self-Supervised Representation Learning to design two pretext tasks: the cross-modal auto-encoder and the depth-contour estimation.
Our pretext tasks require only a few unlabeled RGB-D datasets for pre-training, which enables the network to capture rich semantic contexts.
For the inherent problem of cross-modal fusion in RGB-D SOD, we propose a multi-path fusion module.
arXiv Detail & Related papers (2021-01-29T09:16:06Z) - Bi-directional Cross-Modality Feature Propagation with
Separation-and-Aggregation Gate for RGB-D Semantic Segmentation [59.94819184452694]
Depth information has proven to be a useful cue in the semantic segmentation of RGBD images for providing a geometric counterpart to the RGB representation.
Most existing works simply assume that depth measurements are accurate and well-aligned with the RGB pixels, and model the problem as cross-modal feature fusion.
In this paper, we propose a unified and efficient cross-modality guided encoder that not only effectively recalibrates RGB feature responses, but also distills accurate depth information via multiple stages and aggregates the two recalibrated representations alternately.
arXiv Detail & Related papers (2020-07-17T18:35:24Z) - RGB-D Salient Object Detection with Cross-Modality Modulation and
Selection [126.4462739820643]
We present an effective method to progressively integrate and refine the cross-modality complementarities for RGB-D salient object detection (SOD).
The proposed network mainly solves two challenging issues: 1) how to effectively integrate the complementary information from RGB image and its corresponding depth map, and 2) how to adaptively select more saliency-related features.
arXiv Detail & Related papers (2020-07-14T14:22:50Z)