MirrorNet: Bio-Inspired Camouflaged Object Segmentation
- URL: http://arxiv.org/abs/2007.12881v3
- Date: Thu, 11 Mar 2021 04:39:35 GMT
- Title: MirrorNet: Bio-Inspired Camouflaged Object Segmentation
- Authors: Jinnan Yan, Trung-Nghia Le, Khanh-Duy Nguyen, Minh-Triet Tran,
Thanh-Toan Do, Tam V. Nguyen
- Abstract summary: We propose a novel bio-inspired network, named MirrorNet, that leverages both an instance segmentation stream and a mirror stream for camouflaged object segmentation.
Our proposed method achieves 89% accuracy, outperforming the state of the art.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Camouflaged objects are generally difficult to detect in their natural
environment, even for humans. In this paper, we propose a novel bio-inspired
network, named MirrorNet, that leverages both an instance segmentation stream
and a mirror stream for camouflaged object segmentation. Unlike existing
segmentation networks, our proposed network possesses two segmentation streams:
the main stream and the mirror stream, which process the original image and its
flipped image, respectively. The output of the mirror stream is then fused into
the main stream's result to produce the final camouflage map, boosting
segmentation accuracy. Extensive experiments conducted on the public CAMO
dataset demonstrate the effectiveness of our proposed network. Our proposed
method achieves 89% accuracy, outperforming the state of the art.
Project Page: https://sites.google.com/view/ltnghia/research/camo
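The two-stream idea from the abstract can be sketched in a few lines: run a segmenter on the original image (main stream) and on its horizontally flipped copy (mirror stream), flip the mirror prediction back, and fuse the two maps. The sketch below is illustrative only, assuming a generic `segment` function; MirrorNet's actual fusion is learned inside the network, whereas element-wise max is used here as a stand-in.

```python
import numpy as np

def mirror_fuse(image, segment, fuse=np.maximum):
    """Fuse a main-stream and a mirror-stream prediction.

    `segment` maps an image of shape (H, W, C) to a soft mask (H, W).
    The mirror stream runs `segment` on the horizontally flipped image
    and flips its output back before fusing (element-wise max here;
    MirrorNet's fusion is learned, this is only an illustration).
    """
    main = segment(image)             # main stream on the original image
    mirror = segment(image[:, ::-1])  # mirror stream on the flipped image
    mirror = mirror[:, ::-1]          # un-flip the mirror prediction
    return fuse(main, mirror)         # fused camouflage map

# Toy usage with a stand-in "segmenter" that thresholds brightness.
rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))
toy_segment = lambda x: (x.mean(axis=-1) > 0.5).astype(float)
fused = mirror_fuse(img, toy_segment)
```

With max fusion, the fused map can only add detections relative to the main stream, which matches the abstract's framing of the mirror stream as a booster for the main result.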
Related papers
- SAM2-UNet: Segment Anything 2 Makes Strong Encoder for Natural and Medical Image Segmentation [51.90445260276897]
We prove that the Segment Anything Model 2 (SAM2) can be a strong encoder for U-shaped segmentation models.
We propose a simple but effective framework, termed SAM2-UNet, for versatile image segmentation.
arXiv Detail & Related papers (2024-08-16T17:55:38Z)
- LAC-Net: Linear-Fusion Attention-Guided Convolutional Network for Accurate Robotic Grasping Under the Occlusion [79.22197702626542]
This paper introduces a framework that explores amodal segmentation for robotic grasping in cluttered scenes.
We propose a Linear-fusion Attention-guided Convolutional Network (LAC-Net)
The results on different datasets show that our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-08-06T14:50:48Z)
- COMNet: Co-Occurrent Matching for Weakly Supervised Semantic Segmentation [13.244183864948848]
We propose a novel Co-Occurrent Matching Network (COMNet), which improves the quality of the CAMs and forces the network to attend to entire objects.
Specifically, we perform inter-matching on paired images that contain common classes to enhance the corresponding areas, and construct intra-matching on a single image to propagate semantic features across object regions.
The experiments on the Pascal VOC 2012 and MS-COCO datasets show that our network can effectively boost the performance of the baseline model and achieve new state-of-the-art performance.
arXiv Detail & Related papers (2023-09-29T03:55:24Z)
- Guess What Moves: Unsupervised Video and Image Segmentation by Anticipating Motion [92.80981308407098]
We propose an approach that combines the strengths of motion-based and appearance-based segmentation.
We propose to supervise an image segmentation network, tasking it with predicting regions that are likely to contain simple motion patterns.
In the unsupervised video segmentation mode, the network is trained on a collection of unlabelled videos, using the learning process itself as an algorithm to segment these videos.
arXiv Detail & Related papers (2022-05-16T17:55:34Z)
- Residual Moment Loss for Medical Image Segmentation [56.72261489147506]
Location information has been shown to help deep learning models capture the manifold structure of target objects.
Most existing methods encode location information implicitly, leaving the network to learn it on its own.
We propose a novel loss function, namely residual moment (RM) loss, to explicitly embed the location information of segmentation targets.
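The idea of explicitly embedding location information in a loss can be illustrated with spatial moments. The sketch below penalizes the distance between the centroids (first moments) of the predicted and ground-truth masks; this is a hypothetical simplification for illustration, not the paper's actual residual moment formulation.

```python
import numpy as np

def centroid_moment_loss(pred, target, eps=1e-8):
    """Hedged sketch of a location-aware loss: penalize the distance
    between the centroids (first spatial moments) of the predicted
    soft mask and the ground-truth mask. Illustrative only; the
    residual moment loss in the paper is defined differently."""
    h, w = pred.shape
    ys, xs = np.mgrid[0:h, 0:w]

    def centroid(mask):
        s = mask.sum() + eps  # avoid division by zero on empty masks
        return np.array([(ys * mask).sum() / s, (xs * mask).sum() / s])

    return float(np.linalg.norm(centroid(pred) - centroid(target)))

# Identical masks give zero loss; a horizontally shifted mask does not.
target = np.zeros((8, 8)); target[2:4, 2:4] = 1.0
shifted = np.roll(target, shift=3, axis=1)
```

Unlike a pixel-wise loss, this term stays informative even when the predicted and target masks do not overlap at all, since the centroid distance still provides a gradient toward the correct location.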
arXiv Detail & Related papers (2021-06-27T09:31:49Z)
- Anabranch Network for Camouflaged Object Segmentation [23.956327305907585]
This paper explores the camouflaged object segmentation problem, namely, segmenting the camouflaged object(s) for a given image.
To address this problem, we provide a new image dataset of camouflaged objects for benchmarking purposes.
In addition, we propose a general end-to-end network, called the Anabranch Network, that leverages both classification and segmentation tasks.
arXiv Detail & Related papers (2021-05-20T01:52:44Z)
- A Novel Adaptive Deep Network for Building Footprint Segmentation [0.0]
We propose a novel network based on the Pix2Pix methodology to solve the problem of inaccurate boundaries obtained when converting satellite images into maps.
Our framework includes two generators: the first extracts localization features, which are merged with boundary features extracted from the second generator to segment detailed building edges.
Several strategies are implemented to enhance the quality of the results; the proposed network outperforms state-of-the-art networks in segmentation accuracy by a large margin on all evaluation metrics.
arXiv Detail & Related papers (2021-02-27T18:13:48Z)
- Bidirectional Multi-scale Attention Networks for Semantic Segmentation of Oblique UAV Imagery [30.524771772192757]
We propose the novel bidirectional multi-scale attention networks, which fuse features from multiple scales bidirectionally for more adaptive and effective feature extraction.
Our model achieves state-of-the-art (SOTA) performance with a mean intersection over union (mIoU) score of 70.80%.
arXiv Detail & Related papers (2021-02-05T11:02:15Z)
- CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-24T04:55:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.