Structured Click Control in Transformer-based Interactive Segmentation
- URL: http://arxiv.org/abs/2405.04009v1
- Date: Tue, 7 May 2024 04:57:25 GMT
- Title: Structured Click Control in Transformer-based Interactive Segmentation
- Authors: Long Xu, Yongquan Chen, Rui Huang, Feng Wu, Shiwu Lai
- Abstract summary: We propose a structured click intent model based on graph neural networks.
The graph nodes will be aggregated to obtain structured interaction features.
The dual cross-attention will be used to inject structured interaction features into vision Transformer features.
- Score: 36.49641677493008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Click-point-based interactive segmentation has received widespread attention due to its efficiency. However, it is hard for existing algorithms to obtain precise and robust responses after multiple clicks; in such cases, the segmentation results tend to change little or even degrade. To improve the robustness of the response, we propose a structured click intent model based on graph neural networks, which adaptively obtains graph nodes via the global similarity of user-clicked Transformer tokens. The graph nodes are then aggregated to obtain structured interaction features. Finally, dual cross-attention is used to inject structured interaction features into vision Transformer features, thereby enhancing the control of clicks over segmentation results. Extensive experiments demonstrate that the proposed algorithm can serve as a general structure for improving Transformer-based interactive segmentation performance. The code and data will be released at https://github.com/hahamyt/scc.
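The abstract's pipeline (aggregate user-clicked tokens into structured interaction features, then inject them into vision Transformer features via dual cross-attention) can be sketched numerically. Below is a minimal single-head NumPy illustration; the shapes, the residual updates, and the bidirectional reading of "dual" are assumptions for illustration, not the released implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feats, kv_feats):
    """Single-head cross-attention: queries attend to key/value features."""
    d = q_feats.shape[-1]
    attn = softmax(q_feats @ kv_feats.T / np.sqrt(d))
    return attn @ kv_feats

# Toy setup: 16 vision-Transformer tokens and 4 structured interaction
# features aggregated from user-clicked tokens (shapes are assumptions).
rng = np.random.default_rng(0)
vit_tokens = rng.standard_normal((16, 32))
click_feats = rng.standard_normal((4, 32))

# "Dual" cross-attention (one possible reading): each side attends to the
# other, and the vision tokens are updated with click-conditioned features.
click_updated = click_feats + cross_attention(click_feats, vit_tokens)
vit_updated = vit_tokens + cross_attention(vit_tokens, click_updated)

print(vit_updated.shape)  # (16, 32)
```

In this sketch the click features act as a small set of query/value tokens, so the cost of injecting them scales with the number of clicks rather than the full token grid.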
Related papers
- AdaRC: Mitigating Graph Structure Shifts during Test-Time [66.40525136929398]
Test-time adaptation (TTA) has attracted attention due to its ability to adapt a pre-trained model to a target domain without re-accessing the source domain.
We propose AdaRC, an innovative framework designed for effective and efficient adaptation to structure shifts in graphs.
arXiv Detail & Related papers (2024-10-09T15:15:40Z)
- Scale Disparity of Instances in Interactive Point Cloud Segmentation [15.865365305312174]
We propose ClickFormer, an innovative interactive point cloud segmentation model that accurately segments instances of both thing and stuff categories.
We employ global attention in the query-voxel transformer to mitigate the risk of generating false positives.
Experiments demonstrate that ClickFormer outperforms existing interactive point cloud segmentation methods across both indoor and outdoor datasets.
arXiv Detail & Related papers (2024-07-19T03:45:48Z)
- RAT: Retrieval-Augmented Transformer for Click-Through Rate Prediction [68.34355552090103]
This paper develops a Retrieval-Augmented Transformer (RAT), aiming to acquire fine-grained feature interactions within and across samples.
We then build Transformer layers with cascaded attention to capture both intra- and cross-sample feature interactions.
Experiments on real-world datasets substantiate the effectiveness of RAT and suggest its advantage in long-tail scenarios.
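RAT's cascaded attention over intra- and cross-sample feature interactions can be illustrated with a small sketch. This is a hedged toy version with assumed shapes: a target sample's feature tokens first interact with each other, then attend to tokens of retrieved neighbour samples; it is not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q_feats, kv_feats):
    """Single-head attention: queries attend to key/value features."""
    d = q_feats.shape[-1]
    return softmax(q_feats @ kv_feats.T / np.sqrt(d)) @ kv_feats

rng = np.random.default_rng(3)
target = rng.standard_normal((6, 8))      # 6 feature-field tokens (assumed)
retrieved = rng.standard_normal((12, 8))  # tokens from retrieved samples

# Cascade: intra-sample self-attention, then cross-sample attention.
intra = target + attention(target, target)
cross = intra + attention(intra, retrieved)
print(cross.shape)  # (6, 8)
```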
arXiv Detail & Related papers (2024-04-02T19:14:23Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network, that significantly reduces the computational complexity.
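The complexity reduction claimed here comes from passing messages over a small, dynamically chosen neighbourhood instead of a fully-connected graph. A toy sketch under our own assumptions (random sampling stands in for the learned dynamic sampling; shapes are illustrative):

```python
import numpy as np

def sparse_message_passing(node_feats, k=4, seed=0):
    """Each node aggregates messages from k sampled neighbours instead of
    all N nodes: O(N*k) work rather than O(N^2) for a complete graph."""
    rng = np.random.default_rng(seed)
    n, _ = node_feats.shape
    out = np.empty_like(node_feats)
    for i in range(n):
        neigh = rng.choice(n, size=k, replace=False)  # dynamic-sampling stand-in
        msgs = node_feats[neigh]
        out[i] = node_feats[i] + msgs.mean(axis=0)    # aggregate + residual
    return out

feats = np.random.default_rng(1).standard_normal((64, 16))
updated = sparse_message_passing(feats)
print(updated.shape)  # (64, 16)
```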
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Transforming the Interactive Segmentation for Medical Imaging [34.57242805353604]
The goal of this paper is to interactively refine the automatic segmentation on challenging structures that fall behind human performance.
We propose a novel Transformer-based architecture for Interactive Segmentation (TIS).
Our proposed architecture is composed of Transformer Decoder variants, which naturally perform feature comparison via attention mechanisms.
arXiv Detail & Related papers (2022-08-20T03:28:23Z)
- nnFormer: Interleaved Transformer for Volumetric Segmentation [50.10441845967601]
We introduce nnFormer, a powerful segmentation model with an interleaved architecture based on an empirical combination of self-attention and convolution.
nnFormer achieves substantial improvements over previous transformer-based methods on two commonly used datasets, Synapse and ACDC.
arXiv Detail & Related papers (2021-09-07T17:08:24Z)
- Edge-augmented Graph Transformers: Global Self-attention is Enough for Graphs [24.796242917673755]
We propose a simple yet powerful extension to the transformer: residual edge channels.
The resultant framework, which we call Edge-augmented Graph Transformer (EGT), can directly accept, process and output structural information as well as node information.
Our framework, which relies on global node feature aggregation, achieves better performance than Graph Convolutional Networks (GCNs).
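One common way to let a transformer "directly accept structural information", in the spirit of EGT's edge channels, is to add per-pair edge features as a bias on the attention logits. A hedged single-head sketch (the shapes and the scalar edge channel are assumptions, not EGT's exact formulation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def edge_biased_attention(node_feats, edge_bias):
    """Self-attention whose logits are shifted by per-pair edge channels,
    so graph structure modulates how strongly nodes attend to each other."""
    d = node_feats.shape[-1]
    logits = node_feats @ node_feats.T / np.sqrt(d) + edge_bias
    return softmax(logits) @ node_feats

n, d = 8, 16
rng = np.random.default_rng(2)
nodes = rng.standard_normal((n, d))
edges = rng.standard_normal((n, n))  # one scalar edge channel per node pair

out = edge_biased_attention(nodes, edges)
print(out.shape)  # (8, 16)
```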
arXiv Detail & Related papers (2021-08-07T02:18:11Z)
- Relation Transformer Network [25.141472361426818]
We propose a novel transformer formulation for scene graph generation and relation prediction.
We leverage the encoder-decoder architecture of the transformer for rich feature embedding of nodes and edges.
Our relation prediction module classifies the directed relation from the learned node and edge embedding.
arXiv Detail & Related papers (2020-04-13T20:47:01Z)
- FAIRS -- Soft Focus Generator and Attention for Robust Object Segmentation from Extreme Points [70.65563691392987]
We present a new approach to generate object segmentation from user inputs in the form of extreme points and corrective clicks.
We demonstrate our method's ability to generate high-quality training data as well as its scalability in incorporating extreme points, guiding clicks, and corrective clicks in a principled manner.
arXiv Detail & Related papers (2020-04-04T22:25:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.