Graph Relation Distillation for Efficient Biomedical Instance
Segmentation
- URL: http://arxiv.org/abs/2401.06370v1
- Date: Fri, 12 Jan 2024 04:41:23 GMT
- Authors: Xiaoyu Liu, Yueyi Zhang, Zhiwei Xiong, Wei Huang, Bo Hu, Xiaoyan Sun,
Feng Wu
- Abstract summary: We propose a graph relation distillation approach for efficient biomedical instance segmentation.
We introduce two graph distillation schemes deployed at both the intra-image level and the inter-image level.
Experimental results on a number of biomedical datasets validate the effectiveness of our approach.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Instance-aware embeddings predicted by deep neural networks have
revolutionized biomedical instance segmentation, but their resource requirements
are substantial. Knowledge distillation offers a solution by transferring
distilled knowledge from heavy teacher networks to lightweight yet
high-performance student networks. However, existing knowledge distillation
methods struggle to extract knowledge for distinguishing instances and overlook
global relation information. To address these challenges, we propose a graph
relation distillation approach for efficient biomedical instance segmentation,
which considers three essential types of knowledge: instance-level features,
instance relations, and pixel-level boundaries. We introduce two graph
distillation schemes deployed at both the intra-image level and the inter-image
level: instance graph distillation (IGD) and affinity graph distillation (AGD).
IGD constructs a graph representing instance features and relations,
transferring these two types of knowledge by enforcing instance graph
consistency. AGD constructs an affinity graph representing pixel relations to
capture structured knowledge of instance boundaries, transferring
boundary-related knowledge by ensuring pixel affinity consistency. Experimental
results on a number of biomedical datasets validate the effectiveness of our
approach, enabling student models with less than $1\%$ of the parameters and
less than $10\%$ of the inference time of teacher models while achieving
promising performance.
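The two distillation schemes in the abstract can be sketched as follows. This is a minimal NumPy illustration, assuming dense embedding maps of shape (H, W, C) and an integer instance-label map; the function names (`instance_graph`, `igd_loss`, `affinity`, `agd_loss`) and the single-offset affinity are illustrative simplifications, not the authors' released implementation.

```python
import numpy as np

def instance_graph(emb, labels):
    """Build an instance graph: nodes are mean embeddings per instance,
    edges are cosine similarities between every pair of nodes."""
    ids = np.unique(labels)
    ids = ids[ids != 0]  # treat label 0 as background
    nodes = np.stack([emb[labels == i].mean(axis=0) for i in ids])
    unit = nodes / np.linalg.norm(nodes, axis=1, keepdims=True)
    edges = unit @ unit.T
    return nodes, edges

def igd_loss(emb_t, emb_s, labels):
    """Instance graph distillation: enforce consistency of both node
    features (instance-level features) and edges (instance relations)."""
    nodes_t, edges_t = instance_graph(emb_t, labels)
    nodes_s, edges_s = instance_graph(emb_s, labels)
    return np.mean((nodes_t - nodes_s) ** 2) + np.mean((edges_t - edges_s) ** 2)

def affinity(emb):
    """Pixel affinity between horizontally adjacent pixels
    (a single offset, for brevity), squashed to (0, 1)."""
    a = np.sum(emb[:, :-1] * emb[:, 1:], axis=-1)  # neighbor dot products
    return 1.0 / (1.0 + np.exp(-a))

def agd_loss(emb_t, emb_s):
    """Affinity graph distillation: enforce pixel affinity consistency,
    transferring boundary-related knowledge."""
    return np.mean((affinity(emb_t) - affinity(emb_s)) ** 2)
```

In practice both losses would be computed on teacher and student feature maps at matched resolutions and added, with weights, to the student's segmentation loss.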
Related papers
- Exploring Graph-based Knowledge: Multi-Level Feature Distillation via Channels Relational Graph [8.646512035461994]
In visual tasks, large teacher models capture essential features and deep information, enhancing performance.
We propose a distillation framework based on graph knowledge, including a multi-level feature alignment strategy.
We emphasize spectral embedding (SE) as a key technique in our distillation process, which merges the student's feature space with the relational knowledge and structural complexities similar to the teacher network.
arXiv Detail & Related papers (2024-05-14T12:37:05Z)
- Visual Commonsense based Heterogeneous Graph Contrastive Learning [79.22206720896664]
We propose a heterogeneous graph contrastive learning method to better accomplish visual reasoning tasks.
Our method is designed in a plug-and-play manner, so that it can be quickly and easily combined with a wide range of representative methods.
arXiv Detail & Related papers (2023-11-11T12:01:18Z)
- Graph Relation Aware Continual Learning [3.908470250825618]
Continual graph learning (CGL) studies the problem of learning from an infinite stream of graph data.
We design a relation-aware adaptive model, dubbed RAM-CG, that consists of a relation-discovery module to explore latent relations behind edges.
RAM-CG provides significant 2.2%, 6.9% and 6.6% accuracy improvements over the state-of-the-art results on the CitationNet, OGBN-arxiv and TWITCH datasets.
arXiv Detail & Related papers (2023-08-16T09:53:20Z)
- Knowledge Distillation via Token-level Relationship Graph [12.356770685214498]
We propose a novel method called Knowledge Distillation with Token-level Relationship Graph (TRG).
By employing TRG, the student model can effectively emulate higher-level semantic information from the teacher model.
We conduct experiments to evaluate the effectiveness of the proposed method against several state-of-the-art approaches.
arXiv Detail & Related papers (2023-06-20T08:16:37Z)
- Graph-based Knowledge Distillation: A survey and experimental evaluation [4.713436329217004]
Knowledge Distillation (KD) has been introduced to enhance existing Graph Neural Networks (GNNs)
KD involves transferring the soft-label supervision of the large teacher model to the small student model while maintaining prediction performance.
This paper first introduces the background of graph and KD. It then provides a comprehensive summary of three types of Graph-based Knowledge Distillation methods.
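The soft-label supervision mentioned above is usually the temperature-scaled KL-divergence objective of Hinton et al.; a minimal sketch (function names and the temperature value are illustrative, not taken from the surveyed paper):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Numerically stable temperature-scaled softmax."""
    z = logits / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(teacher_logits, student_logits, T=4.0):
    """Soft-label knowledge distillation: KL divergence between the
    temperature-softened teacher and student distributions, scaled by
    T^2 so gradients stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # teacher soft labels
    q = softmax(student_logits, T)  # student predictions
    return (T ** 2) * np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1))
```

In training, this term is typically added to the ordinary cross-entropy loss on the hard labels, with a mixing weight.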
arXiv Detail & Related papers (2023-02-27T11:39:23Z)
- Heterogeneous Graph Neural Networks using Self-supervised Reciprocally Contrastive Learning [102.9138736545956]
Heterogeneous graph neural network (HGNN) is a very popular technique for the modeling and analysis of heterogeneous graphs.
We develop for the first time a novel and robust heterogeneous graph contrastive learning approach, namely HGCL, which introduces two views on respective guidance of node attributes and graph topologies.
In this new approach, we adopt distinct but most suitable attribute and topology fusion mechanisms in the two views, which are conducive to mining relevant information in attributes and topologies separately.
arXiv Detail & Related papers (2022-04-30T12:57:02Z)
- Graph Flow: Cross-layer Graph Flow Distillation for Dual-Efficient Medical Image Segmentation [0.76146285961466]
We propose Graph Flow, a novel comprehensive knowledge distillation method, to exploit the cross-layer graph flow knowledge for both network-efficient and annotation-efficient medical image segmentation.
In this paper, we demonstrate the prominent ability of our method, which achieves state-of-the-art performance on different-modality and multi-category medical image datasets.
arXiv Detail & Related papers (2022-03-16T14:56:02Z)
- Group Contrastive Self-Supervised Learning on Graphs [101.45974132613293]
We study self-supervised learning on graphs using contrastive methods.
We argue that contrasting graphs in multiple subspaces enables graph encoders to capture more abundant characteristics.
arXiv Detail & Related papers (2021-07-20T22:09:21Z)
- Structured Landmark Detection via Topology-Adapting Deep Graph Learning [75.20602712947016]
We present a new topology-adapting deep graph learning approach for accurate anatomical facial and medical landmark detection.
The proposed method constructs graph signals leveraging both local image features and global shape features.
Experiments are conducted on three public facial image datasets (WFLW, 300W, and COFW-68) as well as three real-world X-ray medical datasets (Cephalometric (public), Hand, and Pelvis).
arXiv Detail & Related papers (2020-04-17T11:55:03Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs, that are represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.