Bag Graph: Multiple Instance Learning using Bayesian Graph Neural Networks
- URL: http://arxiv.org/abs/2202.11132v1
- Date: Tue, 22 Feb 2022 19:16:44 GMT
- Title: Bag Graph: Multiple Instance Learning using Bayesian Graph Neural Networks
- Authors: Soumyasundar Pal, Antonios Valkanas, Florence Regol, Mark Coates
- Abstract summary: Multiple Instance Learning (MIL) is a weakly supervised learning problem where the aim is to assign labels to sets or bags of instances.
Recent work has shown promising results for neural network models in the MIL setting.
We consider modelling the interactions between bags using a graph and employ Graph Neural Networks (GNNs) to facilitate end-to-end learning.
- Score: 22.07812381907525
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiple Instance Learning (MIL) is a weakly supervised learning problem
where the aim is to assign labels to sets or bags of instances, as opposed to
traditional supervised learning where each instance is assumed to be
independent and identically distributed (IID) and is to be labeled
individually. Recent work has shown promising results for neural network models
in the MIL setting. Instead of focusing on each instance, these models are
trained in an end-to-end fashion to learn effective bag-level representations
by suitably combining permutation invariant pooling techniques with neural
architectures. In this paper, we consider modelling the interactions between
bags using a graph and employ Graph Neural Networks (GNNs) to facilitate
end-to-end learning. Since a meaningful graph representing dependencies between
bags is rarely available, we propose to use a Bayesian GNN framework that can
generate a likely graph structure for scenarios where there is uncertainty in
the graph or when no graph is available. Empirical results demonstrate the
efficacy of the proposed technique for several MIL benchmark tasks and a
distribution regression task.
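The abstract describes two ingredients: permutation-invariant pooling to obtain bag-level representations, and a GNN that propagates information over a graph whose nodes are bags. The following is a minimal NumPy sketch of those two steps. The toy data, the fixed chain graph, and the single GCN-style layer are illustrative assumptions only; the paper instead infers a likely graph with a Bayesian GNN framework and trains the whole model end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 bags, each a variable-sized set of instances with 3-dim features.
# (Hypothetical data; the paper's experiments use standard MIL benchmarks.)
bags = [rng.normal(size=(n, 3)) for n in (5, 2, 7, 4)]

# Permutation-invariant pooling: the mean over instances gives a bag embedding
# that is unchanged under any reordering of the instances in the bag.
H = np.stack([b.mean(axis=0) for b in bags])           # shape (num_bags, 3)

# Adjacency over bags. When no graph is available, the paper generates a likely
# structure with a Bayesian GNN; here we simply assume a fixed chain graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                                  # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt               # symmetric normalisation

# One GCN-style propagation layer mixes information between neighbouring bags.
W = rng.normal(size=(3, 2))
Z = np.maximum(A_norm @ H @ W, 0.0)                    # ReLU(A_norm H W)

print(Z.shape)  # (4, 2): one refined representation per bag
```

Because mean pooling ignores instance order, shuffling the instances inside any bag leaves its pooled embedding, and therefore the propagated representations, unchanged.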
Related papers
- GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z)
- Two-level Graph Network for Few-Shot Class-Incremental Learning [7.815043173207539]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
Existing FSCIL methods ignore the semantic relationships between the sample level and the class level.
In this paper, we design a two-level graph network for FSCIL named Sample-level and Class-level Graph Neural Network (SCGN).
arXiv Detail & Related papers (2023-03-24T08:58:08Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
Besides, we develop an adaptive node-level pre-training method to dynamically mask nodes to distribute them evenly in the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Graph Neural Network with Curriculum Learning for Imbalanced Node Classification [21.085314408929058]
Graph Neural Network (GNN) is an emerging technique for graph-based learning tasks such as node classification.
In this work, we reveal the vulnerability of GNN to the imbalance of node labels.
We propose a novel graph neural network framework with curriculum learning (GNN-CL) consisting of two modules.
arXiv Detail & Related papers (2022-02-05T10:46:11Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.