Attention Awareness Multiple Instance Neural Network
- URL: http://arxiv.org/abs/2205.13750v1
- Date: Fri, 27 May 2022 03:29:17 GMT
- Title: Attention Awareness Multiple Instance Neural Network
- Authors: Jingjun Yi and Beichen Zhou
- Abstract summary: We propose an attention awareness multiple instance neural network framework.
It consists of an instance-level classifier, a trainable MIL pooling operator based on spatial attention and a bag-level classification layer.
Extensive experiments on a series of pattern recognition tasks demonstrate that our framework outperforms many state-of-the-art MIL methods.
- Score: 4.061135251278187
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiple instance learning is well suited to many pattern recognition tasks
with weakly annotated data. The combination of artificial neural networks and
multiple instance learning offers an end-to-end solution and has been widely
adopted. However, two challenges remain. First, current MIL pooling
operators are usually pre-defined and lack the flexibility to mine key instances.
Second, in current solutions, the bag-level representation can be inaccurate
or inaccessible. To this end, we propose an attention awareness multiple
instance neural network framework in this paper. It consists of an
instance-level classifier, a trainable MIL pooling operator based on spatial
attention, and a bag-level classification layer. Extensive experiments on a
series of pattern recognition tasks demonstrate that our framework outperforms
many state-of-the-art MIL methods and validate the effectiveness of our
proposed attention MIL pooling operators.
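The listing ships no code, but the three-part architecture the abstract describes maps naturally onto a few lines of PyTorch. The sketch below is a minimal illustration of that shape (instance-level classifier, trainable attention-based MIL pooling, bag-level classification layer); all layer sizes and the exact attention form are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class AttentionMILNet(nn.Module):
    """Illustrative MIL network: instance classifier -> trainable
    attention pooling -> bag-level classifier. Dimensions are
    assumptions, not the paper's exact configuration."""

    def __init__(self, in_dim=512, hid_dim=128, n_classes=2):
        super().__init__()
        # Instance-level classifier / feature extractor.
        self.instance_net = nn.Sequential(
            nn.Linear(in_dim, hid_dim), nn.ReLU()
        )
        # Trainable attention scoring: one scalar weight per instance.
        self.attention = nn.Sequential(
            nn.Linear(hid_dim, 64), nn.Tanh(), nn.Linear(64, 1)
        )
        # Bag-level classification layer.
        self.bag_classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, bag):                      # bag: (n_instances, in_dim)
        h = self.instance_net(bag)               # (n, hid_dim)
        a = torch.softmax(self.attention(h), 0)  # (n, 1), sums to 1 over bag
        z = (a * h).sum(dim=0)                   # attention-weighted bag embedding
        return self.bag_classifier(z), a         # bag logits + attention map

bag = torch.randn(20, 512)            # a bag of 20 instance features
logits, attn = AttentionMILNet()(bag)
```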
Related papers
- Multiple Instance Verification [11.027466339522777]
We show that naive adaptations of attention-based multiple instance learning methods and standard verification methods are unsuitable for this setting.
Under the CAP framework, we propose two novel attention functions to address the challenge of distinguishing between highly similar instances in a target bag.
arXiv Detail & Related papers (2024-07-09T04:51:22Z)
- Rethinking Attention-Based Multiple Instance Learning for Whole-Slide Pathological Image Classification: An Instance Attribute Viewpoint [11.09441191807822]
Multiple instance learning (MIL) is a robust paradigm for whole-slide image (WSI) analysis in pathology.
This paper proposes an Attribute-Driven MIL (AttriMIL) framework to address these issues.
arXiv Detail & Related papers (2024-03-30T13:04:46Z)
- Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps [56.827895559823126]
The self-organizing map (SOM) is a neural model often used for clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show almost a twofold increase in accuracy.
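The summary does not spell out the continual SOM's low-memory mechanism, but the online SOM update it generalizes is standard: find the best-matching unit for each streamed sample and pull nearby units toward it. A minimal NumPy sketch, with grid size and decay schedules as illustrative assumptions:

```python
import numpy as np

def som_online_step(weights, x, t, grid, lr0=0.5, sigma0=2.0, tau=500.0):
    """One online SOM update: find the best-matching unit (BMU),
    then move each unit toward x, weighted by a Gaussian
    neighborhood around the BMU. Decay schedules are assumptions."""
    lr = lr0 * np.exp(-t / tau)                 # decaying learning rate
    sigma = sigma0 * np.exp(-t / tau)           # shrinking neighborhood
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)  # grid distance to BMU
    h = np.exp(-d2 / (2 * sigma ** 2))          # neighborhood function
    weights += lr * h[:, None] * (x - weights)
    return weights

# 5x5 map over 3-D inputs; samples arrive one at a time (task-free stream).
grid = np.array([(i, j) for i in range(5) for j in range(5)], float)
weights = np.random.rand(25, 3)
for t, x in enumerate(np.random.rand(1000, 3)):
    weights = som_online_step(weights, x, t, grid)
```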
arXiv Detail & Related papers (2024-02-19T19:11:22Z)
- Multiple Instance Learning Framework with Masked Hard Instance Mining for Whole Slide Image Classification [11.996318969699296]
This paper presents masked hard instance mining for MIL (MHIM-MIL).
MHIM-MIL uses a Siamese structure (Teacher-Student) with a consistency constraint to explore potential hard instances.
Experimental results on the CAMELYON-16 and TCGA Lung Cancer datasets demonstrate that MHIM-MIL outperforms other recent methods in both performance and training cost.
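As a rough illustration of the mining step: assuming the teacher branch produces per-instance attention scores, masking the most salient (easiest) instances forces the student to learn from harder ones. The mask ratio and score source below are assumptions, not the paper's exact recipe.

```python
import torch

def mask_easy_instances(teacher_attn, feats, mask_ratio=0.1):
    """Hard-instance-mining sketch: drop the instances the teacher
    attends to most (the 'easy' salient ones), keeping the rest for
    the student. mask_ratio is an illustrative hyperparameter."""
    n = feats.size(0)
    k = max(1, int(n * mask_ratio))
    # Indices of the k highest-attention (easiest) instances.
    easy = torch.topk(teacher_attn.squeeze(-1), k).indices
    keep = torch.ones(n, dtype=torch.bool)
    keep[easy] = False
    return feats[keep]

feats = torch.randn(100, 512)        # instance features of one WSI bag
teacher_attn = torch.rand(100, 1)    # teacher's per-instance attention
student_input = mask_easy_instances(teacher_attn, feats)
```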
arXiv Detail & Related papers (2023-07-28T01:40:04Z)
- Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks.
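To make the "pseudo-contrastive" intuition concrete, here is a generic supervised-contrastive stand-in: representations sharing a label (pooled across tasks) are pulled together and others pushed apart. This illustrates the loss family the analysis identifies, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def pseudo_contrastive_loss(z, labels, temp=0.1):
    """Generic supervised-contrastive stand-in: representations with
    the same label (across tasks) are pulled together, others pushed
    apart. Temperature and form are illustrative assumptions."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temp                       # pairwise similarities
    same = labels[:, None].eq(labels[None, :])   # same-label mask
    eye = torch.eye(len(z), dtype=torch.bool)
    logits = sim.masked_fill(eye, float('-inf'))
    log_p = logits - logits.logsumexp(dim=1, keepdim=True)
    pos = same & ~eye                            # positive pairs only
    return -(log_p[pos]).mean()

z = torch.randn(32, 64, requires_grad=True)   # representations
labels = torch.randint(0, 4, (32,))           # labels pooled across tasks
loss = pseudo_contrastive_loss(z, labels)
```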
arXiv Detail & Related papers (2023-07-13T16:39:08Z)
- A Multi-label Continual Learning Framework to Scale Deep Learning Approaches for Packaging Equipment Monitoring [57.5099555438223]
We study multi-label classification in the continual scenario for the first time.
We propose an efficient approach that has a logarithmic complexity with regard to the number of tasks.
We validate our approach on a real-world multi-label forecasting problem from the packaging industry.
arXiv Detail & Related papers (2022-08-08T15:58:39Z)
- Dual-stream Maximum Self-attention Multi-instance Learning [11.685285490589981]
Multi-instance learning (MIL) is a form of weakly supervised learning where a single class label is assigned to a bag of instances while the instance-level labels are not available.
We propose a dual-stream maximum self-attention MIL model (DSMIL) parameterized by neural networks.
Our method outperforms the best existing MIL methods and achieves state-of-the-art results on benchmark MIL datasets.
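A simplified sketch of the dual-stream idea: one stream scores instances and picks the critical (max-scoring) one; the other attends every instance against that critical instance to form the bag embedding. Dimensions and the exact scoring are assumptions rather than DSMIL's published architecture.

```python
import torch
import torch.nn as nn

class DualStreamMIL(nn.Module):
    """Simplified dual-stream sketch: stream 1 picks the critical
    instance by max instance score; stream 2 attends every instance
    against that critical instance to build the bag embedding."""

    def __init__(self, dim=512, n_classes=2):
        super().__init__()
        self.inst_score = nn.Linear(dim, n_classes)   # stream 1
        self.q = nn.Linear(dim, 128)                  # query projection
        self.v = nn.Linear(dim, dim)                  # value projection
        self.bag_head = nn.Linear(dim, n_classes)     # stream 2 head

    def forward(self, bag):                           # bag: (n, dim)
        scores = self.inst_score(bag)                 # (n, n_classes)
        crit = bag[scores.max(dim=1).values.argmax()] # critical instance
        attn = torch.softmax(self.q(bag) @ self.q(crit), dim=0)  # (n,)
        z = attn @ self.v(bag)                        # attended bag embedding
        return self.bag_head(z), scores.max(dim=0).values

bag = torch.randn(50, 512)
bag_logits, max_inst_logits = DualStreamMIL()(bag)
```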
arXiv Detail & Related papers (2020-06-09T22:44:58Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, also known as counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
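One way to read "gradient supervision" is as an auxiliary loss that aligns the model's input-gradient with the direction from an example to its counterfactual pair. The sketch below implements that reading; it is a plausible rendering of the idea, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gradient_supervision_loss(model, x, x_cf):
    """Illustrative auxiliary objective: encourage the input-gradient
    of the model's score to align with the direction from each example
    to its counterfactual pair. A sketch of the idea only."""
    x = x.clone().requires_grad_(True)
    score = model(x).sum()
    g, = torch.autograd.grad(score, x, create_graph=True)
    # 1 - cosine similarity between gradient and counterfactual direction.
    return (1 - F.cosine_similarity(g, x_cf - x, dim=1)).mean()

model = torch.nn.Sequential(torch.nn.Linear(10, 1))  # toy stand-in model
x, x_cf = torch.randn(8, 10), torch.randn(8, 10)     # example / counterfactual
aux = gradient_supervision_loss(model, x, x_cf)
```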
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
- A Unified Object Motion and Affinity Model for Online Multi-Object Tracking [127.5229859255719]
We propose a novel MOT framework, named UMA, that unifies the object motion and affinity models in a single network.
UMA integrates single object tracking and metric learning into a unified triplet network by means of multi-task learning.
We equip our model with a task-specific attention module, which is used to boost task-aware feature learning.
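As a hedged illustration of the metric-learning branch: a shared embedding trained with a margin-based triplet loss, where anchor and positive come from the same object across frames and the negative from a different object. The architecture and margin below are assumptions, not UMA's published design.

```python
import torch
import torch.nn as nn

# Shared embedding serving both tracking and metric learning,
# trained with a standard margin-based triplet loss.
embed = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, 64))
triplet = nn.TripletMarginLoss(margin=0.5)

anchor = embed(torch.randn(16, 256))     # target template features
positive = embed(torch.randn(16, 256))   # same object, later frame
negative = embed(torch.randn(16, 256))   # a different object
loss = triplet(anchor, positive, negative)
```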
arXiv Detail & Related papers (2020-03-25T09:36:43Z)
- Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers [48.11796810425477]
We show that information bottleneck (IB) learning is, in fact, equivalent to a special class of the quantization problem.
We propose a novel learning framework, "Aggregated Learning", for classification with neural network models.
The effectiveness of this framework is verified through extensive experiments on standard image recognition and text classification tasks.
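To give the quantization viewpoint a concrete shape: a minimal vector-quantization bottleneck snaps each representation to its nearest codeword and passes gradients straight through. This illustrates learning-by-quantization generically; it is not the paper's Aggregated Learning construction.

```python
import torch
import torch.nn as nn

class VQBottleneck(nn.Module):
    """Minimal vector-quantization bottleneck: snap each representation
    to its nearest codeword, with a straight-through gradient. A generic
    illustration, not the paper's method."""

    def __init__(self, n_codes=64, dim=32):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(n_codes, dim))

    def forward(self, z):                         # z: (batch, dim)
        d = torch.cdist(z, self.codebook)         # distances to codewords
        q = self.codebook[d.argmin(dim=1)]        # nearest codeword
        return z + (q - z).detach()               # straight-through estimator

z = torch.randn(8, 32)
zq = VQBottleneck()(z)
```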
arXiv Detail & Related papers (2020-01-12T16:22:24Z)