Hyperbolic Secant representation of the logistic function: Application to probabilistic Multiple Instance Learning for CT intracranial hemorrhage detection
- URL: http://arxiv.org/abs/2403.14829v1
- Date: Thu, 21 Mar 2024 20:43:34 GMT
- Title: Hyperbolic Secant representation of the logistic function: Application to probabilistic Multiple Instance Learning for CT intracranial hemorrhage detection
- Authors: F. M. Castro-Macías, P. Morales-Álvarez, Y. Wu, R. Molina, A. K. Katsaggelos
- Abstract summary: Multiple Instance Learning (MIL) is a weakly supervised paradigm that has been successfully applied to many different scientific areas.
We propose a general GP-based MIL method that takes different forms by simply leveraging distributions other than the Hyperbolic Secant one.
This is validated in a comprehensive experimental study including one synthetic MIL dataset, two well-known MIL benchmarks, and a real-world medical problem.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multiple Instance Learning (MIL) is a weakly supervised paradigm that has been successfully applied to many different scientific areas and is particularly well suited to medical imaging. Probabilistic MIL methods, and more specifically Gaussian Processes (GPs), have achieved excellent results due to their high expressiveness and uncertainty quantification capabilities. One of the most successful GP-based MIL methods, VGPMIL, resorts to a variational bound to handle the intractability of the logistic function. Here, we formulate VGPMIL using Pólya-Gamma random variables. This approach yields the same variational posterior approximations as the original VGPMIL, which is a consequence of the two representations that the Hyperbolic Secant distribution admits. This leads us to propose a general GP-based MIL method that takes different forms by simply leveraging distributions other than the Hyperbolic Secant one. Using the Gamma distribution we arrive at a new approach that obtains competitive or superior predictive performance and efficiency. This is validated in a comprehensive experimental study including one synthetic MIL dataset, two well-known MIL benchmarks, and a real-world medical problem. We expect that this work provides useful ideas beyond MIL that can foster further research in the field.
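As background for the Pólya-Gamma formulation mentioned in the abstract, the sketch below recalls the standard identity of Polson, Scott and Windle (2013) that writes the logistic function as a Gaussian scale mixture with Pólya-Gamma mixing; it is included only as a reminder of that well-known result, not as a reproduction of the paper's own derivation.

```latex
% Standard Pólya-Gamma identity (Polson, Scott & Windle, 2013), quoted as
% background only; p_PG(w | b, 0) denotes the Pólya-Gamma density.
\[
  \sigma(x) = \frac{e^{x}}{1 + e^{x}} = \frac{e^{x/2}}{2\cosh(x/2)},
  \qquad
  \frac{1}{\cosh^{b}(x/2)} = \int_{0}^{\infty} e^{-\omega x^{2}/2}\,
      p_{\mathrm{PG}}(\omega \mid b, 0)\, \mathrm{d}\omega ,
\]
\[
  \text{so that, taking } b = 1, \qquad
  \sigma(x) = \tfrac{1}{2}\, e^{x/2} \int_{0}^{\infty} e^{-\omega x^{2}/2}\,
      p_{\mathrm{PG}}(\omega \mid 1, 0)\, \mathrm{d}\omega .
\]
```

Conditioning on the auxiliary variable ω makes the logistic likelihood Gaussian in x, which is what renders the variational updates tractable in Pólya-Gamma-augmented models.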
Related papers
- xMIL: Insightful Explanations for Multiple Instance Learning in Histopathology [13.939494815120666]
Multiple instance learning (MIL) is an effective and widely used approach for weakly supervised machine learning.
We revisit MIL through the lens of explainable AI (XAI) and introduce xMIL, a refined framework with more general assumptions.
Our approach consistently outperforms previous explanation attempts with particularly improved faithfulness scores on challenging biomarker prediction tasks.
arXiv Detail & Related papers (2024-06-06T17:26:40Z) - A Multi-Grained Symmetric Differential Equation Model for Learning Protein-Ligand Binding Dynamics [74.93549765488103]
In drug discovery, molecular dynamics simulation provides a powerful tool for predicting binding affinities, estimating transport properties, and exploring pocket sites.
We propose NeuralMD, the first machine learning surrogate that can facilitate numerical MD and provide accurate simulations in protein-ligand binding.
We show the efficiency and effectiveness of NeuralMD, with a 2000× speedup over standard numerical MD simulation and outperforming all other ML approaches by up to 80% under the stability metric.
arXiv Detail & Related papers (2024-01-26T09:35:17Z) - Introducing instance label correlation in multiple instance learning. Application to cancer detection on histopathological images [5.895585247199983]
In this work, we extend a state-of-the-art GP-based MIL method, which is called VGPMIL-PR, to exploit such correlation.
We show that our model achieves better results than other state-of-the-art probabilistic MIL methods.
arXiv Detail & Related papers (2023-10-30T08:57:59Z) - Reproducibility in Multiple Instance Learning: A Case For Algorithmic Unit Tests [59.623267208433255]
Multiple Instance Learning (MIL) is a sub-domain of classification in which positive and negative labels are attached to "bags" of inputs rather than to individual instances.
In this work, we examine five of the most prominent deep-MIL models and find that none of them respects the standard MIL assumption.
We identify and demonstrate this problem via a proposed "algorithmic unit test", where we create synthetic datasets that a MIL-respecting model can solve (a minimal sketch of the standard MIL assumption behind such a test follows after this list).
arXiv Detail & Related papers (2023-10-27T03:05:11Z) - PDL: Regularizing Multiple Instance Learning with Progressive Dropout Layers [2.069061136213899]
Multiple instance learning (MIL) is a weakly supervised learning approach that seeks to assign binary class labels to collections of instances known as bags.
We present a novel approach in the form of a Progressive Dropout Layer (PDL) to address overfitting and empower the MIL model in uncovering intricate and impactful feature representations.
arXiv Detail & Related papers (2023-08-19T21:20:30Z) - Unbiased Multiple Instance Learning for Weakly Supervised Video Anomaly Detection [74.80595632328094]
Multiple Instance Learning (MIL) is the prevailing paradigm in Weakly Supervised Video Anomaly Detection (WSVAD).
We propose a new MIL framework: Unbiased MIL (UMIL), to learn unbiased anomaly features that improve WSVAD.
arXiv Detail & Related papers (2023-03-22T08:11:22Z) - Probabilistic Attention based on Gaussian Processes for Deep Multiple Instance Learning [12.594098548008832]
We introduce the Attention Gaussian Process (AGP) model, a novel probabilistic attention mechanism based on Gaussian Processes for deep MIL.
AGP provides accurate bag-level predictions as well as instance-level explainability, and can be trained end-to-end.
We experimentally show that predictive uncertainty correlates with the risk of wrong predictions, and therefore it is a good indicator of reliability in practice.
arXiv Detail & Related papers (2023-02-08T13:58:11Z) - Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform competing state-of-the-art approaches on a diverse set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z) - ProtoMIL: Multiple Instance Learning with Prototypical Parts for Fine-Grained Interpretability [2.094672430475796]
Multiple Instance Learning (MIL) has gained popularity in many real-life machine learning applications due to its weakly supervised nature.
In this paper, we introduce ProtoMIL, a novel self-explainable MIL method inspired by the case-based reasoning process that operates on visual prototypes.
By incorporating prototypical features into the object description, ProtoMIL combines model accuracy with fine-grained interpretability.
arXiv Detail & Related papers (2021-08-24T10:02:31Z) - Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show that DGPs can outperform GPs on objective functions with multimodal distributions while maintaining comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z) - Neural Methods for Point-wise Dependency Estimation [129.93860669802046]
We focus on estimating point-wise dependency (PD), which quantitatively measures how likely two outcomes co-occur.
We demonstrate the effectiveness of our approaches in 1) MI estimation, 2) self-supervised representation learning, and 3) cross-modal retrieval tasks.
arXiv Detail & Related papers (2020-06-09T23:26:15Z)
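The "algorithmic unit test" mentioned in the Reproducibility entry above rests on the standard MIL assumption: a bag is positive if and only if it contains at least one positive instance. The following is a hypothetical, minimal Python sketch of that assumption and of a synthetic bag generator in the same spirit; the function names and the specific data distribution are illustrative and are not taken from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bag(bag_size: int = 10, positive: bool = False) -> np.ndarray:
    """Synthetic bag of 2-D instances: negatives ~ N(0, I); a positive bag
    hides exactly one shifted "witness" instance among the negatives."""
    instances = rng.normal(0.0, 1.0, size=(bag_size, 2))
    if positive:
        instances[rng.integers(bag_size)] += 3.0  # the single positive witness
    return instances

def bag_label(instance_labels: np.ndarray) -> int:
    """Standard MIL assumption: the bag label is the logical OR (max) of the
    instance labels."""
    return int(np.max(instance_labels) > 0)

# A MIL-respecting bag classifier should flag a bag as positive as soon as one
# instance provides positive evidence, regardless of how many negative
# instances surround it; a model that cannot solve bags built this way does
# not respect the assumption.
bags = [make_bag(positive=(i % 2 == 1)) for i in range(4)]
labels = [i % 2 for i in range(4)]
```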
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.