Interventional Multi-Instance Learning with Deconfounded Instance-Level
Prediction
- URL: http://arxiv.org/abs/2204.09204v2
- Date: Fri, 22 Apr 2022 06:00:15 GMT
- Title: Interventional Multi-Instance Learning with Deconfounded Instance-Level
Prediction
- Authors: Tiancheng Lin, Hongteng Xu, Canqian Yang and Yi Xu
- Abstract summary: We propose a novel interventional multi-instance learning (IMIL) framework to achieve deconfounded instance-level prediction.
Unlike traditional likelihood-based strategies, we design an Expectation-Maximization (EM) algorithm based on causal intervention.
Our IMIL method substantially reduces false positives and outperforms state-of-the-art MIL methods.
- Score: 29.151629044965983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When applying multi-instance learning (MIL) to make predictions for bags of
instances, the prediction accuracy of an instance often depends on not only the
instance itself but also its context in the corresponding bag. From the
viewpoint of causal inference, such bag contextual prior works as a confounder
and may result in model robustness and interpretability issues. Focusing on
this problem, we propose a novel interventional multi-instance learning (IMIL)
framework to achieve deconfounded instance-level prediction. Unlike traditional
likelihood-based strategies, we design an Expectation-Maximization (EM)
algorithm based on causal intervention, providing a robust instance selection
in the training phase and suppressing the bias caused by the bag contextual
prior. Experiments on pathological image analysis demonstrate that our IMIL
method substantially reduces false positives and outperforms state-of-the-art
MIL methods.
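The abstract describes an EM algorithm that alternates instance selection and model updates. As a rough illustration only, the sketch below shows a *generic* EM-style instance-selection loop for MIL under the standard assumption (a positive bag contains at least one positive instance); it is not the authors' IMIL algorithm, and the causal-intervention step that deconfounds the bag context is omitted. All names (`em_instance_selection`, the linear scorer) are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def em_instance_selection(bags, bag_labels, n_iters=10):
    """bags: list of (n_i, d) arrays; bag_labels: 0/1 per bag.
    Returns the weights of a simple linear instance scorer."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=bags[0].shape[1])

    for _ in range(n_iters):
        # E-step: pseudo-label instances with the current model, constrained
        # by the bag label (instances in negative bags stay negative).
        targets = []
        for X, y in zip(bags, bag_labels):
            p = sigmoid(X @ w)
            if y == 0:
                t = np.zeros(len(X))
            else:
                t = (p > 0.5).astype(float)
                t[np.argmax(p)] = 1.0  # keep at least one positive instance
            targets.append(t)
        # M-step: one gradient step of logistic regression on pseudo-labels.
        X_all = np.vstack(bags)
        t_all = np.concatenate(targets)
        grad = X_all.T @ (sigmoid(X_all @ w) - t_all) / len(t_all)
        w -= 1.0 * grad
    return w
```

On toy separable bags, the learned scorer ranks the true positive instances above the negatives, which is the behavior the E-step's selection relies on.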
Related papers
- Attention Is Not What You Need: Revisiting Multi-Instance Learning for Whole Slide Image Classification [51.95824566163554]
We argue that synergizing the standard MIL assumption with variational inference encourages the model to focus on tumour morphology instead of spurious correlations.
Our method also achieves better classification boundaries for identifying hard instances and mitigates the effect of spurious correlations between bags and labels.
arXiv Detail & Related papers (2024-08-18T12:15:22Z)
- cDP-MIL: Robust Multiple Instance Learning via Cascaded Dirichlet Process [23.266122629592807]
Multiple instance learning (MIL) has been extensively applied to whole slide histopathology image (WSI) analysis.
The existing aggregation strategy in MIL, which primarily relies on the first-order distance between instances, fails to accurately approximate the true feature distribution of each instance.
We propose a new Bayesian nonparametric framework for multiple instance learning, which adopts a cascade of Dirichlet processes (cDP) to incorporate the instance-to-bag characteristic of the WSIs.
arXiv Detail & Related papers (2024-07-16T07:28:39Z)
- Dynamic Policy-Driven Adaptive Multi-Instance Learning for Whole Slide Image Classification [26.896926631411652]
Multi-Instance Learning (MIL) has shown impressive performance for histopathology whole slide image (WSI) analysis using bags or pseudo-bags.
Existing MIL-based technologies suffer from at least one of the following problems: 1) requiring high storage and intensive pre-processing for numerous instances (sampling); 2) potential over-fitting with limited knowledge to predict bag labels (feature representation); 3) pseudo-bag counts and prior biases affecting model robustness and generalizability (decision-making).
arXiv Detail & Related papers (2024-03-09T04:43:24Z)
- Resilient Multiple Choice Learning: A learned scoring scheme with application to audio scene analysis [8.896068269039452]
We introduce Resilient Multiple Choice Learning (rMCL) for conditional distribution estimation in regression settings.
rMCL is a simple framework to tackle multimodal density estimation, using the Winner-Takes-All (WTA) loss for a set of hypotheses.
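The Winner-Takes-All (WTA) loss mentioned here is standard in multiple choice learning: of the K hypotheses, only the one closest to the target incurs (and back-propagates) the loss. The minimal sketch below illustrates just that loss; rMCL additionally learns a scoring head, which is omitted, and the function name is hypothetical.

```python
import numpy as np

def wta_loss(hypotheses, target):
    """hypotheses: (K, d) predictions; target: (d,).
    Returns (loss of the winning hypothesis, winner index)."""
    errors = np.sum((hypotheses - target) ** 2, axis=1)  # squared L2 per head
    winner = int(np.argmin(errors))                      # closest hypothesis
    return float(errors[winner]), winner
```

Because only the winning head is penalized, different heads specialize on different modes of the target distribution, which is why WTA suits multimodal density estimation.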
arXiv Detail & Related papers (2023-11-02T07:54:03Z)
- Active Learning Principles for In-Context Learning with Large Language Models [65.09970281795769]
This paper investigates how Active Learning algorithms can serve as effective demonstration selection methods for in-context learning.
We show that in-context example selection through AL prioritizes high-quality examples that exhibit low uncertainty and bear similarity to the test examples.
arXiv Detail & Related papers (2023-05-23T17:16:04Z)
- ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation [125.52743832477404]
Adversarial Examples Detection (AED) is a crucial defense technique against adversarial attacks.
We propose a new technique, ADDMU, which combines two types of uncertainty estimation for both regular and far-boundary (FB) adversarial example detection.
Our new method outperforms previous methods by 3.6 and 6.0 AUC points under each scenario.
arXiv Detail & Related papers (2022-10-22T09:11:12Z)
- Balancing Bias and Variance for Active Weakly Supervised Learning [9.145168943972067]
Modern multiple instance learning (MIL) models achieve competitive performance at the bag level.
However, instance-level prediction, which is essential for many important applications, is unsatisfactory.
We propose a novel deep subset-selection approach that aims to boost instance-level prediction.
Experiments conducted over multiple real-world datasets clearly demonstrate state-of-the-art instance-level prediction performance.
arXiv Detail & Related papers (2022-06-12T07:15:35Z)
- Risk Consistent Multi-Class Learning from Label Proportions [64.0125322353281]
This study addresses a multiclass learning from label proportions (MCLLP) setting in which training instances are provided in bags.
Most existing MCLLP methods impose bag-wise constraints on the prediction of instances or assign them pseudo-labels.
A risk-consistent method is proposed for instance classification using the empirical risk minimization framework.
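The "bag-wise constraints" that most existing MCLLP methods impose can be pictured as a proportion-matching penalty: the bag's averaged instance predictions should match the given label proportions. The sketch below shows that baseline style of constraint, not the risk-consistent estimator the paper proposes; the function name is hypothetical.

```python
import numpy as np

def proportion_loss(probs, bag_proportions):
    """probs: (n, C) instance class probabilities for one bag;
    bag_proportions: (C,) label proportions given for the bag.
    Returns the squared gap between predicted and given proportions."""
    predicted = probs.mean(axis=0)  # bag-level predicted class proportions
    return float(np.sum((predicted - bag_proportions) ** 2))
```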
arXiv Detail & Related papers (2022-03-24T03:49:04Z)
- A One-step Approach to Covariate Shift Adaptation [82.01909503235385]
A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution.
We propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization.
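For context, the classical two-step baseline this one-step approach improves on first estimates importance weights w(x) ≈ p_test(x) / p_train(x) and then minimizes a weighted empirical risk. The fragment below sketches only that weighted risk with the weights assumed given; the joint one-step optimization of the paper is not reproduced, and the function name is hypothetical.

```python
import numpy as np

def weighted_squared_risk(theta, X, y, weights):
    """Importance-weighted empirical risk for a linear model.
    theta: (d,) parameters; X: (n, d); y: (n,); weights: (n,) density ratios."""
    residual = X @ theta - y
    return float(np.mean(weights * residual ** 2))
```

Down-weighting training points that are rare under the test distribution (weight near 0) removes their influence on the fitted model, which is the core idea of covariate shift adaptation.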
arXiv Detail & Related papers (2020-07-08T11:35:47Z)
- Video Prediction via Example Guidance [156.08546987158616]
In video prediction tasks, one major challenge is to capture the multi-modal nature of future contents and dynamics.
In this work, we propose a simple yet effective framework that can efficiently predict plausible future states.
arXiv Detail & Related papers (2020-07-03T14:57:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.