Cooperative learning for multi-view analysis
- URL: http://arxiv.org/abs/2112.12337v1
- Date: Thu, 23 Dec 2021 03:13:25 GMT
- Title: Cooperative learning for multi-view analysis
- Authors: Daisy Yi Ding, Robert Tibshirani
- Abstract summary: We propose a new method for supervised learning with multiple sets of features ("views")
Cooperative learning combines the usual squared error loss of predictions with an "agreement" penalty to encourage the predictions from different data views to agree.
We illustrate the effectiveness of our proposed method on simulated and real data examples.
- Score: 2.368995563245609
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new method for supervised learning with multiple sets of
features ("views"). Cooperative learning combines the usual squared error loss
of predictions with an "agreement" penalty to encourage the predictions from
different data views to agree. By varying the weight of the agreement penalty,
we get a continuum of solutions that include the well-known early and late
fusion approaches. Cooperative learning chooses the degree of agreement (or
fusion) in an adaptive manner, using a validation set or cross-validation to
estimate test set prediction error. One version of our fitting procedure is
modular, where one can choose different fitting mechanisms (e.g. lasso, random
forests, boosting, neural networks) appropriate for different data views. In
the setting of cooperative regularized linear regression, the method combines
the lasso penalty with the agreement penalty. The method can be especially
powerful when the different data views share some underlying relationship in
their signals that we aim to strengthen, while each view has its idiosyncratic
noise that we aim to reduce. We illustrate the effectiveness of our proposed
method on simulated and real data examples.
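For the two-view cooperative regularized linear regression described above, the combined objective (squared error loss + agreement penalty + lasso penalty) can be solved as a single lasso on augmented data. The sketch below illustrates that reformulation; the `cooperative_lasso` helper name is ours, the penalty weighting uses scikit-learn's lasso scaling (which divides the squared error by 2n), and it is an illustration rather than the authors' reference implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def cooperative_lasso(X1, X2, y, rho=0.5, lam=0.1):
    """Two-view cooperative regularized regression, sketched via an
    augmented-data lasso: the top rows fit y with both views, and the
    bottom rows penalize disagreement between the two views' fits,
    scaled by sqrt(rho)."""
    n = X1.shape[0]
    sr = np.sqrt(rho)
    X_aug = np.vstack([
        np.hstack([X1, X2]),             # squared error loss rows
        np.hstack([-sr * X1, sr * X2]),  # agreement penalty rows
    ])
    y_aug = np.concatenate([y, np.zeros(n)])
    # Note: sklearn's Lasso objective is (1/(2n))||y - Xw||^2 + lam*||w||_1,
    # so lam is on a different scale than an unnormalized formulation.
    model = Lasso(alpha=lam, fit_intercept=False)
    model.fit(X_aug, y_aug)
    p1 = X1.shape[1]
    return model.coef_[:p1], model.coef_[p1:]
```

Setting `rho=0` recovers early fusion (an ordinary lasso on the concatenated views), while increasing `rho` pushes the per-view predictions toward agreement, tracing out the continuum of solutions the abstract describes.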
Related papers
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
  - Abstract summary:
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD)
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach could outperform state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- Conformalized Late Fusion Multi-View Learning [18.928543069018865]
Uncertainty quantification for multi-view learning is motivated by the increasing use of multi-view data in scientific problems.
A common variant of multi-view learning is late fusion: train separate predictors on individual views and combine them after single-view predictions are available.
We propose a novel methodology, Multi-View Conformal Prediction (MVCP), where conformal prediction is instead performed separately on the single-view predictors and only fused subsequently.
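As background for this entry, plain split conformal prediction is the single-view primitive that MVCP applies per view before fusing. A minimal sketch of that primitive is below; the function name and interface are illustrative and this is not the MVCP procedure itself.

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_hat_test, alpha=0.1):
    """Plain split conformal prediction: compute a finite-sample
    corrected quantile of absolute residuals on a held-out calibration
    set, then widen each test prediction by that quantile to obtain a
    (1 - alpha) prediction interval."""
    n = len(residuals_cal)
    # Finite-sample correction: use the ceil((n+1)(1-alpha))/n quantile.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(np.abs(residuals_cal), q_level)
    return y_hat_test - q, y_hat_test + q
```

MVCP's contribution, per the summary above, is to calibrate each single-view predictor separately in this fashion and fuse the resulting sets afterward, rather than conformalizing a single fused predictor.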
arXiv Detail & Related papers (2024-05-25T14:11:01Z)
- Convergence Behavior of an Adversarial Weak Supervision Method [10.409652277630133]
Weak supervision is a paradigm that subsumes several subareas of machine learning.
By using weakly labeled data to train modern machine learning methods, the cost of acquiring large amounts of hand-labeled data can be reduced.
Approaches to combining the rules of thumb fall into two camps, reflecting different ideologies of statistical estimation.
arXiv Detail & Related papers (2024-05-25T02:33:17Z)
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- On the Out-of-Distribution Coverage of Combining Split Conformal Prediction and Bayesian Deep Learning [1.131316248570352]
We focus on combining Bayesian deep learning with split conformal prediction and how this combination affects out-of-distribution coverage.
Our results suggest that combining Bayesian deep learning models with split conformal prediction can, in some cases, cause unintended consequences such as reducing out-of-distribution coverage.
arXiv Detail & Related papers (2023-11-21T15:50:37Z)
- Ensemble Modeling for Multimodal Visual Action Recognition [50.38638300332429]
We propose an ensemble modeling approach for multimodal action recognition.
We independently train individual modality models using a variant of focal loss tailored to handle the long-tailed distribution of the MECCANO [21] dataset.
arXiv Detail & Related papers (2023-08-10T08:43:20Z)
- Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs at minimum computational expense.
arXiv Detail & Related papers (2022-04-05T12:52:45Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
- Learning from Aggregate Observations [82.44304647051243]
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances.
We present a general probabilistic framework that accommodates a variety of aggregate observations.
Simple maximum likelihood solutions can be applied to various differentiable models.
arXiv Detail & Related papers (2020-04-14T06:18:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.