Adaptive Weighted Multiview Kernel Matrix Factorization with its
application in Alzheimer's Disease Analysis -- A Clustering Perspective
- URL: http://arxiv.org/abs/2303.04154v1
- Date: Tue, 7 Mar 2023 16:05:24 GMT
- Title: Adaptive Weighted Multiview Kernel Matrix Factorization with its
application in Alzheimer's Disease Analysis -- A Clustering Perspective
- Authors: Kai Liu and Yarui Cao
- Abstract summary: We propose a novel model to leverage data from all different modalities/views, which can learn the weights of each view adaptively.
Experimental results on ADNI dataset demonstrate the effectiveness of our proposed method.
- Score: 3.3843930118195407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in technology and equipment provide us with
opportunities to better analyze Alzheimer's disease (AD): we can collect and
employ data from different imaging and genetic modalities that may enhance
predictive performance. To perform better clustering in AD analysis, in this
paper we propose a novel model that leverages data from all available
modalities/views and learns the weight of each view adaptively. Unlike
previous vanilla Non-negative Matrix Factorization, which assumes the data
are linearly separable, we propose a simple yet efficient method based on
kernel matrix factorization that can not only handle non-linear data
structure but also achieve better prediction accuracy. Experimental results
on the ADNI dataset demonstrate the effectiveness of the proposed method and
indicate promising prospects for kernel applications in AD analysis.
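The adaptive-weighting idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the RBF kernel, the symmetric-NMF multiplicative update, and the inverse-error weighting rule are all assumptions standing in for the paper's actual objective and update scheme.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def multiview_kernel_nmf(views, n_clusters, n_iter=200, seed=0):
    """Factorize a weighted sum of view kernels as K ~= H H^T, H >= 0.

    View weights are re-estimated every iteration so that views with
    lower reconstruction error receive higher weight (inverse-error rule,
    an assumed stand-in for the paper's adaptive weighting).
    """
    rng = np.random.default_rng(seed)
    kernels = [rbf_kernel(X) for X in views]
    n = kernels[0].shape[0]
    w = np.full(len(kernels), 1.0 / len(kernels))
    H = np.abs(rng.standard_normal((n, n_clusters)))
    for _ in range(n_iter):
        K = sum(wv * Kv for wv, Kv in zip(w, kernels))
        # Multiplicative update for symmetric NMF: K ~= H H^T
        H *= (K @ H) / np.maximum(H @ (H.T @ H), 1e-12)
        # Adaptive weights: inverse of per-view reconstruction error
        errs = np.array([np.linalg.norm(Kv - H @ H.T) for Kv in kernels])
        w = 1.0 / np.maximum(errs, 1e-12)
        w /= w.sum()
    return H.argmax(axis=1), w
```

Cluster labels are read off as the argmax over the non-negative factor's columns, which is the usual way symmetric kernel factorizations are turned into a hard clustering.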
Related papers
- fastHDMI: Fast Mutual Information Estimation for High-Dimensional Data [2.9901605297536027]
We introduce fastHDMI, a Python package designed for efficient variable screening in high-dimensional datasets.
This work pioneers the application of three mutual information estimation methods for neuroimaging variable selection.
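As a rough illustration of mutual-information-based variable screening (a basic histogram plug-in estimate, not fastHDMI's own estimators, which are optimized for high-dimensional neuroimaging data):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram plug-in estimate of MI between a feature and a target."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def screen_features(X, y, top_k=10):
    """Rank columns of X by estimated MI with y; return top-k column indices."""
    scores = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:top_k]
```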
arXiv Detail & Related papers (2024-10-14T01:49:53Z)
- The Common Stability Mechanism behind most Self-Supervised Learning Approaches [64.40701218561921]
We provide a framework to explain the stability mechanism of different self-supervised learning techniques.
We discuss the working mechanism of contrastive techniques like SimCLR, non-contrastive techniques like BYOL, SWAV, SimSiam, Barlow Twins, and DINO.
We formulate different hypotheses and test them using the ImageNet100 dataset.
arXiv Detail & Related papers (2024-02-22T20:36:24Z)
- Toward the Identifiability of Comparative Deep Generative Models [7.5479347719819865]
We propose a theory of identifiability for comparative Deep Generative Models (DGMs)
We show that, while these models lack identifiability across a general class of mixing functions, they surprisingly become identifiable when the mixing function is piece-wise affine.
We also investigate the impact of model misspecification, and empirically show that previously proposed regularization techniques for fitting comparative DGMs help with identifiability when the number of latent variables is not known in advance.
arXiv Detail & Related papers (2024-01-29T06:10:54Z)
- Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z)
- Predictive Heterogeneity: Measures and Applications [26.85283526483783]
We propose the usable predictive heterogeneity, which takes into account the model capacity and computational constraints.
We show that it can be reliably estimated from finite data with probably approximately correct (PAC) bounds.
Empirically, the explored heterogeneity provides insights for sub-population divisions in income prediction, crop yield prediction and image classification tasks.
arXiv Detail & Related papers (2023-04-01T12:20:06Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
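The iterative, column-wise idea can be illustrated with a bare-bones sketch (plain least-squares per column; HyperImpute's automatic model selection is replaced here by a fixed linear regressor, which is an assumption for illustration):

```python
import numpy as np

def iterative_impute(X, n_iter=10):
    """Column-wise iterative imputation with least-squares regressors.

    Each missing entry starts at its column mean; then every column with
    missing values is repeatedly re-predicted from the other columns.
    """
    X = X.astype(float).copy()
    mask = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[mask] = np.take(col_means, np.where(mask)[1])  # initial mean fill
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            miss = mask[:, j]
            if not miss.any():
                continue
            others = np.delete(X, j, axis=1)
            # Fit on originally observed rows, with an intercept column
            A = np.column_stack([others[~miss], np.ones((~miss).sum())])
            coef, *_ = np.linalg.lstsq(A, X[~miss, j], rcond=None)
            # Re-predict the originally missing entries
            B = np.column_stack([others[miss], np.ones(miss.sum())])
            X[miss, j] = B @ coef
    return X
```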
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Scalable Regularised Joint Mixture Models [2.0686407686198263]
In many applications, data can be heterogeneous in the sense of spanning latent groups with different underlying distributions.
We propose an approach for heterogeneous data that allows joint learning of (i) explicit multivariate feature distributions, (ii) high-dimensional regression models and (iii) latent group labels.
The approach is demonstrably effective in high dimensions, combining data reduction for computational efficiency with a re-weighting scheme that retains key signals even when the number of features is large.
arXiv Detail & Related papers (2022-05-03T13:38:58Z)
- Using Explainable Boosting Machine to Compare Idiographic and Nomothetic Approaches for Ecological Momentary Assessment Data [2.0824228840987447]
This paper explores the use of non-linear interpretable machine learning (ML) models in classification problems.
Various ensembles of trees are compared to linear models using imbalanced synthetic and real-world datasets.
In one of the two real-world datasets, knowledge distillation method achieves improved AUC scores.
arXiv Detail & Related papers (2022-04-04T17:56:37Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Bayesian Sparse Factor Analysis with Kernelized Observations [67.60224656603823]
Multi-view problems can be faced with latent variable models.
High-dimensionality and non-linear issues are traditionally handled by kernel methods.
We propose merging both approaches into a single model.
arXiv Detail & Related papers (2020-06-01T14:25:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.