ASMFS: Adaptive-Similarity-based Multi-modality Feature Selection for
Classification of Alzheimer's Disease
- URL: http://arxiv.org/abs/2010.08190v1
- Date: Fri, 16 Oct 2020 06:53:27 GMT
- Title: ASMFS: Adaptive-Similarity-based Multi-modality Feature Selection for
Classification of Alzheimer's Disease
- Authors: Yuang Shi, Chen Zu, Mei Hong, Luping Zhou, Lei Wang, Xi Wu, Jiliu
Zhou, Daoqiang Zhang, Yan Wang
- Abstract summary: We propose a novel multi-modality feature selection method, which performs feature selection and local similarity learning simultaneously.
The effectiveness of our proposed joint learning method can be well demonstrated by the experimental results on Alzheimer's Disease Neuroimaging Initiative dataset.
- Score: 37.34130395221716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing amounts of high-dimensional heterogeneous data to be
processed, multi-modality feature selection has become an important research
direction in medical image analysis. Traditional methods usually depict the
data structure using fixed and predefined similarity matrix for each modality
separately, without considering the potential relationship structure across
different modalities. In this paper, we propose a novel multi-modality feature
selection method, which performs feature selection and local similarity
learning simultaneously. Specifically, a similarity matrix is learned by jointly
considering different imaging modalities; at the same time, feature
selection is conducted by imposing a sparse l_{2,1}-norm constraint. The
effectiveness of our proposed joint learning method can be well demonstrated by
the experimental results on Alzheimer's Disease Neuroimaging Initiative (ADNI)
dataset, on which it outperforms existing state-of-the-art multi-modality
approaches.
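The sparse l_{2,1}-norm feature selection described in the abstract can be sketched as follows. This is a minimal illustration of the standard l_{2,1}-regularized regression technique that such methods build on, solved by iteratively reweighted least squares; it is not the authors' full ASMFS algorithm (which additionally learns the cross-modality similarity matrix), and the function names and parameters here are illustrative assumptions.

```python
import numpy as np

def l21_norm(W):
    """l_{2,1} norm: the sum of the l2 norms of the rows of W."""
    return np.sum(np.sqrt(np.sum(W ** 2, axis=1)))

def l21_feature_selection(X, Y, lam=1.0, n_iter=50, eps=1e-8):
    """Solve min_W ||XW - Y||_F^2 + lam * ||W||_{2,1} by
    iteratively reweighted least squares. Rows of W whose norm
    shrinks toward zero correspond to discarded features."""
    n_samples, n_features = X.shape
    D = np.eye(n_features)  # reweighting matrix, updated each iteration
    W = np.zeros((n_features, Y.shape[1]))
    for _ in range(n_iter):
        # closed-form update: W = (X^T X + lam * D)^{-1} X^T Y
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
        row_norms = np.sqrt(np.sum(W ** 2, axis=1)) + eps
        D = np.diag(1.0 / (2.0 * row_norms))
    return W
```

Features would then be ranked by the row norms of the returned W, with the top-ranked rows kept for the downstream classifier.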
Related papers
- Supervised Multi-Modal Fission Learning [19.396207029419813]
Learning from multimodal datasets can leverage complementary information and improve performance in prediction tasks.
We propose a Multi-Modal Fission Learning model that simultaneously identifies globally joint, partially joint, and individual components.
arXiv Detail & Related papers (2024-09-30T17:58:03Z)
- UniCell: Universal Cell Nucleus Classification via Prompt Learning [76.11864242047074]
We propose a universal cell nucleus classification framework (UniCell)
It employs a novel prompt learning mechanism to uniformly predict the corresponding categories of pathological images from different dataset domains.
In particular, our framework adopts an end-to-end architecture for nuclei detection and classification, and utilizes flexible prediction heads for adapting various datasets.
arXiv Detail & Related papers (2024-02-20T11:50:27Z)
- Joint Self-Supervised and Supervised Contrastive Learning for Multimodal MRI Data: Towards Predicting Abnormal Neurodevelopment [5.771221868064265]
We present a novel joint self-supervised and supervised contrastive learning method to learn the robust latent feature representation from multimodal MRI data.
Our method has the capability to facilitate computer-aided diagnosis within clinical practice, harnessing the power of multimodal data.
arXiv Detail & Related papers (2023-12-22T21:05:51Z)
- Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches of learning using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z)
- Ambiguous Medical Image Segmentation using Diffusion Models [60.378180265885945]
We introduce a single diffusion model-based approach that produces multiple plausible outputs by learning a distribution over group insights.
Our proposed model generates a distribution of segmentation masks by leveraging the inherent sampling process of diffusion.
Comprehensive results show that our proposed approach outperforms existing state-of-the-art ambiguous segmentation networks.
arXiv Detail & Related papers (2023-04-10T17:58:22Z)
- Consistency and Diversity induced Human Motion Segmentation [231.36289425663702]
We propose a novel Consistency and Diversity induced human Motion (CDMS) algorithm.
Our model factorizes the source and target data into distinct multi-layer feature spaces.
A multi-mutual learning strategy is carried out to reduce the domain gap between the source and target data.
arXiv Detail & Related papers (2022-02-10T06:23:56Z)
- Multivariate feature ranking of gene expression data [62.997667081978825]
We propose two new multivariate feature ranking methods based on pairwise correlation and pairwise consistency.
We statistically prove that the proposed methods outperform the state-of-the-art feature ranking methods Clustering Variation, Chi Squared, Correlation, Information Gain, ReliefF, and Significance.
arXiv Detail & Related papers (2021-11-03T17:19:53Z)
- AMMASurv: Asymmetrical Multi-Modal Attention for Accurate Survival Analysis with Whole Slide Images and Gene Expression Data [2.0329335234511974]
We propose a new asymmetrical multi-modal method, termed as AMMASurv.
AMMASurv can effectively utilize the intrinsic information within every modality and flexibly adapts to the modalities of different importance.
arXiv Detail & Related papers (2021-08-28T04:02:10Z)
- Orthogonal Statistical Inference for Multimodal Data Analysis [5.010425616264462]
Multimodal imaging has transformed neuroscience research.
It is difficult to combine the merits of interpretability attributed to a simple association model and flexibility achieved by a highly adaptive nonlinear model.
arXiv Detail & Related papers (2021-03-12T05:04:31Z)
- Self-Supervised Multimodal Domino: in Search of Biomarkers for Alzheimer's Disease [19.86082635340699]
We propose a taxonomy of all reasonable ways to organize self-supervised representation-learning algorithms.
We first evaluate models on toy multimodal MNIST datasets and then apply them to a multimodal neuroimaging dataset with Alzheimer's disease patients.
Results show that the proposed approach outperforms previous self-supervised encoder-decoder methods.
arXiv Detail & Related papers (2020-12-25T20:28:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.