Enhancing ensemble learning and transfer learning in multimodal data
analysis by adaptive dimensionality reduction
- URL: http://arxiv.org/abs/2105.03682v1
- Date: Sat, 8 May 2021 11:53:12 GMT
- Title: Enhancing ensemble learning and transfer learning in multimodal data
analysis by adaptive dimensionality reduction
- Authors: Andrea Marinoni, Saloua Chlaily, Eduard Khachatrian, Torbjørn Eltoft, Sivasakthy Selvakumaran, Mark Girolami, Christian Jutten
- Abstract summary: In multimodal data analysis, not all observations show the same level of reliability or information quality.
We propose an adaptive approach for dimensionality reduction to overcome this issue.
We test our approach on multimodal datasets acquired in diverse research fields.
- Score: 10.646114896709717
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Modern data analytics take advantage of ensemble learning and transfer
learning approaches to tackle some of the most relevant issues in data
analysis, such as lack of labeled data to use to train the analysis models,
sparsity of the information, and unbalanced distributions of the records.
Nonetheless, when applied to multimodal datasets (i.e., datasets acquired by
means of multiple sensing techniques or strategies), the state-of-the-art
methods for ensemble learning and transfer learning might show some
limitations. In fact, in multimodal data analysis, not all observations show
the same level of reliability or information quality, nor a homogeneous
distribution of errors and uncertainties. This condition can undermine the
classic assumptions that ensemble learning and transfer learning methods rely on. In
this work, we propose an adaptive approach for dimensionality reduction to
overcome this issue. By means of a graph theory-based approach, the most
relevant features across variable size subsets of the considered datasets are
identified. This information is then used to set up ensemble learning and
transfer learning architectures. We test our approach on multimodal datasets
acquired in diverse research fields (remote sensing, brain-computer interfaces,
photovoltaic energy). Experimental results show the validity and robustness
of our approach, which outperforms state-of-the-art techniques.
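The abstract gives only a high-level view of the pipeline. A minimal sketch of the general idea, assuming correlation-graph degree centrality as the feature-relevance measure and a majority-vote ensemble over modality-specific learners (both illustrative choices, not the authors' exact design), could look like this:

```python
# Hypothetical sketch: per-modality graph-based feature ranking, then a
# simple ensemble over the reduced feature subsets. Names (rank_features,
# fit_ensemble, k) are illustrative, not the paper's API.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rank_features(X):
    """Rank features by degree centrality in a feature-correlation graph."""
    C = np.abs(np.nan_to_num(np.corrcoef(X, rowvar=False)))  # feature affinity
    np.fill_diagonal(C, 0.0)
    return np.argsort(C.sum(axis=0))[::-1]   # highest total affinity first

def fit_ensemble(modalities, y, k=10):
    """One base learner per modality, each trained on its own top-k features."""
    members = []
    for X in modalities:
        idx = rank_features(X)[:k]
        members.append((idx, LogisticRegression(max_iter=1000).fit(X[:, idx], y)))
    return members

def predict(members, modalities):
    """Majority vote across modality-specific learners (integer labels assumed)."""
    votes = np.stack([m.predict(X[:, idx])
                      for (idx, m), X in zip(members, modalities)])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```

Here each modality contributes a learner trained only on the features the graph ranks as most relevant for that modality, which is one way to read the paper's "variable size subsets" idea.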
Related papers
- Cross-feature Contrastive Loss for Decentralized Deep Learning on Heterogeneous Data [8.946847190099206]
We present a novel approach for decentralized learning on heterogeneous data.
Cross-features for a pair of neighboring agents are the features of one agent's data computed with respect to the other agent's model parameters.
Our experiments show that the proposed method achieves superior performance (0.2-4% improvement in test accuracy) compared to other existing techniques for decentralized learning on heterogeneous data.
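From the one-line description of cross-features above, an InfoNCE-style pairing of local and cross features is one plausible reading; the sketch below is an assumption inferred from the summary, not the paper's exact loss.

```python
# Hedged sketch: contrast agent i's local features against cross-features
# produced by running neighbor j's model on i's data.
import torch
import torch.nn.functional as F

def cross_feature_contrastive(encoder_i, encoder_j, x_i, tau=0.1):
    z_local = F.normalize(encoder_i(x_i), dim=1)      # i's model on i's data
    with torch.no_grad():
        z_cross = F.normalize(encoder_j(x_i), dim=1)  # j's model on i's data
    logits = z_local @ z_cross.t() / tau              # pairwise cosine similarities
    targets = torch.arange(x_i.size(0), device=logits.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)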
arXiv Detail & Related papers (2023-10-24T14:48:23Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- A graph representation based on fluid diffusion model for multimodal data analysis: theoretical aspects and enhanced community detection [14.601444144225875]
We introduce a novel model for graph definition based on fluid diffusion.
Our method is able to strongly outperform state-of-the-art schemes for community detection in multimodal data analysis.
arXiv Detail & Related papers (2021-12-07T16:30:03Z)
- Non-IID data and Continual Learning processes in Federated Learning: A long road ahead [58.720142291102135]
Federated Learning is a novel framework that allows multiple devices or institutions to train a machine learning model collaboratively while keeping their data private.
In this work, we formally classify data statistical heterogeneity and review the most remarkable learning strategies that are able to face it.
At the same time, we introduce approaches from other machine learning frameworks, such as Continual Learning, that also deal with data heterogeneity and could be easily adapted to the Federated Learning settings.
arXiv Detail & Related papers (2021-11-26T09:57:11Z)
- Multiscale Laplacian Learning [3.24029503704305]
This paper presents two innovative multiscale Laplacian learning approaches for machine learning tasks.
The first approach, called multi-kernel manifold learning (MML), integrates manifold learning with multi-kernel information.
The second approach, called the multiscale MBO (MMBO) method, introduces multiscale Laplacians to a modification of the classical Merriman-Bence-Osher scheme.
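The classical MBO scheme alternates short diffusion steps with thresholding. A single-scale graph version is sketched below; the paper's multiscale variant combines Laplacians at several scales, which this sketch omits.

```python
# Illustrative single-scale graph MBO iteration: diffuse, then threshold.
import numpy as np

def graph_mbo(W, u0, dt=0.1, steps=20, inner=5):
    """W: symmetric affinity matrix (n x n); u0: one-hot label matrix (n x c)."""
    L = np.diag(W.sum(axis=1)) - W              # unnormalized graph Laplacian
    u = u0.astype(float)
    for _ in range(steps):
        for _ in range(inner):                  # short explicit-Euler diffusion
            u = u - (dt / inner) * (L @ u)
        u = np.eye(u.shape[1])[u.argmax(axis=1)]  # threshold back to one-hot
    return u.argmax(axis=1)                     # predicted class per node
```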
arXiv Detail & Related papers (2021-09-08T15:25:32Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms [91.3755431537592]
We analyze four broad meta-learning strategies which rely on plug-in estimation and pseudo-outcome regression.
We highlight how this theoretical reasoning can be used to guide principled algorithm design and translate our analyses into practice.
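One of the plug-in strategies in this family is the T-learner: fit a separate outcome model per treatment arm and difference the predictions. A minimal sketch follows; the choice of base regressor is arbitrary.

```python
# T-learner sketch for conditional average treatment effect (CATE) estimation.
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_cate(X, y, t):
    """t is a binary treatment indicator; returns an estimated CATE per row."""
    mu0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])  # control arm
    mu1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])  # treated arm
    return mu1.predict(X) - mu0.predict(X)
```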
arXiv Detail & Related papers (2021-01-26T17:11:40Z)
- Siloed Federated Learning for Multi-Centric Histopathology Datasets [0.17842332554022694]
This paper proposes a novel federated learning approach for deep learning architectures in the medical domain.
Local-statistic batch normalization (BN) layers are introduced, resulting in collaboratively-trained, yet center-specific models.
We benchmark the proposed method on the classification of tumorous histopathology image patches extracted from the Camelyon16 and Camelyon17 datasets.
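The local-statistic BN idea amounts to aggregating all weights except the batch-norm tensors, which stay center-specific. A hedged sketch of that aggregation step (the name filter is an illustrative convention tied to typical PyTorch layer naming, not the paper's code):

```python
# Federated averaging that skips batch-norm tensors, so each center keeps
# its own normalization statistics and affine parameters.
import torch

def average_non_bn(models):
    states = [m.state_dict() for m in models]
    skip = ("bn", "running_mean", "running_var", "num_batches_tracked")
    with torch.no_grad():
        for name in states[0]:
            if any(k in name for k in skip):   # BN stays center-specific
                continue
            avg = torch.stack([s[name].float() for s in states]).mean(dim=0)
            for s in states:
                s[name].copy_(avg.to(s[name].dtype))
    # state_dict() returns references, so the models are updated in place
```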
arXiv Detail & Related papers (2020-08-17T15:49:30Z)
- Multi-Objective Genetic Programming for Manifold Learning: Balancing Quality and Dimensionality [4.4181317696554325]
State-of-the-art manifold learning algorithms are opaque in how they perform this transformation.
We introduce a multi-objective approach that automatically balances the competing objectives of manifold quality and dimensionality.
Our proposed approach is competitive with a range of baseline and state-of-the-art manifold learning methods.
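The two objectives named in the summary, embedding quality and dimensionality, trade off against each other. A minimal Pareto-dominance filter over candidate embeddings (pure illustration of the multi-objective balance, not the paper's genetic-programming machinery):

```python
def dominates(a, b):
    """a, b = (quality, n_dims); higher quality and fewer dims are better."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

def pareto_front(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]
```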
arXiv Detail & Related papers (2020-01-05T23:24:33Z)