Multiscale Laplacian Learning
- URL: http://arxiv.org/abs/2109.03718v1
- Date: Wed, 8 Sep 2021 15:25:32 GMT
- Title: Multiscale Laplacian Learning
- Authors: Ekaterina Merkurjev, Duc Duy Nguyen, and Guo-Wei Wei
- Abstract summary: This paper presents two innovative multiscale Laplacian learning approaches for machine learning tasks.
The first approach, called multikernel manifold learning (MML), integrates manifold learning with multikernel information.
The second approach, called the multiscale MBO (MMBO) method, introduces multiscale Laplacians into a modification of the classical Merriman-Bence-Osher scheme.
- Score: 3.24029503704305
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning methods have greatly changed science, engineering, finance,
business, and other fields. Despite the tremendous accomplishments of machine
learning and deep learning methods, many challenges still remain. In
particular, the performance of machine learning methods is often severely
affected in case of diverse data, usually associated with smaller data sets or
data related to areas of study where the size of the data sets is constrained
by the complexity and/or high cost of experiments. Moreover, data with limited
labeled samples is a challenge to most learning approaches. In this paper, the
aforementioned challenges are addressed by integrating graph-based frameworks,
multiscale structure, modified and adapted optimization procedures and
semi-supervised techniques. This results in two innovative multiscale Laplacian
learning (MLL) approaches for machine learning tasks, such as data
classification, and for tackling diverse data, data with limited samples and
smaller data sets. The first approach, called multikernel manifold learning
(MML), integrates manifold learning with multikernel information and solves a
regularization problem consisting of a loss function and a warped kernel
regularizer using multiscale graph Laplacians. The second approach, called the
multiscale MBO (MMBO) method, introduces multiscale Laplacians to a
modification of the famous classical Merriman-Bence-Osher (MBO) scheme, and
makes use of fast solvers for finding the approximations to the extremal
eigenvectors of the graph Laplacian. We demonstrate the performance of our
methods experimentally on a variety of data sets, such as biological, text and
image data, and compare them favorably to existing approaches.
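The two ingredients the abstract describes for MMBO — multiscale graph Laplacians and a diffuse-then-threshold iteration driven by the Laplacian's extremal eigenvectors — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the Gaussian-kernel graph, the choice of scales, the dense eigensolver, and the `mmbo_step` function are all assumptions made for the example; the paper uses fast solvers (e.g. sparse Lanczos-type methods such as `scipy.sparse.linalg.eigsh`) to approximate the extremal eigenvectors on large graphs, and couples the thresholding with a fidelity term on the labeled samples.

```python
import numpy as np
from scipy.spatial.distance import cdist

def multiscale_laplacians(X, scales=(0.5, 1.0, 2.0)):
    """Symmetric normalized graph Laplacian at several Gaussian kernel scales."""
    D2 = cdist(X, X, "sqeuclidean")
    laplacians = []
    for s in scales:
        W = np.exp(-D2 / (2.0 * s ** 2))   # Gaussian similarity at bandwidth s
        np.fill_diagonal(W, 0.0)           # no self-loops
        d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
        # L_sym = I - D^{-1/2} W D^{-1/2}
        L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
        laplacians.append(L)
    return laplacians

def smallest_eigenpairs(L, k):
    """k extremal (smallest) eigenpairs of L; for large sparse graphs a fast
    iterative solver such as scipy.sparse.linalg.eigsh would replace eigh."""
    vals, vecs = np.linalg.eigh(L)
    return vals[:k], vecs[:, :k]

def mmbo_step(U, vals, vecs, dt=0.1):
    """One diffuse-then-threshold MBO-style iteration in the truncated
    spectral basis. U is an (n, c) matrix of soft class indicators."""
    coeffs = vecs.T @ U                                      # project onto eigenbasis
    U_diff = vecs @ (np.exp(-dt * vals)[:, None] * coeffs)   # heat-flow diffusion
    labels = U_diff.argmax(axis=1)                           # threshold each node
    U_new = np.zeros_like(U)
    U_new[np.arange(len(U)), labels] = 1.0                   # back to one-hot rows
    return U_new
```

Each Laplacian in the list captures the data geometry at a different kernel bandwidth, which is the multiscale structure the abstract refers to; the diffusion in `mmbo_step` is computed entirely in the span of the few smallest eigenvectors, which is what makes the scheme cheap once those eigenpairs are approximated.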
Related papers
- Classifier-guided Gradient Modulation for Enhanced Multimodal Learning [50.7008456698935]
Classifier-Guided Gradient Modulation (CGGM) is a novel method to balance multimodal learning with gradients.
We conduct extensive experiments on four multimodal datasets: UPMC-Food 101, CMU-MOSI, IEMOCAP and BraTS.
CGGM outperforms all the baselines and other state-of-the-art methods consistently.
arXiv Detail & Related papers (2024-11-03T02:38:43Z)
- Robust Analysis of Multi-Task Learning Efficiency: New Benchmarks on Light-Weighed Backbones and Effective Measurement of Multi-Task Learning Challenges by Feature Disentanglement [69.51496713076253]
In this paper, we focus on the aforementioned efficiency aspects of existing MTL methods.
We first carry out large-scale experiments of the methods with smaller backbones and on the MetaGraspNet dataset as a new test ground.
We also propose a Feature Disentanglement measure as a novel and efficient identifier of the challenges in MTL.
arXiv Detail & Related papers (2024-02-05T22:15:55Z)
- Data-induced multiscale losses and efficient multirate gradient descent schemes [6.299435779277399]
This paper reveals multiscale structures in the loss landscape, including its gradients and Hessians inherited from the data.
It introduces a novel gradient descent approach, drawing inspiration from multiscale algorithms used in scientific computing.
arXiv Detail & Related papers (2024-02-05T14:00:53Z)
- Multimodal Representation Learning by Alternating Unimodal Adaptation [73.15829571740866]
We propose MLA (Multimodal Learning with Alternating Unimodal Adaptation) to overcome challenges where some modalities appear more dominant than others during multimodal learning.
MLA reframes the conventional joint multimodal learning process by transforming it into an alternating unimodal learning process.
It captures cross-modal interactions through a shared head, which undergoes continuous optimization across different modalities.
Experiments are conducted on five diverse datasets, encompassing scenarios with complete modalities and scenarios with missing modalities.
arXiv Detail & Related papers (2023-11-17T18:57:40Z)
- MixUp-MIL: A Study on Linear & Multilinear Interpolation-Based Data Augmentation for Whole Slide Image Classification [1.5810132476010594]
We investigate a data augmentation technique for classifying digital whole slide images.
The results show an extraordinarily high variability in the effect of the method.
We identify several interesting aspects that shed light on this variability and point to promising new fields of research.
arXiv Detail & Related papers (2023-11-06T12:00:53Z)
- MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning [53.90744622542961]
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z)
- Multiple Instance Learning for Detecting Anomalies over Sequential Real-World Datasets [2.427831679672374]
Multiple Instance Learning (MIL) has been shown effective on problems with incomplete knowledge of labels in the training dataset.
We propose an MIL-based formulation and various algorithmic instantiations of this framework based on different design decisions.
The framework generalizes well over diverse datasets resulting from different real-world application domains.
arXiv Detail & Related papers (2022-10-04T16:02:09Z)
- Multi-model Ensemble Learning Method for Human Expression Recognition [31.76775306959038]
We propose our solution based on the ensemble learning method to capture large amounts of real-life data.
We conduct many experiments on the AffWild2 dataset of the ABAW2022 Challenge, and the results demonstrate the effectiveness of our solution.
arXiv Detail & Related papers (2022-03-28T03:15:06Z)
- Consistency and Diversity induced Human Motion Segmentation [231.36289425663702]
We propose a novel Consistency and Diversity induced human Motion Segmentation (CDMS) algorithm.
Our model factorizes the source and target data into distinct multi-layer feature spaces.
A multi-mutual learning strategy is carried out to reduce the domain gap between the source and target data.
arXiv Detail & Related papers (2022-02-10T06:23:56Z)
- Deep invariant networks with differentiable augmentation layers [87.22033101185201]
Methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- Enhancing ensemble learning and transfer learning in multimodal data analysis by adaptive dimensionality reduction [10.646114896709717]
In multimodal data analysis, not all observations would show the same level of reliability or information quality.
We propose an adaptive approach for dimensionality reduction to overcome this issue.
We test our approach on multimodal datasets acquired in diverse research fields.
arXiv Detail & Related papers (2021-05-08T11:53:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.