A Novel Random Forest Dissimilarity Measure for Multi-View Learning
- URL: http://arxiv.org/abs/2007.02572v1
- Date: Mon, 6 Jul 2020 07:54:52 GMT
- Title: A Novel Random Forest Dissimilarity Measure for Multi-View Learning
- Authors: Hongliu Cao, Simon Bernard, Robert Sabourin, Laurent Heutte
- Abstract summary: Two methods are proposed that modify the Random Forest proximity measure to adapt it to the context of High Dimension Low Sample Size (HDLSS) multi-view classification problems.
The second method, based on an Instance Hardness measure, is significantly more accurate than other state-of-the-art measures.
- Score: 8.185807285320553
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view learning is a learning task in which data is described by several
concurrent representations. Its main challenge is most often to exploit the
complementarities between these representations to help solve a
classification/regression task. This challenge can usually be met nowadays when
a large amount of data is available for learning. However, this is not the case
for all real-world problems, where data are sometimes scarce (e.g. problems in
the medical domain). In these situations, an
effective strategy is to use intermediate representations based on the
dissimilarities between instances. This work presents new ways of constructing
these dissimilarity representations, learning them from data with Random Forest
classifiers. More precisely, two methods are proposed that modify the Random
Forest proximity measure to adapt it to the context of High Dimension Low
Sample Size (HDLSS) multi-view classification problems. The second method,
based on an Instance Hardness measure, is significantly more accurate than
other state-of-the-art measures, including the original RF proximity measure
and the Large Margin Nearest Neighbor (LMNN) metric learning measure.
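To make the dissimilarity idea concrete, here is a minimal sketch of a Random Forest dissimilarity representation built from the classical RF proximity (the fraction of trees in which two instances reach the same leaf). It assumes scikit-learn; the helper name rf_dissimilarity and the per-view averaging are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of a Random Forest dissimilarity representation,
# assuming scikit-learn. rf_dissimilarity and the per-view averaging
# are illustrative choices, not the authors' code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_dissimilarity(forest, X):
    """d(i, j) = 1 - proximity(i, j), where the proximity of two
    instances is the fraction of trees in which they reach the same leaf."""
    leaves = forest.apply(X)                       # (n_samples, n_trees)
    n, n_trees = leaves.shape
    prox = np.zeros((n, n))
    for t in range(n_trees):
        prox += leaves[:, t][:, None] == leaves[:, t][None, :]
    return 1.0 - prox / n_trees

# One forest per view; the per-view dissimilarity matrices are then
# combined (here, simply averaged) into a joint representation that a
# final classifier can be trained on.
rng = np.random.RandomState(0)
views = [rng.rand(30, 500), rng.rand(30, 800)]     # HDLSS-like toy views
y = rng.randint(0, 2, 30)
dissims = [
    rf_dissimilarity(
        RandomForestClassifier(n_estimators=100, random_state=0).fit(X_v, y),
        X_v,
    )
    for X_v in views
]
joint = np.mean(dissims, axis=0)                   # (n, n) dissimilarity representation
```

The paper's second, more accurate method additionally reweights this kind of measure with an Instance Hardness estimate; that weighting is specific to the paper and is not reproduced in this sketch.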
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful even with classification tasks that have little, or non-overlapping, annotation.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Unsupervised Estimation of Ensemble Accuracy [0.0]
We present a method for estimating the joint power of several classifiers.
It differs from existing approaches, which focus on "diversity" measures, in that it does not rely on labels.
We demonstrate the method on popular large-scale face recognition datasets.
arXiv Detail & Related papers (2023-11-18T02:31:36Z)
- DCID: Deep Canonical Information Decomposition [84.59396326810085]
We consider the problem of identifying the signal shared between two one-dimensional target variables.
We propose ICM, an evaluation metric which can be used in the presence of ground-truth labels.
We also propose Deep Canonical Information Decomposition (DCID) - a simple, yet effective approach for learning the shared variables.
arXiv Detail & Related papers (2023-06-27T16:59:06Z)
- dCAM: Dimension-wise Class Activation Map for Explaining Multivariate Data Series Classification [18.382700339944524]
We describe a convolutional architecture that enables the comparison of dimensions.
We then propose a method that returns dCAM, a Dimension-wise Class Activation Map.
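For context, the sketch below computes a plain (not dimension-wise) Class Activation Map, the mechanism that dCAM extends to individual dimensions of a multivariate series; the function name is illustrative.

```python
# Generic Class Activation Map (CAM) sketch: weight the last convolutional
# feature maps by the output-layer weights of one class. dCAM extends this
# idea to attribute importance per input dimension; that extension is not
# reproduced here.
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """feature_maps: (C, T) last-layer activations over time steps;
    class_weights: (C,) output weights for the class of interest.
    Returns a (T,) importance profile over the series."""
    return np.tensordot(class_weights, feature_maps, axes=1)

profile = class_activation_map(np.random.rand(64, 100), np.random.rand(64))
```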
arXiv Detail & Related papers (2022-07-25T13:04:05Z)
- Compare learning: bi-attention network for few-shot learning [6.559037166322981]
Metric learning, one family of few-shot learning methods, addresses this challenge by first learning a deep distance metric to determine whether a pair of images belongs to the same category.
In this paper, we propose a novel approach named Bi-attention network to compare instances, which can measure the similarity between embeddings of instances precisely, globally, and efficiently.
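As a rough illustration of the underlying metric-based few-shot idea (not of the bi-attention architecture itself), the sketch below labels a query by its most similar support instance in an embedding space; embed() stands in for any trained deep encoder.

```python
# Generic metric-based few-shot classification sketch: embed instances,
# then assign a query the label of its nearest support embedding.
# embed() is a stand-in for a trained deep encoder.
import numpy as np

def embed(x):
    v = np.asarray(x, dtype=float)
    return v / (np.linalg.norm(v) + 1e-12)         # L2-normalize

def classify_query(query, support, support_labels):
    q = embed(query)
    sims = [q @ embed(s) for s in support]         # cosine similarity
    return support_labels[int(np.argmax(sims))]

# 2-way 1-shot toy episode
support = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(classify_query(np.array([0.9, 0.1]), support, ["cat", "dog"]))  # -> cat
```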
arXiv Detail & Related papers (2022-03-25T07:39:10Z)
- On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z)
- DPER: Efficient Parameter Estimation for Randomly Missing Data [0.24466725954625884]
We propose novel algorithms to find the maximum likelihood estimates (MLEs) for data sets with randomly missing values, in both one-class and multiple-class settings.
Our algorithms do not require multiple iterations through the data, and thus promise to be less time-consuming than other methods.
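To illustrate what a non-iterative (direct) estimation scheme can look like under randomly missing data, here is a hedged sketch using per-feature means and pairwise-available covariance entries; it conveys the idea of avoiding EM-style iterations but is not the paper's exact algorithm.

```python
# Hedged sketch of direct (non-iterative) parameter estimation with
# randomly missing data: per-feature means over observed entries and
# covariance entries from pairwise-available cases. Not the paper's
# exact DPER algorithm.
import numpy as np

def direct_estimates(X):
    """X: (n, p) array with np.nan marking missing entries."""
    mu = np.nanmean(X, axis=0)
    p = X.shape[1]
    cov = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            mask = ~np.isnan(X[:, i]) & ~np.isnan(X[:, j])
            cov[i, j] = np.mean((X[mask, i] - mu[i]) * (X[mask, j] - mu[j]))
    return mu, cov

X = np.array([[1.0, 2.0], [np.nan, 3.0], [2.0, np.nan], [4.0, 5.0]])
mu, cov = direct_estimates(X)
```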
arXiv Detail & Related papers (2021-06-06T16:37:48Z)
- Few-shot Partial Multi-view Learning [103.33865779721458]
We propose a new task called few-shot partial multi-view learning.
It focuses on overcoming the negative impact of the view-missing issue in the low-data regime.
We conduct extensive experiments to evaluate our method.
arXiv Detail & Related papers (2021-05-05T13:34:43Z)
- Hyperspherical embedding for novel class classification [1.5952956981784217]
We present a constraint-based approach applied to representations in the latent space under the normalized softmax loss.
We experimentally validate the proposed approach for the classification of unseen classes on different datasets using both metric learning and the normalized softmax loss.
Our results show that not only can our proposed strategy be trained efficiently on a larger set of classes, as it does not require pairwise learning, but it also presents better classification results than the metric learning strategies.
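For reference, here is a minimal sketch of the normalized softmax loss the approach builds on: both the feature and the class weights are L2-normalized, so the logits are scaled cosine similarities. The scale s and variable names are illustrative.

```python
# Minimal normalized softmax loss sketch: L2-normalize the feature and
# the class-weight vectors, so logits become cosine similarities scaled
# by a temperature-like factor s.
import numpy as np

def normalized_softmax_loss(z, W, y, s=16.0):
    """z: (d,) feature; W: (n_classes, d) class weights; y: true class index."""
    z = z / np.linalg.norm(z)
    W = W / np.linalg.norm(W, axis=1, keepdims=True)
    logits = s * (W @ z)
    logits -= logits.max()                         # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[y]

rng = np.random.RandomState(0)
print(normalized_softmax_loss(rng.randn(8), rng.randn(5, 8), y=2))
```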
arXiv Detail & Related papers (2021-02-05T15:42:13Z)
- Meta-Generating Deep Attentive Metric for Few-shot Classification [53.07108067253006]
We present a novel deep metric meta-generation method to generate a specific metric for a new few-shot learning task.
In this study, we structure the metric using a three-layer deep attentive network that is flexible enough to produce a discriminative metric for each task.
We obtain clear performance improvements over state-of-the-art competitors, especially in the challenging cases.
arXiv Detail & Related papers (2020-12-03T02:07:43Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions so that they learn in their task-specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on each task.
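A rough sketch of this parameter-coupling idea, under our own simplifying assumptions (linear per-task models): each task minimizes its own loss plus a penalty that keeps its weights close to the consensus of all tasks.

```python
# Sketch of cross-learning-style parameter coupling: per-task squared
# loss plus a penalty pulling each task's weights toward the mean of all
# task weights. The formulation here is illustrative, not the authors'.
import numpy as np

def coupled_objective(Ws, Xs, ys, lam=0.1):
    """Ws: list of (d,) per-task weights; Xs, ys: per-task data."""
    w_bar = np.mean(Ws, axis=0)
    total = 0.0
    for w, X, y in zip(Ws, Xs, ys):
        total += np.mean((X @ w - y) ** 2)         # task-specific fit
        total += lam * np.sum((w - w_bar) ** 2)    # stay close to consensus
    return total

rng = np.random.RandomState(0)
Xs = [rng.randn(20, 4) for _ in range(3)]
ys = [X @ np.ones(4) + 0.1 * rng.randn(20) for X in Xs]
print(coupled_objective([rng.randn(4) for _ in range(3)], Xs, ys))
```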
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.