Optimally Combining Classifiers for Semi-Supervised Learning
- URL: http://arxiv.org/abs/2006.04097v1
- Date: Sun, 7 Jun 2020 09:28:34 GMT
- Title: Optimally Combining Classifiers for Semi-Supervised Learning
- Authors: Zhiguo Wang, Liusha Yang, Feng Yin, Ke Lin, Qingjiang Shi, Zhi-Quan Luo
- Abstract summary: We propose a new semi-supervised learning method that adaptively combines the strengths of XGBoost and the transductive support vector machine.
Experimental results on UCI data sets and a real commercial data set demonstrate the superior classification performance of our method over five state-of-the-art algorithms.
- Score: 43.77365242185884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers semi-supervised learning for tabular data. It is widely
known that XGBoost, based on tree models, works well on heterogeneous features,
while the transductive support vector machine can exploit the low-density
separation assumption. However, little work has been done to combine the two
for end-to-end semi-supervised learning. In this paper, we find that these two
methods have complementary properties and high diversity, which motivates us
to propose a new semi-supervised learning method that adaptively combines the
strengths of XGBoost and the transductive support vector machine. Instead of
the majority-vote rule, an optimization problem over the ensemble weights is
established, which helps to obtain more accurate pseudo labels for the
unlabeled data. Experimental results on UCI data sets and a real commercial
data set demonstrate the superior classification performance of our method
over five state-of-the-art algorithms, improving test accuracy by about
$3\%-4\%$. Partial code can be found at https://github.com/hav-cam-mit/CTO.
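The exact ensemble-weight optimization is in the paper and repository; below is only a minimal, hypothetical sketch of the underlying idea, with sklearn's inductive SVC standing in for the transductive SVM and a validation-set grid search standing in for the paper's optimization problem (xgboost is assumed installed).

```python
# Hypothetical sketch: weighted ensemble of two base classifiers used to
# pseudo-label unlabeled data. SVC stands in for the transductive SVM,
# and a grid search over w stands in for the paper's optimization.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from xgboost import XGBClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_lab, X_unlab, y_lab, _ = train_test_split(X, y, test_size=0.8, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X_lab, y_lab, test_size=0.25, random_state=0)

xgb = XGBClassifier(n_estimators=100).fit(X_tr, y_tr)
svm = SVC(probability=True).fit(X_tr, y_tr)

# Pick the ensemble weight w that maximizes held-out accuracy.
best_w, best_acc = 0.5, -1.0
for w in np.linspace(0.0, 1.0, 21):
    proba = w * xgb.predict_proba(X_val) + (1 - w) * svm.predict_proba(X_val)
    acc = (proba.argmax(axis=1) == y_val).mean()
    if acc > best_acc:
        best_w, best_acc = w, acc

# Pseudo-label the unlabeled pool with the weighted ensemble.
proba_u = best_w * xgb.predict_proba(X_unlab) + (1 - best_w) * svm.predict_proba(X_unlab)
pseudo_labels = proba_u.argmax(axis=1)
print(f"w={best_w:.2f}, val acc={best_acc:.3f}, pseudo-labeled {len(pseudo_labels)} samples")
```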
Related papers
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP.
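As a rough illustration (not the paper's recipe): GDA with a shared covariance is equivalent to linear discriminant analysis, which fits in closed form on precomputed embeddings. The random features below are stand-ins for CLIP embeddings.

```python
# Sketch of the training-free idea: fit Gaussian Discriminant Analysis
# (shared-covariance GDA == sklearn's LDA) on precomputed image features.
# Random arrays here are stand-ins for CLIP embeddings and labels.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 512))      # hypothetical CLIP embeddings
labels = rng.integers(0, 10, size=200)   # hypothetical class labels

gda = LinearDiscriminantAnalysis()       # closed-form fit, no gradient training
gda.fit(feats, labels)
print(gda.predict(feats[:5]))
```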
arXiv Detail & Related papers (2024-02-06T15:45:27Z) - Dense FixMatch: a simple semi-supervised learning method for pixel-wise
prediction tasks [68.36996813591425]
We propose Dense FixMatch, a simple method for online semi-supervised learning of dense and structured prediction tasks.
We enable the application of FixMatch in semi-supervised learning problems beyond image classification by adding a matching operation on the pseudo-labels.
Dense FixMatch significantly improves results compared to supervised learning using only labeled data, approaching its performance with 1/4 of the labeled samples.
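A minimal sketch of the core mechanic, with illustrative shapes and threshold (not the authors' code): per-pixel pseudo-labels are kept only where the prediction is confident, then used to supervise the strongly augmented view.

```python
# Sketch of FixMatch-style pseudo-labeling applied densely (per pixel):
# keep only predictions whose confidence exceeds a threshold. Shapes and
# the 0.95 threshold are illustrative assumptions.
import torch

logits = torch.randn(2, 21, 64, 64)    # (batch, classes, H, W) from a weak view
probs = logits.softmax(dim=1)
conf, pseudo = probs.max(dim=1)        # per-pixel confidence and pseudo-label
mask = conf > 0.95                     # train on confident pixels only
print(f"{mask.float().mean().item():.1%} of pixels kept; labels shape {tuple(pseudo.shape)}")
```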
arXiv Detail & Related papers (2022-10-18T15:02:51Z) - Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV).
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled examples.
We show that NPC-LV outperforms supervised methods on image classification on all three datasets in the low-data regime.
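The paper's compressor is a learned latent-variable model; purely to give the flavor of non-parametric classification by compression, here is a toy 1-NN classifier using gzip and the normalized compression distance, a related but much cruder scheme.

```python
# Toy compression-based 1-NN classifier: gzip is a crude stand-in for the
# paper's learned latent-variable compressor. Classify a query by its
# normalized compression distance (NCD) to labeled examples.
import gzip

def clen(s: bytes) -> int:
    return len(gzip.compress(s))

def ncd(a: bytes, b: bytes) -> float:
    ca, cb, cab = clen(a), clen(b), clen(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

labeled = [(b"the cat sat on the mat", "animals"),
           (b"stocks rallied on strong earnings", "finance")]
query = b"a dog chased the cat"
label = min(labeled, key=lambda ex: ncd(query, ex[0]))[1]   # 1-NN by NCD
print(label)
```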
arXiv Detail & Related papers (2022-06-23T09:35:03Z) - KGBoost: A Classification-based Knowledge Base Completion Method with
Negative Sampling [29.14178162494542]
KGBoost is a new method to train a powerful classifier for missing link prediction.
We conduct experiments on multiple benchmark datasets and demonstrate that KGBoost outperforms state-of-the-art methods across most datasets.
Compared with models trained by end-to-end optimization, KGBoost works well in the low-dimensional setting, allowing a smaller model size.
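To ground the negative-sampling idea (a generic sketch, not KGBoost's specific sampling strategy), the toy code below builds binary training data for a link-prediction classifier by corrupting the tail entity of each true triple.

```python
# Generic sketch of classification-based link prediction with negative
# sampling: true (head, relation, tail) triples are positives; corrupted
# tails are negatives. Entity/relation IDs are toy stand-ins.
import random

entities = list(range(100))
positives = [(1, 0, 2), (3, 0, 4), (5, 1, 6)]      # toy (h, r, t) triples

def corrupt(triple, k=3):
    h, r, t = triple
    negs = []
    while len(negs) < k:
        t_neg = random.choice(entities)
        if t_neg != t:
            negs.append((h, r, t_neg))
    return negs

# Labeled triples for a binary classifier (e.g., a boosted-tree model):
data = [(tr, 1) for tr in positives]
data += [(neg, 0) for tr in positives for neg in corrupt(tr)]
print(len(data), "labeled triples")
```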
arXiv Detail & Related papers (2021-12-17T06:19:37Z) - MIO : Mutual Information Optimization using Self-Supervised Binary
Contrastive Learning [19.5917119072985]
We cast contrastive learning as a binary classification problem: predicting whether a pair is positive or not.
The proposed method outperforms state-of-the-art algorithms on benchmark datasets such as STL-10, CIFAR-10, and CIFAR-100.
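A minimal sketch of the pairs-as-binary-classification view, with illustrative shapes and temperature; random tensors stand in for encoder outputs.

```python
# Sketch of contrastive learning as binary classification: score embedding
# pairs by scaled cosine similarity and train with BCE on pair labels.
# Embeddings and labels here are random stand-ins for encoder outputs.
import torch
import torch.nn.functional as F

z1 = F.normalize(torch.randn(32, 128), dim=1)        # view-1 embeddings
z2 = F.normalize(torch.randn(32, 128), dim=1)        # view-2 embeddings
pair_labels = torch.randint(0, 2, (32,)).float()     # 1 = positive pair

logits = (z1 * z2).sum(dim=1) / 0.1                  # cosine similarity / temperature
loss = F.binary_cross_entropy_with_logits(logits, pair_labels)
print(loss.item())
```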
arXiv Detail & Related papers (2021-11-24T17:51:29Z) - Improving Contrastive Learning on Imbalanced Seed Data via Open-World
Sampling [96.8742582581744]
We present an open-world unlabeled data sampling framework called Model-Aware K-center (MAK).
MAK follows three simple principles: tailness, proximity, and diversity.
We demonstrate that MAK can consistently improve both the overall representation quality and the class balancedness of the learned features.
arXiv Detail & Related papers (2021-11-01T15:09:41Z) - C$^{4}$Net: Contextual Compression and Complementary Combination Network
for Salient Object Detection [0.0]
We show that feature concatenation works better than other combination methods like multiplication or addition.
Joint feature learning also gives better results because of information sharing during processing.
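For intuition (a generic illustration, not the C$^{4}$Net architecture), the three combination operations differ in whether both feature maps survive intact:

```python
# Concatenation keeps both feature maps (channel dim doubles), while
# addition and multiplication merge them lossily. Shapes are illustrative.
import torch

a = torch.randn(1, 64, 32, 32)
b = torch.randn(1, 64, 32, 32)

concat = torch.cat([a, b], dim=1)   # -> (1, 128, 32, 32): both signals preserved
added = a + b                       # -> (1, 64, 32, 32)
mult = a * b                        # -> (1, 64, 32, 32)
print(concat.shape, added.shape, mult.shape)
```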
arXiv Detail & Related papers (2021-10-22T16:14:10Z) - Model-Change Active Learning in Graph-Based Semi-Supervised Learning [7.208515071018781]
"Model-change" active learning quantifies the resulting change incurred in the classifier by introducing the additional label(s)
We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution.
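A deliberately naive sketch of the criterion (brute-force refitting on toy data; the paper instead approximates this cheaply via a Laplace approximation of the posterior):

```python
# Naive "model-change" acquisition: for each candidate, plug in its
# predicted label, refit, and measure how much the classifier's outputs
# move. Brute-force refitting here is purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=60, random_state=0)
lab = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])  # seed labels
unlab = [i for i in range(60) if i not in lab]

clf = LogisticRegression(max_iter=1000).fit(X[lab], y[lab])
base = clf.predict_proba(X)

scores = []
for i in unlab:
    y_hat = clf.predict(X[i:i + 1])[0]           # use the predicted label
    clf_i = LogisticRegression(max_iter=1000).fit(X[lab + [i]], np.append(y[lab], y_hat))
    scores.append(np.abs(clf_i.predict_proba(X) - base).sum())

print("query:", unlab[int(np.argmax(scores))])   # point that changes the model most
```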
arXiv Detail & Related papers (2021-10-14T21:47:10Z) - GRAD-MATCH: A Gradient Matching Based Data Subset Selection for
Efficient Learning [23.75284126177203]
We propose a general framework, GRAD-MATCH, which finds subsets that closely match the gradient of the training or validation set.
We show that GRAD-MATCH significantly and consistently outperforms several recent data-selection algorithms.
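The gist, as a toy greedy sketch: pick samples whose averaged per-sample gradients best approximate the full-data gradient. GRAD-MATCH itself uses an orthogonal-matching-pursuit-style solver; logistic-loss gradients at w = 0 are used here for simplicity.

```python
# Toy gradient-matching subset selection: greedily grow a subset whose
# mean per-sample gradient tracks the full-data gradient.
import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
w = np.zeros(X.shape[1])
p = 1 / (1 + np.exp(-X @ w))
per_sample_grads = (p - y)[:, None] * X          # (n, d) logistic gradients
full_grad = per_sample_grads.mean(axis=0)

chosen, residual = [], full_grad.copy()
for _ in range(20):                              # pick a 20-sample subset
    sims = per_sample_grads @ residual
    sims[chosen] = -np.inf                       # don't re-pick a sample
    chosen.append(int(np.argmax(sims)))
    residual = full_grad - per_sample_grads[chosen].mean(axis=0)
print("subset indices:", chosen[:5], "...")
```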
arXiv Detail & Related papers (2021-02-27T04:09:32Z) - Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)