Message Passing Adaptive Resonance Theory for Online Active
Semi-supervised Learning
- URL: http://arxiv.org/abs/2012.01227v2
- Date: Wed, 24 Feb 2021 10:04:51 GMT
- Title: Message Passing Adaptive Resonance Theory for Online Active
Semi-supervised Learning
- Authors: Taehyeong Kim, Injune Hwang, Hyundo Lee, Hyunseo Kim, Won-Seok Choi,
Joseph J. Lim, Byoung-Tak Zhang
- Abstract summary: We propose Message Passing Adaptive Resonance Theory (MPART) for online active semi-supervised learning.
MPART infers the class of unlabeled data and selects informative and representative samples through message passing between nodes on the topological graph.
We evaluate our model with comparable query selection strategies and frequencies, showing that MPART significantly outperforms the competitive models in online active learning environments.
- Score: 30.19936050747407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active learning is widely used to reduce labeling effort and training time by
repeatedly querying only the most beneficial samples from unlabeled data. In
real-world problems where data cannot be stored indefinitely due to limited
storage or privacy issues, the query selection and the model update should be
performed as soon as a new data sample is observed. Various online active
learning methods have been studied to deal with these challenges; however,
there are difficulties in selecting representative query samples and updating
the model efficiently. In this study, we propose Message Passing Adaptive
Resonance Theory (MPART) for online active semi-supervised learning. The
proposed model learns the distribution and topology of the input data online.
It then infers the class of unlabeled data and selects informative and
representative samples through message passing between nodes on the topological
graph. MPART queries the beneficial samples on-the-fly in stream-based
selective sampling scenarios, and continuously improves the classification model
using both labeled and unlabeled data. We evaluate our model with comparable
query selection strategies and frequencies, showing that MPART significantly
outperforms the competitive models in online active learning environments.
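The abstract describes two mechanisms that lend themselves to a code sketch: inferring classes of unlabeled nodes by message passing over the learned topological graph, and scoring unlabeled nodes for querying by how informative and representative they are. Below is a minimal, hypothetical Python sketch of one plausible reading; the damped label-propagation update and the entropy-times-degree query score are our assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code) of class inference by message
# passing on a learned topological graph, plus query selection.
import numpy as np

def propagate_labels(adj, label_dist, labeled_mask, n_iters=20, alpha=0.9):
    """Spread class beliefs from labeled nodes to their neighbors.

    adj:          (n, n) symmetric 0/1 adjacency of the learned topology
    label_dist:   (n, c) per-node class distributions (one-hot if labeled)
    labeled_mask: (n,) bool, True where a node has received a label
    """
    # Row-normalize so each message is an average over neighbors.
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    transition = adj / deg
    beliefs = label_dist.copy()
    for _ in range(n_iters):
        beliefs = alpha * (transition @ beliefs) + (1 - alpha) * label_dist
        beliefs[labeled_mask] = label_dist[labeled_mask]  # clamp known labels
    return beliefs

def select_query(adj, beliefs, labeled_mask):
    """Pick an informative (high-entropy) and representative (well-connected)
    unlabeled node to query next -- one plausible reading of the criteria."""
    p = beliefs / beliefs.sum(axis=1, keepdims=True).clip(min=1e-12)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    degree = adj.sum(axis=1)
    score = entropy * degree
    score[labeled_mask] = -np.inf
    return int(np.argmax(score))
```

In an online setting, propagate_labels would be re-run (or warm-started) whenever a new node or label arrives, and select_query would decide whether the current stream sample is worth a label request.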
Related papers
- Negotiated Representations for Machine Learning Application [0.0]
Overfitting is a phenomenon that occurs when a machine learning model is trained for too long and fits the training samples too closely to their provided labels.
We present an approach that increases the classification accuracy of machine learning models by allowing the model to negotiate output representations of the samples with previously determined class labels.
arXiv Detail & Related papers (2023-11-19T19:53:49Z)
- Combating Label Noise With A General Surrogate Model For Sample Selection [84.61367781175984]
We propose to leverage the vision-language surrogate model CLIP to filter noisy samples automatically.
We validate the effectiveness of our proposed method on both real-world and synthetic noisy datasets.
arXiv Detail & Related papers (2023-10-16T14:43:27Z)
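The summary above names CLIP as a surrogate for filtering noisy samples. A hedged sketch of one such filter, which scores each (image, given-label) pair by zero-shot image-prompt agreement and keeps the most consistent fraction, might look as follows; the prompt template and quantile cutoff are assumptions, not the paper's exact procedure.

```python
# Hedged sketch: score each (image, label) pair by CLIP's zero-shot agreement
# and keep the highest-scoring pairs as "clean". The threshold rule is an
# assumption, not the paper's exact method. Uses OpenAI's `clip` package.
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def clean_mask(images, labels, class_names, keep_ratio=0.8):
    """images: list of PIL images; labels: list of int class indices."""
    text = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    text_feat = model.encode_text(text)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

    batch = torch.stack([preprocess(im) for im in images]).to(device)
    img_feat = model.encode_image(batch)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

    # Agreement between each image and the prompt of its *given* label.
    sims = img_feat @ text_feat.T
    rows = torch.arange(len(labels), device=device)
    given = sims[rows, torch.tensor(labels, device=device)]

    # Keep the most CLIP-consistent fraction of samples.
    cutoff = torch.quantile(given.float(), 1 - keep_ratio)
    return given >= cutoff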
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
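A minimal sketch of the mixing idea described above, generating synthetic minority samples as convex combinations of minority and majority instances; the Beta-distributed weights biased toward the minority endpoint are an assumption, not the paper's exact scheme.

```python
# Hedged sketch of synthetic oversampling by mixing minority and majority
# samples. The convex-combination scheme and Beta weights are assumptions.
import numpy as np

def mix_minority(X_min, X_maj, n_new, alpha=0.75, rng=None):
    """Create n_new synthetic minority samples as convex combinations that
    stay biased toward the minority class (lam is pushed toward 1)."""
    if rng is None:
        rng = np.random.default_rng(0)
    i = rng.integers(0, len(X_min), size=n_new)
    j = rng.integers(0, len(X_maj), size=n_new)
    # Bias the mixing weight toward the minority endpoint.
    lam = np.maximum(rng.beta(alpha, alpha, size=n_new),
                     1 - rng.beta(alpha, alpha, size=n_new))[:, None]
    return lam * X_min[i] + (1 - lam) * X_maj[j]
```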
- Active Learning with Combinatorial Coverage [0.0]
Active learning is a practical field of machine learning that automates the process of selecting which data to label.
Current methods are effective in reducing the burden of data labeling but are heavily model-reliant.
As a result, sampled data often cannot be transferred to new models, and sampling bias becomes a concern.
We propose active learning methods utilizing coverage to overcome these issues.
arXiv Detail & Related papers (2023-02-28T13:43:23Z)
- Forgetful Active Learning with Switch Events: Efficient Sampling for Out-of-Distribution Data [13.800680101300756]
In practice, fully trained neural networks interact randomly with out-of-distribution (OOD) inputs.
We introduce forgetful active learning with switch events (FALSE) - a novel active learning protocol for out-of-distribution active learning.
We report up to 4.5% accuracy improvements in over 270 experiments.
arXiv Detail & Related papers (2023-01-12T16:03:14Z)
- Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for data annotation when the unlabeled sample is believed to incorporate high loss.
Our approach outperforms state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2022-12-20T19:29:37Z)
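Loosely following the title above, a hedged sketch of loss estimation via output discrepancy: score each unlabeled sample by how much the model's output for it changed between two optimization steps, and query the top-k. The L2 distance and checkpoint choice are assumptions.

```python
# Hedged sketch: approximate a sample's loss by how much the model's output
# for it changed between two optimization steps, then query the top-k.
# Using output discrepancy as a loss proxy follows the paper's title; the
# exact distance and checkpoint choice here are assumptions.
import torch

@torch.no_grad()
def query_by_output_discrepancy(model_t, model_t_prev, unlabeled_loader,
                                k=32, device="cpu"):
    scores, indices = [], []
    for idx, x in unlabeled_loader:          # assumed (index, batch) pairs
        x = x.to(device)
        d = torch.norm(model_t(x) - model_t_prev(x), dim=1)  # per-sample L2
        scores.append(d.cpu())
        indices.append(idx)
    scores = torch.cat(scores)
    indices = torch.cat(indices)
    topk = torch.topk(scores, k).indices
    return indices[topk]                     # sample ids to send to the oracle
```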
- Frugal Reinforcement-based Active Learning [12.18340575383456]
We propose a novel active learning approach for label-efficient training.
The proposed method is iterative and aims at minimizing a constrained objective function that mixes diversity, representativity and uncertainty criteria.
We also introduce a novel weighting mechanism based on reinforcement learning, which adaptively balances these criteria at each training iteration.
arXiv Detail & Related papers (2022-12-09T14:17:45Z)
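A hypothetical sketch of the mixed criterion described above: candidates are scored by a weighted sum of diversity, representativity, and uncertainty, with the weights adapted by a simple multiplicative (bandit-style) update standing in for the paper's reinforcement mechanism.

```python
# Hedged sketch: score candidates by a weighted mix of diversity,
# representativity and uncertainty, and adapt the weights with a
# multiplicative update standing in for the paper's RL mechanism.
import numpy as np

class MixedAcquisition:
    def __init__(self, eta=0.1):
        self.w = np.ones(3)  # [diversity, representativity, uncertainty]
        self.eta = eta

    def score(self, div, rep, unc):
        """div, rep, unc: per-candidate criterion arrays of equal length."""
        w = self.w / self.w.sum()
        return w[0] * div + w[1] * rep + w[2] * unc

    def update(self, rewards):
        """rewards: per-criterion gain (e.g. validation improvement
        attributed to each criterion) after the last labeling round."""
        self.w *= np.exp(self.eta * np.asarray(rewards))
```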
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) addresses distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
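One common reading of class-aware feature alignment is sketched below, on the assumption that "class-aware" means pulling each target feature toward a per-class source prototype under its pseudo-label; this is an illustration, not CAFA's exact loss.

```python
# Hedged sketch: pull each test-time feature toward the source-domain
# prototype of its pseudo-label. The prototype/pseudo-label formulation
# is an assumption, not CAFA's exact loss.
import torch

def class_aware_alignment_loss(features, prototypes):
    """features: (n, d) target features; prototypes: (c, d) source class means."""
    dists = torch.cdist(features, prototypes)        # (n, c) pairwise distances
    pseudo = dists.argmin(dim=1)                     # nearest-prototype labels
    return dists[torch.arange(len(features)), pseudo].mean()
```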
- Knowledge-driven Active Learning [70.37119719069499]
Active learning strategies aim at minimizing the amount of labeled data required to train a deep learning model.
Most active learning strategies rely on uncertainty-based sample selection, and are often restricted to samples lying close to the decision boundary.
Here we propose to take into consideration common domain-knowledge and enable non-expert users to train a model with fewer samples.
arXiv Detail & Related papers (2021-10-15T06:11:53Z)
- Online Active Model Selection for Pre-trained Classifiers [72.84853880948894]
We design an online selective sampling approach that actively selects informative examples to label and outputs the best model with high probability at any round.
Our algorithm can be used for online prediction tasks for both adversarial and stochastic streams.
arXiv Detail & Related papers (2020-10-19T19:53:15Z)
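A hedged sketch of stream-based selective sampling for model selection: request a label mainly when the candidate models disagree, and rank models by their error on the queried points. The disagreement rule and uniform exploration rate are assumptions standing in for the paper's algorithm.

```python
# Hedged sketch of stream-based selective sampling for picking the best
# pre-trained classifier: labels are requested mostly when the candidate
# models disagree, and observed errors rank the models. The disagreement
# rule is an assumption, not the paper's exact algorithm.
import random

def online_model_selection(stream, models, oracle, base_rate=0.05):
    errors = [0] * len(models)
    queries = 0
    for x in stream:
        preds = [m(x) for m in models]
        disagree = len(set(preds)) > 1
        # Query when models disagree (informative) or occasionally at random.
        if disagree or random.random() < base_rate:
            y = oracle(x)                    # ask for the true label
            queries += 1
            for i, p in enumerate(preds):
                errors[i] += int(p != y)
    best = min(range(len(models)), key=lambda i: errors[i])
    return best, queries
```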
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.