MetAL: Active Semi-Supervised Learning on Graphs via Meta Learning
- URL: http://arxiv.org/abs/2007.11230v1
- Date: Wed, 22 Jul 2020 06:59:49 GMT
- Title: MetAL: Active Semi-Supervised Learning on Graphs via Meta Learning
- Authors: Kaushalya Madhawa and Tsuyoshi Murata
- Abstract summary: We propose MetAL, an AL approach that selects unlabeled instances that directly improve the future performance of a classification model.
We demonstrate that MetAL efficiently outperforms existing state-of-the-art AL algorithms.
- Score: 2.903711704663904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The objective of active learning (AL) is to train classification models with
fewer labeled instances by selecting only the most informative
instances for labeling. The AL algorithms designed for other data types such as
images and text do not perform well on graph-structured data. Although a few
heuristics-based AL algorithms have been proposed for graphs, a principled
approach is lacking. In this paper, we propose MetAL, an AL approach that
selects unlabeled instances that directly improve the future performance of a
classification model. For a semi-supervised learning problem, we formulate the
AL task as a bilevel optimization problem. Based on recent work in
meta-learning, we use the meta-gradients to approximate the impact of
retraining the model with any unlabeled instance on the model performance.
Using multiple graph datasets belonging to different domains, we demonstrate
that MetAL efficiently outperforms existing state-of-the-art AL algorithms.
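The abstract's meta-gradient idea can be made concrete with a short, assumption-heavy sketch: pseudo-label each candidate unlabeled node, take one gradient step that includes it, and rank candidates by how much a held-out loss drops. The snippet below is a first-order toy approximation over a plain linear model on propagated features (an SGC-style stand-in for a GNN); it is not the authors' code, and every name, dataset, and hyperparameter in it is illustrative.

```python
# Minimal sketch (not the authors' implementation) of meta-gradient-style
# scoring for active learning on a graph: estimate how much retraining with
# each pseudo-labeled candidate would reduce a held-out loss.
import torch
import torch.nn.functional as F

def propagate(adj_norm, x, hops=2):
    """Simple feature propagation (SGC-style) standing in for a GNN encoder."""
    for _ in range(hops):
        x = adj_norm @ x
    return x

def candidate_scores(w, feats, y, labeled, candidates, val, lr=0.1):
    """Score each candidate node by the estimated drop in validation loss after
    one gradient step on the labeled set plus the pseudo-labeled candidate
    (a first-order stand-in for the full meta-gradient computation)."""
    base_val_loss = F.cross_entropy(feats[val] @ w, y[val])
    scores = {}
    for c in candidates:
        pseudo = (feats[c] @ w).argmax().detach()          # model's own guess
        idx = torch.cat([labeled, torch.tensor([c])])
        tgt = torch.cat([y[labeled], pseudo.view(1)])
        loss = F.cross_entropy(feats[idx] @ w, tgt)
        (grad,) = torch.autograd.grad(loss, w)
        w_new = w - lr * grad                              # one retraining step
        new_val_loss = F.cross_entropy(feats[val] @ w_new, y[val])
        scores[c] = (base_val_loss - new_val_loss).item()  # larger = more useful
    return scores

# Toy run on a random graph so the sketch executes end to end.
torch.manual_seed(0)
n, d, k = 40, 16, 3
adj = (torch.rand(n, n) < 0.1).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()         # symmetric + self-loops
deg_inv_sqrt = adj.sum(1).pow(-0.5)
adj_norm = deg_inv_sqrt.view(-1, 1) * adj * deg_inv_sqrt.view(1, -1)
x, y = torch.randn(n, d), torch.randint(0, k, (n,))
feats = propagate(adj_norm, x)
w = (0.1 * torch.randn(d, k)).requires_grad_()

labeled, val = torch.arange(0, 5), torch.arange(5, 15)
candidates = list(range(15, 40))
scores = candidate_scores(w, feats, y, labeled, candidates, val)
best = max(scores, key=scores.get)
print("query node:", best, "estimated val-loss improvement:", round(scores[best], 4))
```

A full bilevel treatment would differentiate through more inner training steps; the one-step version above is only meant to make the selection criterion concrete.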
Related papers
- AutoAL: Automated Active Learning with Differentiable Query Strategy Search [18.23964720426325]
This work presents the first differentiable active learning strategy search method, named AutoAL.
For any given task, SearchNet and FitNet are iteratively co-optimized using the labeled data, learning how well a set of candidate AL algorithms perform on that task.
AutoAL consistently achieves superior accuracy compared to all candidate AL algorithms and other selective AL approaches.
arXiv Detail & Related papers (2024-10-17T17:59:09Z) - Can Large Language Models Analyze Graphs like Professionals? A Benchmark, Datasets and Models [90.98855064914379]
We introduce ProGraph, a benchmark for large language models (LLMs) to process graphs.
Our findings reveal that the performance of current LLMs is unsatisfactory, with the best model achieving only 36% accuracy.
We propose LLM4Graph datasets, which include crawled documents and auto-generated code based on 6 widely used graph libraries.
arXiv Detail & Related papers (2024-09-29T11:38:45Z) - MoBYv2AL: Self-supervised Active Learning for Image Classification [57.4372176671293]
We present MoBYv2AL, a novel self-supervised active learning framework for image classification.
Our contribution lies in lifting MoBY, one of the most successful self-supervised learning algorithms, to the AL pipeline.
We achieve state-of-the-art results when compared to recent AL methods.
arXiv Detail & Related papers (2023-01-04T10:52:02Z) - Active Learning by Feature Mixing [52.16150629234465]
We propose a novel method for batch active learning called ALFA-Mix.
We identify unlabelled instances with sufficiently distinct features by seeking inconsistencies in predictions.
We show that these inconsistencies help discover features that the model is unable to recognise in the unlabelled instances.
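A rough sketch of that inconsistency test follows; it is not the paper's implementation (ALFA-Mix optimizes the mixing coefficient and anchor choice), and a fixed coefficient with hand-picked anchors is assumed here.

```python
# Hedged sketch in the spirit of ALFA-Mix: flag an unlabelled point if
# interpolating its features with a labeled "anchor" flips the model's
# prediction. Coefficient and anchors are illustrative assumptions.
import torch

def inconsistent_mask(model, x_unlabeled, anchors, alpha=0.2):
    """Return a boolean mask over unlabeled points whose predicted label
    flips for at least one anchor under feature interpolation."""
    with torch.no_grad():
        base_pred = model(x_unlabeled).argmax(dim=1)
        flagged = torch.zeros(len(x_unlabeled), dtype=torch.bool)
        for a in anchors:                              # e.g., one anchor per class
            mixed = (1 - alpha) * x_unlabeled + alpha * a
            flagged |= model(mixed).argmax(dim=1) != base_pred
    return flagged

# Toy usage with a linear probe standing in for the trained classifier.
torch.manual_seed(0)
model = torch.nn.Linear(8, 3)
x_unlabeled = torch.randn(100, 8)
anchors = [torch.randn(8) for _ in range(3)]           # e.g., class-mean features
mask = inconsistent_mask(model, x_unlabeled, anchors)
print("candidates to label:", int(mask.sum()), "of", len(x_unlabeled))
```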
arXiv Detail & Related papers (2022-03-14T12:20:54Z) - A Lagrangian Duality Approach to Active Learning [119.36233726867992]
We consider the batch active learning problem, where only a subset of the training data is labeled.
We formulate the learning problem using constrained optimization, where each constraint bounds the performance of the model on labeled samples.
We show, via numerical experiments, that our proposed approach performs similarly to or better than state-of-the-art active learning methods.
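One plausible reading of that formulation, reconstructed from the summary rather than taken from the paper's notation, bounds the loss on every labeled sample and uses the Lagrangian dual variables as per-sample signals of how binding each constraint is.

```latex
% Reconstructed, generic form (notation assumed, not the paper's):
% minimize a pool/unsupervised objective subject to a per-labeled-sample
% loss bound; dual variables \lambda_i flag hard-to-satisfy constraints.
\[
\begin{aligned}
\min_{\theta}\quad & R(\theta)
  && \text{(pool / unsupervised objective)} \\
\text{s.t.}\quad   & \ell\bigl(f_{\theta}(x_i),\, y_i\bigr) \le \epsilon
  && \forall\, i \in \mathcal{D}_L
\end{aligned}
\]
\[
\mathcal{L}(\theta, \lambda)
  = R(\theta) + \sum_{i \in \mathcal{D}_L} \lambda_i
    \bigl(\ell(f_{\theta}(x_i), y_i) - \epsilon\bigr),
  \qquad \lambda_i \ge 0 .
\]
```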
arXiv Detail & Related papers (2022-02-08T19:18:49Z) - Active learning for reducing labeling effort in text classification
tasks [3.8424737607413153]
Active learning (AL) is a paradigm that aims to reduce labeling effort by using only the data that the model deems most informative.
We present an empirical study that compares different uncertainty-based AL algorithms, with BERT$_{base}$ as the underlying classifier.
Our results show that uncertainty-based AL with BERT$_{base}$ outperforms random sampling of the data.
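The uncertainty criterion itself is model-agnostic; a minimal sketch using predictive entropy over pool probabilities (which, in the setting above, would come from the fine-tuned BERT$_{base}$ classifier) looks like this:

```python
# Generic sketch of uncertainty-based pool selection via predictive entropy;
# any model that emits class probabilities can supply `probs`.
import torch

def entropy_query(probs: torch.Tensor, batch_size: int) -> torch.Tensor:
    """Pick the pool indices with the highest predictive entropy."""
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(batch_size).indices

# Toy usage: random "model outputs" over a pool of 1,000 unlabeled texts.
torch.manual_seed(0)
pool_probs = torch.softmax(torch.randn(1000, 4), dim=1)
to_label = entropy_query(pool_probs, batch_size=32)
print("send these pool indices to the annotators:", to_label[:5].tolist())
```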
arXiv Detail & Related papers (2021-09-10T13:00:36Z) - Cartography Active Learning [12.701925701095968]
We propose Cartography Active Learning (CAL), a novel Active Learning (AL) algorithm.
CAL exploits the behavior of the model on individual instances during training as a proxy to find the most informative instances for labeling.
Our results show that CAL results in a more data-efficient learning strategy, achieving comparable or better results with considerably less training data.
arXiv Detail & Related papers (2021-09-09T14:02:02Z) - DEAL: Deep Evidential Active Learning for Image Classification [0.0]
Active Learning (AL) is one approach to mitigate the problem of limited labeled data.
Recent AL methods for CNNs propose different solutions for the selection of instances to be labeled.
We propose a novel AL algorithm that efficiently learns from unlabeled data by capturing high prediction uncertainty.
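DEAL draws its uncertainty from evidential deep learning; assuming the standard Dirichlet parameterization (non-negative outputs treated as evidence, uncertainty u = K / sum(alpha)), a minimal ranking sketch might look like the following. The model outputs and pool here are placeholders, not the paper's setup.

```python
# Hedged sketch of evidential-style uncertainty (Dirichlet parameterization)
# used to rank unlabeled images; the logits below are random placeholders.
import torch

def dirichlet_uncertainty(logits: torch.Tensor) -> torch.Tensor:
    """Map raw outputs to Dirichlet evidence and return per-sample
    vacuity-style uncertainty u = K / sum(alpha)."""
    evidence = torch.nn.functional.softplus(logits)   # non-negative evidence
    alpha = evidence + 1.0                            # Dirichlet parameters
    return logits.shape[1] / alpha.sum(dim=1)

# Toy usage: rank a pool of unlabeled images by uncertainty.
torch.manual_seed(0)
pool_logits = torch.randn(500, 10)                    # e.g., 10-class problem
u = dirichlet_uncertainty(pool_logits)
query = u.topk(16).indices                            # most uncertain first
print("query indices:", query[:5].tolist())
```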
arXiv Detail & Related papers (2020-07-22T11:14:23Z) - Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z) - Heuristic Semi-Supervised Learning for Graph Generation Inspired by
Electoral College [80.67842220664231]
We propose a novel pre-processing technique, namely ELectoral COllege (ELCO), which automatically expands new nodes and edges to refine the label similarity within a dense subgraph.
In all tested setups, our method boosts the average score of base models by a large margin of 4.7 points and consistently outperforms the state-of-the-art.
arXiv Detail & Related papers (2020-06-10T14:48:48Z) - Fase-AL -- Adaptation of Fast Adaptive Stacking of Ensembles for
Supporting Active Learning [0.0]
This work presents the FASE-AL algorithm, which induces classification models from unlabeled instances using Active Learning.
The algorithm achieves promising results in terms of the percentage of correctly classified instances.
arXiv Detail & Related papers (2020-01-30T17:25:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.