Reinforcement Learning Approach to Active Learning for Image
Classification
- URL: http://arxiv.org/abs/2108.05595v1
- Date: Thu, 12 Aug 2021 08:34:02 GMT
- Title: Reinforcement Learning Approach to Active Learning for Image
Classification
- Authors: Thorben Werner
- Abstract summary: This thesis works on active learning as one possible solution to reduce the amount of data that needs to be processed by hand.
A newly proposed framework for framing the active learning workflow as a reinforcement learning problem is adapted for image classification.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning requires large amounts of labeled data to fit a model. Many
datasets are already publicly available, but this restricts the possible
applications of machine learning to the domains of those public datasets. The
ever-growing adoption of machine learning algorithms in new application
areas creates a need for labeled data in those new domains. This
thesis investigates active learning as one possible way to reduce the amount
of data that must be labeled by hand, by selecting only those datapoints
that specifically benefit the training of a strong model for the task. A newly
proposed framework for framing the active learning workflow as a reinforcement
learning problem is adapted for image classification and a series of three
experiments is conducted. Each experiment is evaluated and potential issues
with the approach are outlined. Each subsequent experiment then proposes
improvements to the framework and evaluates their impact. After the last
experiment, a final conclusion is drawn, unfortunately rejecting this work's
hypothesis and outlining that the proposed framework at the moment is not
capable of improving active learning for image classification with a trained
reinforcement learning agent.
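The framework described above treats each labeling decision as an action taken by an agent. As a rough illustration of that framing (all names here are hypothetical, and a random policy stands in for a trained reinforcement learning agent), a pool-based episode might look like:

```python
import random

def active_learning_episode(pool, label_fn, policy, budget):
    """Run one episode: the policy repeatedly picks an unlabeled
    datapoint to query until the labeling budget is spent. An RL
    agent would be rewarded with the change in validation accuracy
    after each query; here we only collect the labeled set."""
    labeled = []                       # (x, y) pairs acquired so far
    unlabeled = list(range(len(pool)))
    for _ in range(budget):
        state = {"labeled": labeled, "unlabeled": unlabeled, "pool": pool}
        idx = policy(state)            # the agent's action
        unlabeled.remove(idx)
        labeled.append((pool[idx], label_fn(pool[idx])))
    return labeled

# Trivial stand-in policy: query a random unlabeled index.
def random_policy(state):
    return random.choice(state["unlabeled"])

pool = [0.1, 0.5, 0.9, 0.3]
acquired = active_learning_episode(pool, lambda x: x > 0.4, random_policy, budget=2)
```

The point of the RL framing is that `random_policy` would be replaced by a learned policy whose reward tracks model improvement; the thesis's conclusion is that training such a policy did not, in these experiments, beat simpler strategies.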
Related papers
- regAL: Python Package for Active Learning of Regression Problems [0.0]
We present our Python package regAL, which allows users to evaluate different active learning strategies for regression problems.
arXiv Detail & Related papers (2024-10-23T14:34:36Z)
- Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
arXiv Detail & Related papers (2023-11-29T05:33:28Z)
- ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP)
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z)
- Budget-aware Few-shot Learning via Graph Convolutional Network [56.41899553037247]
This paper tackles the problem of few-shot learning, which aims to learn new visual concepts from a few examples.
A common problem setting in few-shot classification assumes a random sampling strategy for acquiring data labels.
We introduce a new budget-aware few-shot learning problem that aims to learn novel object categories.
arXiv Detail & Related papers (2022-01-07T02:46:35Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Diminishing Uncertainty within the Training Pool: Active Learning for Medical Image Segmentation [6.3858225352615285]
We explore active learning for the task of segmentation of medical imaging data sets.
We propose three new strategies for active learning: increasing the frequency of uncertain data to bias the training set, using mutual information among the input images as a regularizer, and adapting the Dice log-likelihood for Stein variational gradient descent (SVGD).
The results indicate an improvement in terms of data reduction by achieving full accuracy while only using 22.69 % and 48.85 % of the available data for each dataset, respectively.
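The strategies above all rest on an uncertainty score per datapoint. A minimal sketch of entropy-based uncertainty ranking (a common choice for such scores; the paper's exact scoring is not reproduced here):

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a predicted class distribution:
    high when the model is unsure, zero when it is certain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def rank_by_uncertainty(predictions):
    """Return sample indices sorted from most to least uncertain,
    i.e. the order in which active learning would query labels."""
    scores = [predictive_entropy(p) for p in predictions]
    return sorted(range(len(predictions)), key=lambda i: -scores[i])

preds = [
    [0.98, 0.02],   # confident prediction
    [0.55, 0.45],   # near-uniform, most uncertain
    [0.80, 0.20],
]
order = rank_by_uncertainty(preds)
# The near-uniform prediction (index 1) is queried first.
```

Biasing the training set toward uncertain data, as the paper proposes, would then mean oversampling the datapoints at the front of this ranking.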
arXiv Detail & Related papers (2021-01-07T01:55:48Z)
- Active Learning in CNNs via Expected Improvement Maximization [2.0305676256390934]
"Dropout-based IMprOvementS" (DEIMOS) is a flexible and computationally-efficient approach to active learning.
Our results demonstrate that DEIMOS outperforms several existing baselines across multiple regression and classification tasks.
arXiv Detail & Related papers (2020-11-27T22:06:52Z)
- Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
- Bayesian active learning for production, a systematic study and a reusable library [85.32971950095742]
In this paper, we analyse the main drawbacks of current active learning techniques.
We do a systematic study on the effects of the most common issues of real-world datasets on the deep active learning process.
We derive two techniques that can speed up the active learning loop: partial uncertainty sampling and a larger query size.
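Partial uncertainty sampling scores only a random subset of the pool rather than every unlabeled point, making each query round cheaper; a larger query size then reduces how many rounds are needed. A rough sketch under those assumptions (function names are illustrative, not the paper's API):

```python
import math
import random

def entropy(probs):
    """Predictive entropy of a class distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def partial_uncertainty_sample(pool_probs, subset_size, query_size, seed=0):
    """Score only `subset_size` randomly drawn candidates, then
    return the `query_size` most uncertain of them as one batch."""
    rng = random.Random(seed)
    subset = rng.sample(range(len(pool_probs)), min(subset_size, len(pool_probs)))
    subset.sort(key=lambda i: -entropy(pool_probs[i]))
    return subset[:query_size]

pool_probs = [[0.9, 0.1], [0.5, 0.5], [0.7, 0.3], [0.99, 0.01]]
# Scoring the whole pool (subset_size = 4) reduces to plain top-k
# uncertainty sampling with a query batch of 2.
batch = partial_uncertainty_sample(pool_probs, subset_size=4, query_size=2)
```

With `subset_size` smaller than the pool, each round trades a little selection quality for a large reduction in scoring cost, which is the speed-up the paper targets.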
arXiv Detail & Related papers (2020-06-17T14:51:11Z)
- Move-to-Data: A new Continual Learning approach with Deep CNNs, Application for image-class recognition [0.0]
It is necessary to pre-train the model in a "training recording phase" and then adjust it to the newly incoming data.
We propose a fast continual learning layer at the end of the neural network.
arXiv Detail & Related papers (2020-06-12T13:04:58Z)
- A survey on domain adaptation theory: learning bounds and theoretical guarantees [17.71634393160982]
The main objective of this survey is to provide an overview of the state-of-the-art theoretical results in a specific, and arguably the most popular, sub-field of transfer learning.
In this sub-field, the data distribution is assumed to change across the training and the test data, while the learning task remains the same.
We provide a first up-to-date description of existing results related to domain adaptation problem.
arXiv Detail & Related papers (2020-04-24T16:11:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.