Offline Handwritten Amharic Character Recognition Using Few-shot
Learning
- URL: http://arxiv.org/abs/2210.00275v1
- Date: Sat, 1 Oct 2022 13:16:18 GMT
- Title: Offline Handwritten Amharic Character Recognition Using Few-shot
Learning
- Authors: Mesay Samuel, Lars Schmidt-Thieme, DP Sharma, Abiot Sinamo, Abey Bruck
- Abstract summary: Offline handwritten Amharic character recognition using few-shot learning is addressed.
Exploiting the row-wise and column-wise similarities inherent in the Amharic alphabet, a novel way of augmenting the training episodes is proposed.
- Score: 4.243592852049962
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot learning is an important but challenging problem of machine
learning, aimed at learning from only a few labeled training examples. It has
become an active area of research because deep learning requires huge amounts
of labeled data, which is often not feasible in the real world. Learning from a
few examples is also an important step towards learning the way humans do.
Few-shot learning has shown great promise in different areas of machine
learning, particularly in image classification. As it is a recent technique,
most researchers focus on understanding and solving its conceptual issues using
only common image datasets like Mini-ImageNet and Omniglot. Few-shot learning
also opens an opportunity to address low-resource languages like Amharic. In
this study, offline handwritten Amharic character recognition using few-shot
learning is addressed. In particular, prototypical networks, a popular and
simple few-shot learning method, are implemented as a baseline. Exploiting the
row-wise and column-wise similarities inherent in the Amharic alphabet, a novel
way of augmenting the training episodes is proposed. The experimental results
show that the proposed method outperforms the baseline. This study implements
few-shot learning for Amharic characters for the first time. More importantly,
the findings open new ways of examining the influence of training episodes in
few-shot learning, one of the important issues that needs exploration. The
datasets used for this study were collected from native Amharic writers using
an Android app developed as part of this study.
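To make the prototypical-network baseline concrete, the sketch below shows one training episode in PyTorch: class prototypes are the mean support embeddings, and queries are classified by softmax over negative squared Euclidean distances. This is a minimal sketch of the standard prototypical-network formulation, not the authors' released code; embed_net, the episode shapes, and the distance choice are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def prototypical_episode_loss(embed_net, support, support_labels,
                              query, query_labels, n_way):
    """One few-shot episode: build a prototype per class from the support
    set, then classify queries by softmax over negative squared Euclidean
    distances. A sketch of the standard formulation, not the paper's code."""
    z_support = embed_net(support)   # (n_way * k_shot, d)
    z_query = embed_net(query)       # (n_query, d)

    # Prototype = mean embedding of each class's support examples.
    prototypes = torch.stack([
        z_support[support_labels == c].mean(dim=0) for c in range(n_way)
    ])                               # (n_way, d)

    # Squared Euclidean distance from every query to every prototype.
    dists = torch.cdist(z_query, prototypes) ** 2   # (n_query, n_way)

    # Softmax over negative distances; cross-entropy against true classes.
    return F.cross_entropy(-dists, query_labels)
```

Under one plausible reading of the abstract, the proposed episode augmentation acts at the sampling stage: because characters in the same row or column of the Amharic alphabet grid share visual structure, episodes can be composed so that such similar characters co-occur, yielding harder and more informative training tasks. The sampling procedure itself is not reproduced here, as the abstract does not spell it out.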
Related papers
- Less is More: A Closer Look at Semantic-based Few-Shot Learning [11.724194320966959]
Few-shot Learning aims to learn and distinguish new categories with a very limited number of available images.
We propose a simple but effective framework for few-shot learning tasks, specifically designed to exploit textual information and language models.
Our experiments conducted across four widely used few-shot datasets demonstrate that our simple framework achieves impressive results.
arXiv Detail & Related papers (2024-01-10T08:56:02Z)
- Mixture of Self-Supervised Learning [2.191505742658975]
Self-supervised learning works by using a pretext task which will be trained on the model before being applied to a specific task.
Previous studies have only used one type of transformation as a pretext task.
This raises the question of how performance is affected when more than one pretext task is used, with a gating network combining all pretext tasks.
arXiv Detail & Related papers (2023-07-27T14:38:32Z)
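As a rough illustration of the gating idea in the entry above, a small learned gate can weight and combine the features produced by several pretext-task heads. The module below is a hypothetical sketch; the class name, dimensions, and fusion rule are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn


class GatedPretextCombiner(nn.Module):
    """Hypothetical sketch: fuse features from several pretext-task heads
    with a learned softmax gate (names and fusion rule are assumptions)."""

    def __init__(self, feat_dim: int, n_tasks: int):
        super().__init__()
        self.gate = nn.Linear(feat_dim * n_tasks, n_tasks)

    def forward(self, task_feats):
        # task_feats: list of (batch, feat_dim) tensors, one per pretext task.
        stacked = torch.stack(task_feats, dim=1)   # (batch, n_tasks, feat_dim)
        weights = torch.softmax(self.gate(stacked.flatten(1)), dim=-1)
        # Weighted sum of per-task features -> (batch, feat_dim).
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)
```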
- Towards Open Vocabulary Learning: A Survey [146.90188069113213]
Deep neural networks have made impressive advancements in various core tasks like segmentation, tracking, and detection.
Recently, open vocabulary settings were proposed due to the rapid progress of vision language pre-training.
This paper provides a thorough review of open vocabulary learning, summarizing and analyzing recent developments in the field.
arXiv Detail & Related papers (2023-06-28T02:33:06Z)
- Brief Introduction to Contrastive Learning Pretext Tasks for Visual Representation [0.0]
We introduce contrastive learning, a subset of unsupervised learning methods.
The purpose of contrastive learning is to embed augmented views of the same sample near each other while pushing apart those from different samples.
We offer some strategies from contrastive learning that have recently been published and are focused on pretext tasks for visual representation.
arXiv Detail & Related papers (2022-10-06T18:54:10Z)
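The pull-together/push-apart objective summarized in the entry above is commonly formalized as the InfoNCE loss. A minimal sketch follows; the temperature value and the two-view batch layout are illustrative assumptions rather than details from the paper.

```python
import torch
import torch.nn.functional as F


def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE sketch: z1[i] and z2[i] embed two augmented views of
    sample i; all other pairings in the batch act as negatives. The
    temperature value is an illustrative assumption."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (batch, batch) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, targets)
```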
- Self-Supervised Speech Representation Learning: A Review [105.1545308184483]
Self-supervised representation learning methods promise a single universal model that would benefit a wide variety of tasks and domains.
Speech representation learning is experiencing similar progress in three main categories: generative, contrastive, and predictive methods.
This review presents approaches for self-supervised speech representation learning and their connection to other research areas.
arXiv Detail & Related papers (2022-05-21T16:52:57Z)
- Unified Contrastive Learning in Image-Text-Label Space [130.31947133453406]
Unified Contrastive Learning (UniCL) is an effective way of learning semantically rich yet discriminative representations.
UniCL stand-alone is a good learner on pure image-label data, rivaling supervised learning methods across three image classification datasets.
arXiv Detail & Related papers (2022-04-07T17:34:51Z)
- Learning from Few Examples: A Summary of Approaches to Few-Shot Learning [3.6930948691311016]
Few-Shot Learning refers to the problem of learning the underlying pattern in the data just from a few training samples.
Deep learning solutions suffer from data hunger and excessively high computation time and resource demands.
Few-shot learning, which could drastically reduce the turnaround time of building machine learning applications, emerges as a low-cost solution.
arXiv Detail & Related papers (2022-03-07T23:15:21Z)
- Budget-aware Few-shot Learning via Graph Convolutional Network [56.41899553037247]
This paper tackles the problem of few-shot learning, which aims to learn new visual concepts from a few examples.
A common problem setting in few-shot classification assumes random sampling strategy in acquiring data labels.
We introduce a new budget-aware few-shot learning problem that aims to learn novel object categories.
arXiv Detail & Related papers (2022-01-07T02:46:35Z)
- LibFewShot: A Comprehensive Library for Few-shot Learning [78.58842209282724]
Few-shot learning, especially few-shot image classification, has received increasing attention and witnessed significant advances in recent years.
Some recent studies implicitly show that many generic techniques or tricks, such as data augmentation, pre-training, knowledge distillation, and self-supervision, may greatly boost the performance of a few-shot learning method.
We propose a comprehensive library for few-shot learning (LibFewShot) by re-implementing seventeen state-of-the-art few-shot learning methods in a unified framework with the same single codebase in PyTorch.
arXiv Detail & Related papers (2021-09-10T14:12:37Z)
- Few-Cost Salient Object Detection with Adversarial-Paced Learning [95.0220555274653]
This paper proposes to learn an effective salient object detection model based on manual annotations of only a few training images.
We name this task few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario.
arXiv Detail & Related papers (2021-04-05T14:15:49Z)
- An Overview of Deep Learning Architectures in Few-Shot Learning Domain [0.0]
Few-Shot Learning (also known as one-shot learning) is a sub-field of machine learning that aims to create models that can learn the desired objective with less data.
We have reviewed some of the well-known deep learning-based approaches towards few-shot learning.
arXiv Detail & Related papers (2020-08-12T06:58:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.