A Comprehensive Review of Sign Language Recognition: Different Types,
Modalities, and Datasets
- URL: http://arxiv.org/abs/2204.03328v1
- Date: Thu, 7 Apr 2022 09:49:12 GMT
- Title: A Comprehensive Review of Sign Language Recognition: Different Types,
Modalities, and Datasets
- Authors: Dr. M. Madhiarasan and Prof. Partha Pratim Roy
- Abstract summary: SLR is used in a growing number of applications, but the environment, background, image resolution, modalities, and datasets strongly affect its performance.
This review paper facilitates a comprehensive overview of SLR and discusses the needs, challenges, and problems associated with SLR.
Research progress and existing state-of-the-art SLR models over the past decade have been reviewed.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A machine's ability to understand human activities and the meaning of signs can help
overcome the communication barriers between hearing-impaired and hearing people.
Sign Language Recognition (SLR) is a fascinating research area and a crucial
task in computer vision and pattern recognition. Recently, SLR has been adopted
in many applications, but the environment, background, image resolution,
modalities, and datasets strongly affect its performance. Many researchers have
been striving to develop generic real-time SLR models. This review paper
provides a comprehensive overview of SLR and discusses the needs, challenges,
and problems associated with SLR. We study related work on manual and
non-manual signs, various modalities, and datasets. Research progress and
existing state-of-the-art SLR models over the past decade are reviewed.
Finally, we identify the research gaps and limitations in this domain and
suggest future directions. This review will help readers and researchers
obtain complete guidance on SLR and the progressive design of state-of-the-art
SLR models.
Related papers
- From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models [56.9134620424985]
Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems.
The recent trend of deploying Large Language Models (LLMs) to tackle CMR tasks has marked a new mainstream of approaches for enhancing their effectiveness.
This survey offers a nuanced exposition of current methodologies applied in CMR using LLMs, classifying these into a detailed three-tiered taxonomy.
arXiv Detail & Related papers (2024-09-19T02:51:54Z) - Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML) models.
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across various ML domains with consistent notation, which is missing from the current literature.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z) - Self-assessment, Exhibition, and Recognition: a Review of Personality in Large Language Models [29.086329448754412]
We present a comprehensive review by categorizing current studies into three research problems: self-assessment, exhibition, and recognition.
Our paper is the first comprehensive survey of up-to-date literature on personality in large language models.
arXiv Detail & Related papers (2024-06-25T15:08:44Z) - RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing [0.2302001830524133]
This survey paper addresses the absence of a comprehensive overview of Retrieval-Augmented Language Models (RALMs).
The paper discusses the essential components of RALMs, including Retrievers, Language Models, and Augmentations.
RALMs demonstrate utility in a spectrum of tasks, from translation and dialogue systems to knowledge-intensive applications.
arXiv Detail & Related papers (2024-04-30T13:14:51Z) - Rethinking Interpretability in the Era of Large Language Models [76.1947554386879]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide array of tasks.
The capability to explain in natural language allows LLMs to expand the scale and complexity of patterns that can be conveyed to a human.
These new capabilities raise new challenges, such as hallucinated explanations and immense computational costs.
arXiv Detail & Related papers (2024-01-30T17:38:54Z) - Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models [52.24001776263608]
This comprehensive survey delves into the recent strides in HS moderation.
We highlight the burgeoning role of large language models (LLMs) and large multimodal models (LMMs).
We identify existing gaps in research, particularly in the context of underrepresented languages and cultures.
arXiv Detail & Related papers (2024-01-30T03:51:44Z) - A Comprehensive Overview of Large Language Models [68.22178313875618]
Large Language Models (LLMs) have recently demonstrated remarkable capabilities in natural language processing tasks.
This article provides an overview of the existing literature on a broad range of LLM-related concepts.
arXiv Detail & Related papers (2023-07-12T20:01:52Z) - How Many Papers Should You Review? A Research Synthesis of Systematic
Literature Reviews in Software Engineering [5.6292136785289175]
We aim to provide more understanding of when an SLR in Software Engineering should be conducted.
A research synthesis was conducted on a sample of 170 SLRs published in top-tier SE journals.
The results of our study can serve as an indicator or benchmark for SE researchers to judge whether it is an appropriate time to conduct an SLR.
arXiv Detail & Related papers (2023-07-12T10:18:58Z) - Sentiment Analysis in the Era of Large Language Models: A Reality Check [69.97942065617664]
This paper investigates the capabilities of large language models (LLMs) in performing various sentiment analysis tasks.
We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets.
arXiv Detail & Related papers (2023-05-24T10:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.