A Survey on Spoken Language Understanding: Recent Advances and New
Frontiers
- URL: http://arxiv.org/abs/2103.03095v1
- Date: Thu, 4 Mar 2021 15:22:00 GMT
- Title: A Survey on Spoken Language Understanding: Recent Advances and New
Frontiers
- Authors: Libo Qin, Tianbao Xie, Wanxiang Che, Ting Liu
- Abstract summary: Spoken Language Understanding (SLU) aims to extract the semantic frame of user queries.
With the rise of deep neural networks and the evolution of pre-trained language models, SLU research has achieved significant breakthroughs.
- Score: 35.59678070422133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spoken Language Understanding (SLU) aims to extract the semantic frame of
user queries and is a core component of task-oriented dialog systems. With the
rise of deep neural networks and the evolution of pre-trained language models,
SLU research has achieved significant breakthroughs. However, a comprehensive
survey summarizing existing approaches and recent trends is still lacking,
which motivated the work presented in this article. In this paper, we survey
recent advances and new frontiers in SLU. Specifically, we give a thorough
review of this research field, covering the following aspects: (1) new
taxonomy: we provide a new perspective on the SLU field, distinguishing single
models vs. joint models, implicit vs. explicit joint modeling within joint
models, and non-pre-trained vs. pre-trained paradigms; (2) new frontiers: some
emerging areas in complex SLU and their corresponding challenges; (3) abundant
open-source resources: to help the community, we have collected and organized
the related papers, baseline projects, and leaderboards on a public website
where SLU researchers can directly access recent progress. We hope that this
survey can shed light on future research in the SLU field.
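To make the taxonomy above concrete, below is a minimal sketch of a joint SLU model in PyTorch: a shared encoder feeds an utterance-level intent head and a token-level slot head, and the two tasks are coupled only through a summed loss, i.e. the implicit joint modeling case of the taxonomy. All names, dimensions, and labels here are hypothetical illustrations, not the survey's reference implementation; an explicit joint model would additionally feed one head's prediction into the other.

```python
import torch
import torch.nn as nn

class JointSLU(nn.Module):
    """Minimal joint SLU model: a shared BiLSTM encoder with two heads,
    one for utterance-level intent detection and one for token-level
    slot filling."""

    def __init__(self, vocab_size, num_intents, num_slots,
                 emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)
        self.slot_head = nn.Linear(2 * hidden_dim, num_slots)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        hidden, _ = self.encoder(self.embedding(token_ids))
        # Utterance-level representation: mean-pool over time steps.
        intent_logits = self.intent_head(hidden.mean(dim=1))
        # Token-level representation: one slot label per token.
        slot_logits = self.slot_head(hidden)
        return intent_logits, slot_logits

# Joint training: both losses backpropagate through the shared encoder,
# which is what couples the two subtasks in implicit joint modeling.
model = JointSLU(vocab_size=10000, num_intents=7, num_slots=20)
tokens = torch.randint(0, 10000, (2, 12))  # toy batch of 2 utterances
intent_logits, slot_logits = model(tokens)
intent_loss = nn.functional.cross_entropy(
    intent_logits, torch.tensor([3, 5]))
slot_loss = nn.functional.cross_entropy(
    slot_logits.reshape(-1, 20), torch.randint(0, 20, (2 * 12,)))
loss = intent_loss + slot_loss
loss.backward()
```

In the pre-trained paradigm the survey describes, the BiLSTM encoder would be replaced by a pre-trained language model, with the same two heads on top.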
Related papers
- Federated Large Language Models: Current Progress and Future Directions [63.68614548512534]
This paper surveys Federated learning for LLMs (FedLLM), highlighting recent advances and future directions.
We focus on two key aspects: fine-tuning and prompt learning in a federated setting, discussing existing work and associated research challenges.
arXiv Detail & Related papers (2024-09-24T04:14:33Z) - Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML).
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across various ML domains with consistent notation, which is missing from the current literature.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z) - Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey [67.48187503803847]
Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm.
Recent research has shown promising results addressing various challenges in VFL.
This survey offers a systematic overview of recent developments.
arXiv Detail & Related papers (2024-05-25T16:05:06Z) - ChatGPT Alternative Solutions: Large Language Models Survey [0.0]
Large Language Models (LLMs) have ignited a surge in research contributions within this domain.
Recent years have witnessed a dynamic synergy between academia and industry, propelling the field of LLM research to new heights.
This survey furnishes a well-rounded perspective on the current state of generative AI, shedding light on opportunities for further exploration, enhancement, and innovation.
arXiv Detail & Related papers (2024-03-21T15:16:50Z) - Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models [52.24001776263608]
This comprehensive survey delves into recent strides in hate speech (HS) moderation.
We highlight the burgeoning role of large language models (LLMs) and large multimodal models (LMMs).
We identify existing gaps in research, particularly in the context of underrepresented languages and cultures.
arXiv Detail & Related papers (2024-01-30T03:51:44Z) - Federated Learning for Generalization, Robustness, Fairness: A Survey
and Benchmark [55.898771405172155]
Federated learning has emerged as a promising paradigm for privacy-preserving collaboration among different parties.
We provide a systematic overview of the important and recent developments of research on federated learning.
arXiv Detail & Related papers (2023-11-12T06:32:30Z) - Knowledge Enhanced Pretrained Language Models: A Compreshensive Survey [8.427521246916463]
Pretrained Language Models (PLMs) have established a new paradigm by learning informative representations from large-scale text corpora.
This new paradigm has revolutionized the entire field of natural language processing and set new state-of-the-art performance for a wide variety of NLP tasks.
To address the limitations of purely text-based pretraining, integrating knowledge into PLMs has recently become a very active research area, and a variety of approaches have been developed.
arXiv Detail & Related papers (2021-10-16T03:27:56Z) - A Joint and Domain-Adaptive Approach to Spoken Language Understanding [30.164751046395573]
Spoken Language Understanding (SLU) is composed of two subtasks: intent detection (ID) and slot filling (SF).
One line of research jointly tackles these two subtasks to improve their prediction accuracy, while the other focuses on the domain-adaptation ability of one of the subtasks.
In this paper, we propose a joint and domain-adaptive approach to SLU.
arXiv Detail & Related papers (2021-07-25T09:38:42Z)
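To illustrate the two subtasks named above, here is a hypothetical utterance with the output each subtask would produce (BIO slot tagging; the labels are invented for illustration):

```python
utterance = ["play", "jazz", "by", "miles", "davis"]
# Intent detection (ID): one label for the whole utterance.
intent = "PlayMusic"
# Slot filling (SF): one BIO tag per token.
slots = ["O", "B-genre", "O", "B-artist", "I-artist"]
```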