A Survey of Ontology Expansion for Conversational Understanding
- URL: http://arxiv.org/abs/2410.15019v1
- Date: Sat, 19 Oct 2024 07:27:30 GMT
- Title: A Survey of Ontology Expansion for Conversational Understanding
- Authors: Jinggui Liang, Yuxia Wu, Yuan Fang, Hao Fei, Lizi Liao
- Abstract summary: This survey paper provides a comprehensive review of the state-of-the-art techniques in OnExp for conversational understanding.
It categorizes the existing literature into three main areas: (1) New Intent Discovery, (2) New Slot-Value Discovery, and (3) Joint OnExp.
- Abstract: In the rapidly evolving field of conversational AI, Ontology Expansion (OnExp) is crucial for enhancing the adaptability and robustness of conversational agents. Traditional models rely on static, predefined ontologies, limiting their ability to handle new and unforeseen user needs. This survey paper provides a comprehensive review of the state-of-the-art techniques in OnExp for conversational understanding. It categorizes the existing literature into three main areas: (1) New Intent Discovery, (2) New Slot-Value Discovery, and (3) Joint OnExp. By examining the methodologies, benchmarks, and challenges associated with these areas, we highlight several emerging frontiers in OnExp to improve agent performance in real-world scenarios and discuss their corresponding challenges. This survey aspires to be a foundational reference for researchers and practitioners, promoting further exploration and innovation in this crucial domain.
Related papers
- The What, Why, and How of Context Length Extension Techniques in Large Language Models -- A Detailed Survey
The advent of Large Language Models (LLMs) represents a notable breakthrough in Natural Language Processing (NLP).
We study the inherent challenges associated with extending context length and present an organized overview of the existing strategies employed by researchers.
We explore whether there is a consensus within the research community regarding evaluation standards and identify areas where further agreement is needed.
arXiv Detail & Related papers (2024-01-15T18:07:21Z)
- Trends in Integration of Knowledge and Large Language Models: A Survey and Taxonomy of Methods, Benchmarks, and Applications
Large language models (LLMs) exhibit superior performance on various natural language tasks, but they are susceptible to issues stemming from outdated data and domain-specific limitations.
We propose a review to discuss the trends in integration of knowledge and large language models, including taxonomy of methods, benchmarks, and applications.
arXiv Detail & Related papers (2023-11-10T05:24:04Z)
- Recent Advances in Direct Speech-to-text Translation
We categorize the existing research work into three directions based on the main challenges -- modeling burden, data scarcity, and application issues.
For the challenge of data scarcity, recent work resorts to many sophisticated techniques, such as data augmentation, pre-training, knowledge distillation, and multilingual modeling.
We analyze and summarize the application issues, which include real-time, segmentation, named entity, gender bias, and code-switching.
arXiv Detail & Related papers (2023-06-20T16:14:27Z)
- A Comprehensive Survey on Relation Extraction: Recent Advances and New Frontiers
Relation extraction (RE) involves identifying the relations between entities from underlying content.
Deep neural networks have dominated the field of RE and made noticeable progress.
This survey is expected to facilitate researchers' collaborative efforts to address the challenges of real-world RE systems.
arXiv Detail & Related papers (2023-06-03T08:39:25Z)
- Robust Saliency-Aware Distillation for Few-shot Fine-grained Visual Recognition
Recognizing novel sub-categories with scarce samples is an essential and challenging research topic in computer vision.
Existing literature addresses this challenge by employing local-based representation approaches.
This article proposes a novel model, Robust Saliency-aware Distillation (RSaD), for few-shot fine-grained visual recognition.
arXiv Detail & Related papers (2023-05-12T00:13:17Z)
- Knowledge-enhanced Neural Machine Reasoning: A Review
We introduce a novel taxonomy that categorizes existing knowledge-enhanced methods into two primary categories and four subcategories.
We elucidate the current application domains and provide insight into promising prospects for future research.
arXiv Detail & Related papers (2023-02-04T04:54:30Z)
- Parsing Objects at a Finer Granularity: A Survey
Fine-grained visual parsing is important in many real-world applications, e.g., agriculture, remote sensing, and space technologies.
Predominant research efforts tackle these fine-grained sub-tasks following different paradigms.
We conduct an in-depth study of the advanced work from a new perspective of learning the part relationship.
arXiv Detail & Related papers (2022-12-28T04:20:10Z)
- Weakly Supervised Object Localization and Detection: A Survey
Weakly supervised object localization and detection play an important role in developing new-generation computer vision systems.
We review (1) classic models, (2) approaches with feature representations from off-the-shelf deep networks, (3) approaches solely based on deep learning, and (4) publicly available datasets and standard evaluation metrics that are widely used in this field.
We discuss the key challenges in this field, its development history, the advantages and disadvantages of the methods in each category, the relationships between methods in different categories, applications of weakly supervised object localization and detection methods, and potential future directions to further promote the development of this research field.
arXiv Detail & Related papers (2021-04-16T06:44:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.