Explainability in Practice: A Survey of Explainable NLP Across Various Domains
- URL: http://arxiv.org/abs/2502.00837v1
- Date: Sun, 02 Feb 2025 16:18:44 GMT
- Title: Explainability in Practice: A Survey of Explainable NLP Across Various Domains
- Authors: Hadi Mohammadi, Ayoub Bagheri, Anastasia Giachanou, Daniel L. Oberski
- Abstract summary: This review explores explainable NLP (XNLP) with a focus on its practical deployment and real-world applications.
The paper concludes by suggesting future research directions that could enhance the understanding and broader application of XNLP.
- Abstract: Natural Language Processing (NLP) has become a cornerstone in many critical sectors, including healthcare, finance, and customer relationship management. This is especially true with the development and use of advanced models such as GPT-based architectures and BERT, which are widely used in decision-making processes. However, the black-box nature of these advanced NLP models has created an urgent need for transparency and explainability. This review explores explainable NLP (XNLP) with a focus on its practical deployment and real-world applications, examining its implementation and the challenges faced in domain-specific contexts. The paper underscores the importance of explainability in NLP and provides a comprehensive perspective on how XNLP can be designed to meet the unique demands of various sectors, from healthcare's need for clear insights to finance's emphasis on fraud detection and risk assessment. Additionally, this review aims to bridge the knowledge gap in XNLP literature by offering a domain-specific exploration and discussing underrepresented areas such as real-world applicability, metric evaluation, and the role of human interaction in model assessment. The paper concludes by suggesting future research directions that could enhance the understanding and broader application of XNLP.
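As a concrete illustration of the kind of post-hoc explainability the survey discusses, leave-one-out (occlusion) attribution scores each token by how much the model's output changes when that token is removed. The sketch below is purely illustrative and not from the paper; the toy keyword scorer stands in for any text classifier that maps a token sequence to a score.

```python
# Leave-one-out (occlusion) token attribution for a toy text classifier.
# The scorer is a stand-in: any function mapping tokens -> score works.

def sentiment_score(tokens):
    """Toy scorer: +1 per positive word, -1 per negative word."""
    positive = {"good", "great", "clear"}
    negative = {"bad", "opaque", "risky"}
    return sum((t in positive) - (t in negative) for t in tokens)

def occlusion_attribution(tokens):
    """Attribute to each token the score change caused by removing it."""
    base = sentiment_score(tokens)
    return {
        tok: base - sentiment_score(tokens[:i] + tokens[i + 1:])
        for i, tok in enumerate(tokens)
    }

tokens = "the model gives great but opaque advice".split()
print(occlusion_attribution(tokens))
# 'great' -> +1, 'opaque' -> -1, all other tokens 0
```

The same occlusion loop applies unchanged to a neural classifier: only `sentiment_score` would be replaced by a model's prediction function, which is what makes this a common model-agnostic baseline for XNLP.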
Related papers
- Pragmatics in the Era of Large Language Models: A Survey on Datasets, Evaluation, Opportunities and Challenges [26.846627984383836]
We provide a review of resources designed for evaluating pragmatic capabilities in NLP.
We analyze task designs, data collection methods, evaluation approaches, and their relevance to real-world applications.
Our survey aims to clarify the landscape of pragmatic evaluation and guide the development of more comprehensive and targeted benchmarks.
arXiv Detail & Related papers (2025-02-17T23:31:38Z) - On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs [20.589396689900614]
This paper addresses three fundamental questions: Why do we need interpretability, what are we interpreting, and how?
By exploring these questions, we examine existing interpretability paradigms, their properties, and their relevance to different stakeholders.
Our analysis reveals significant disparities between NLP developers and non-developer users, as well as between research fields, underscoring the diverse needs of stakeholders.
arXiv Detail & Related papers (2024-07-27T08:00:27Z) - Leveraging Large Language Models for NLG Evaluation: Advances and Challenges [57.88520765782177]
Large Language Models (LLMs) have opened new avenues for assessing generated content quality, e.g., coherence, creativity, and context relevance.
We propose a coherent taxonomy for organizing existing LLM-based evaluation metrics, offering a structured framework to understand and compare these methods.
By discussing unresolved challenges, including bias, robustness, domain-specificity, and unified evaluation, this paper seeks to offer insights to researchers and advocate for fairer and more advanced NLG evaluation techniques.
arXiv Detail & Related papers (2024-01-13T15:59:09Z) - Practical Guidelines for the Selection and Evaluation of Natural Language Processing Techniques in Requirements Engineering [8.779031107963942]
Natural language (NL) is now a cornerstone of requirements automation.
With so many different NLP solution strategies available, it can be challenging to choose the right strategy for a specific requirements engineering (RE) task.
In particular, we discuss how to choose among different strategies such as traditional NLP, feature-based machine learning, and language-model-based methods.
arXiv Detail & Related papers (2024-01-03T02:24:35Z) - The Shifted and The Overlooked: A Task-oriented Investigation of User-GPT Interactions [114.67699010359637]
We analyze a large-scale collection of real user queries to GPT.
We find that tasks such as "design" and "planning" are prevalent in user interactions but are largely neglected by, or differ from, traditional NLP benchmarks.
arXiv Detail & Related papers (2023-10-19T02:12:17Z) - Uncertainty in Natural Language Processing: Sources, Quantification, and Applications [56.130945359053776]
We provide a comprehensive review of uncertainty-relevant works in the NLP field.
We first categorize the sources of uncertainty in natural language into three types: input, system, and output.
We discuss the challenges of uncertainty estimation in NLP and outline potential future directions.
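The output-uncertainty category above can be made concrete with a standard, paper-agnostic quantification: the Shannon entropy of a classifier's predicted class distribution, which is near zero for confident predictions and approaches log(K) for a near-uniform distribution over K classes.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predicted class distribution.
    Higher entropy indicates greater output uncertainty."""
    return -sum(p * math.log(p) for p in probs if p > 0)

confident = [0.97, 0.02, 0.01]   # model strongly favors one label
uncertain = [0.34, 0.33, 0.33]   # near-uniform prediction

print(predictive_entropy(confident))  # ~0.15
print(predictive_entropy(uncertain))  # ~1.10, close to log(3)
```

This captures only the output side; input uncertainty (e.g. ambiguous text) and system uncertainty (e.g. model parameters) require separate treatments such as ensembling or Bayesian approximations.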
arXiv Detail & Related papers (2023-06-05T06:46:53Z) - Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z) - A Review of the Trends and Challenges in Adopting Natural Language Processing Methods for Education Feedback Analysis [4.040584701067227]
Machine learning, deep learning, and natural language processing (NLP) are subsets of AI that tackle different areas of data processing and modelling.
This review article presents an overview of AI's impact on education, outlining current opportunities.
arXiv Detail & Related papers (2023-01-20T23:38:58Z) - An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z) - Meta Learning for Natural Language Processing: A Survey [88.58260839196019]
Deep learning has been the mainstream technique in natural language processing (NLP) area.
Deep learning requires large amounts of labeled data and generalizes poorly across domains.
Meta-learning is an emerging field in machine learning that studies approaches to learning better learning algorithms.
arXiv Detail & Related papers (2022-05-03T13:58:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.