Translational NLP: A New Paradigm and General Principles for Natural
Language Processing Research
- URL: http://arxiv.org/abs/2104.07874v1
- Date: Fri, 16 Apr 2021 03:46:10 GMT
- Authors: Denis Newman-Griffis, Jill Fain Lehman, Carolyn Rosé, Harry Hochheiser
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural language processing (NLP) research combines the study of universal
principles, through basic science, with applied science targeting specific use
cases and settings. However, the process of exchange between basic NLP and
applications is often assumed to emerge naturally, resulting in many
innovations going unapplied and many important questions left unstudied. We
describe a new paradigm of Translational NLP, which aims to structure and
facilitate the processes by which basic and applied NLP research inform one
another. Translational NLP thus presents a third research paradigm, focused on
understanding the challenges posed by application needs and how these
challenges can drive innovation in basic science and technology design. We show
that many significant advances in NLP research have emerged from the
intersection of basic principles with application needs, and present a
conceptual framework outlining the stakeholders and key questions in
translational research. Our framework provides a roadmap for developing
Translational NLP as a dedicated research area, and identifies general
translational principles to facilitate exchange between basic and applied
research.
Related papers
- The Nature of NLP: Analyzing Contributions in NLP Papers [77.31665252336157]
We quantitatively investigate what constitutes NLP research by examining research papers.
Our findings reveal a rising involvement of machine learning in NLP since the early 1990s.
Since 2020, there has been a renewed focus on language and people.
arXiv Detail & Related papers (2024-09-29T01:29:28Z)
- Towards Systematic Monolingual NLP Surveys: GenA of Greek NLP [2.3499129784547663]
This study introduces a method for creating systematic and comprehensive monolingual NLP surveys.
Characterized by a structured search protocol, it can be used to select publications and organize them through a taxonomy of NLP tasks.
By applying our method, we conducted a systematic literature review of Greek NLP from 2012 to 2022.
arXiv Detail & Related papers (2024-07-13T12:01:52Z)
- Practical Guidelines for the Selection and Evaluation of Natural Language Processing Techniques in Requirements Engineering [8.779031107963942]
Natural language (NL) is now a cornerstone of requirements automation.
With so many different NLP solution strategies available, it can be challenging to choose the right one for a specific requirements engineering (RE) task.
In particular, we discuss how to choose among different strategies such as traditional NLP, feature-based machine learning, and language-model-based methods.
arXiv Detail & Related papers (2024-01-03T02:24:35Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why? [84.46288849132634]
We propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques.
We define three variables to encompass diverse facets of the evolution of research topics within NLP.
We utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data.
arXiv Detail & Related papers (2023-05-22T11:08:00Z)
- An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z)
- Meta Learning for Natural Language Processing: A Survey [88.58260839196019]
Deep learning has been the mainstream technique in the natural language processing (NLP) area.
Deep learning requires large amounts of labeled data and generalizes poorly across domains.
Meta-learning is an emerging field of machine learning that studies approaches to learning better learning algorithms.
arXiv Detail & Related papers (2022-05-03T13:58:38Z)
- Systematic Inequalities in Language Technology Performance across the World's Languages [94.65681336393425]
We introduce a framework for estimating the global utility of language technologies.
Our analyses cover the field at large, along with more in-depth studies of both user-facing technologies and more linguistic NLP tasks.
arXiv Detail & Related papers (2021-10-13T14:03:07Z)
- Natural Language Processing with Commonsense Knowledge: A Survey [9.634283896785611]
This paper explores the integration of commonsense knowledge into various NLP tasks.
We highlight key methodologies for incorporating commonsense knowledge and their applications across different NLP tasks.
The paper also examines the challenges and emerging trends in enhancing NLP systems with commonsense reasoning.
arXiv Detail & Related papers (2021-08-10T13:25:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.