To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing
- URL: http://arxiv.org/abs/2310.07715v1
- Date: Wed, 11 Oct 2023 17:59:36 GMT
- Title: To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing
- Authors: Sireesh Gururaja, Amanda Bertsch, Clara Na, David Gray Widder, Emma Strubell
- Abstract summary: We study factors that shape NLP as a field, including culture, incentives, and infrastructure.
Our interviewees identify cyclical patterns in the field, as well as new shifts without historical parallel.
We conclude by discussing shared visions, concerns, and hopes for the future of NLP.
- Score: 14.15370310437262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: NLP is in a period of disruptive change that is impacting our methodologies, funding sources, and public perception. In this work, we seek to understand how to shape our future by better understanding our past. We study factors that shape NLP as a field, including culture, incentives, and infrastructure, by conducting long-form interviews with 26 NLP researchers of varying seniority, research area, institution, and social identity. Our interviewees identify cyclical patterns in the field, as well as new shifts without historical parallel, including changes in benchmark culture and software infrastructure. We complement this discussion with quantitative analysis of citation, authorship, and language use in the ACL Anthology over time. We conclude by discussing shared visions, concerns, and hopes for the future of NLP. We hope that this study of our field's past and present can prompt informed discussion of our community's implicit norms and more deliberate action to consciously shape the future.
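The abstract mentions quantitative analysis of language use in the ACL Anthology over time. As a minimal illustrative sketch only (the paper's actual pipeline is not described here, and the `papers` corpus below is hypothetical), tracking how often a term appears in abstracts per year could look like this:

```python
from collections import Counter

# Hypothetical corpus of (year, abstract) pairs standing in for ACL Anthology entries.
papers = [
    (1995, "A statistical parser for penn treebank constituency trees ..."),
    (2005, "Support vector machines for semantic role labeling ..."),
    (2019, "BERT-based transfer learning for question answering ..."),
    (2023, "Prompting large language models for in-context learning ..."),
]

def yearly_term_frequency(papers, term):
    """Fraction of papers per year whose abstract mentions `term` (case-insensitive)."""
    totals, hits = Counter(), Counter()
    for year, abstract in papers:
        totals[year] += 1
        if term.lower() in abstract.lower():
            hits[year] += 1
    # Normalize by yearly paper counts so trends stay comparable as the field grows.
    return {year: hits[year] / totals[year] for year in sorted(totals)}

print(yearly_term_frequency(papers, "language models"))
```

Real use would substitute actual ACL Anthology metadata for the toy corpus; normalizing by yearly paper counts keeps term trends comparable across decades of very different publication volume.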
Related papers
- The Nature of NLP: Analyzing Contributions in NLP Papers [77.31665252336157]
We quantitatively investigate what constitutes NLP research by examining research papers.
Our findings reveal a rising involvement of machine learning in NLP since the early nineties.
Since 2020, there has been a resurgence of focus on language and people.
arXiv Detail & Related papers (2024-09-29T01:29:28Z)
- On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs [20.589396689900614]
This paper addresses three fundamental questions: Why do we need interpretability, what are we interpreting, and how?
By exploring these questions, we examine existing interpretability paradigms, their properties, and their relevance to different stakeholders.
Our analysis reveals significant disparities between NLP developers and non-developer users, as well as between research fields, underscoring the diverse needs of stakeholders.
arXiv Detail & Related papers (2024-07-27T08:00:27Z)
- The Call for Socially Aware Language Technologies [94.6762219597438]
We argue that many open issues in NLP share a common core: a lack of awareness of the factors, context, and implications of the social environment in which NLP operates.
We argue that substantial challenges remain for NLP to develop social awareness and that we are just at the beginning of a new era for the field.
arXiv Detail & Related papers (2024-05-03T18:12:39Z)
- The What, Why, and How of Context Length Extension Techniques in Large Language Models -- A Detailed Survey [6.516561905186376]
The advent of Large Language Models (LLMs) represents a notable breakthrough in Natural Language Processing (NLP).
We study the inherent challenges associated with extending context length and present an organized overview of the existing strategies employed by researchers.
We explore whether there is a consensus within the research community regarding evaluation standards and identify areas where further agreement is needed.
arXiv Detail & Related papers (2024-01-15T18:07:21Z)
- Perspectives on the State and Future of Deep Learning -- 2023 [237.1458929375047]
The goal of this series is to chronicle opinions and issues in the field of machine learning as they stand today and as they change over time.
The plan is to host this survey periodically until the AI singularity paperclip-frenzy-driven doomsday, keeping an updated list of topical questions and interviewing new community members for each edition.
arXiv Detail & Related papers (2023-12-07T19:58:37Z)
- Federated Learning for Generalization, Robustness, Fairness: A Survey and Benchmark [55.898771405172155]
Federated learning has emerged as a promising paradigm for privacy-preserving collaboration among different parties.
We provide a systematic overview of important recent developments in federated learning research.
arXiv Detail & Related papers (2023-11-12T06:32:30Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.