A Major Obstacle for NLP Research: Let's Talk about Time Allocation!
- URL: http://arxiv.org/abs/2211.16858v1
- Date: Wed, 30 Nov 2022 10:00:12 GMT
- Title: A Major Obstacle for NLP Research: Let's Talk about Time Allocation!
- Authors: Katharina Kann, Shiran Dudy, Arya D. McCarthy
- Abstract summary: This paper argues that we have been less successful than we should have been in the field of natural language processing.
We demonstrate that, in recent years, subpar time allocation has been a major obstacle for NLP research.
- Score: 25.820755718678786
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of natural language processing (NLP) has grown over the last few
years: conferences have become larger, we have published an incredible amount
of papers, and state-of-the-art research has been implemented in a large
variety of customer-facing products. However, this paper argues that we have
been less successful than we should have been and reflects on where and how the
field fails to tap its full potential. Specifically, we demonstrate that, in
recent years, subpar time allocation has been a major obstacle for NLP
research. We outline multiple concrete problems together with their negative
consequences and, importantly, suggest remedies to improve the status quo. We
hope that this paper will be a starting point for discussions around which
common practices are -- or are not -- beneficial for NLP research.
Related papers
- The Nature of NLP: Analyzing Contributions in NLP Papers [77.31665252336157]
We quantitatively investigate what constitutes NLP research by examining research papers.
Our findings reveal a rising involvement of machine learning in NLP since the early nineties.
In post-2020, there has been a resurgence of focus on language and people.
arXiv Detail & Related papers (2024-09-29T01:29:28Z)
- From Insights to Actions: The Impact of Interpretability and Analysis Research on NLP [28.942812379900673]
Interpretability and analysis (IA) research is a growing subfield within NLP.
We seek to quantify the impact of IA research on the broader field of NLP.
arXiv Detail & Related papers (2024-06-18T13:45:07Z)
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- A short review of the main concerns in A.I. development and application within the public sector supported by NLP and TM [0.0]
This work reviewed research papers published in ACM Digital Library and IEEE Xplore conference proceedings.
The objective was to capture insights regarding data privacy, ethics, interpretability, explainability, trustworthiness, and fairness in the public sector.
arXiv Detail & Related papers (2023-07-25T11:15:57Z)
- Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research [75.84463664853125]
We provide a first attempt to quantify concerns regarding three topics, namely, environmental impact, equity, and impact on peer reviewing.
We capture existing (dis)parities between different and within groups with respect to seniority, academia, and industry.
We devise recommendations to mitigate the disparities we found, some of which have already been successfully implemented.
arXiv Detail & Related papers (2023-06-29T12:44:53Z)
- Beyond Good Intentions: Reporting the Research Landscape of NLP for Social Good [115.1507728564964]
We introduce NLP4SG Papers, a scientific dataset with three associated tasks.
These tasks help identify NLP4SG papers and characterize the NLP4SG landscape.
We use state-of-the-art NLP models to address each of these tasks and apply them to the entire ACL Anthology.
arXiv Detail & Related papers (2023-05-09T14:16:25Z)
- The Elephant in the Room: Analyzing the Presence of Big Tech in Natural Language Processing Research [28.382353702576314]
We use a corpus with comprehensive metadata of 78,187 NLP publications and 701 resumes of NLP publication authors.
We find that industry presence among NLP authors was steady before a steep increase over the past five years.
A few companies account for most of the publications and provide funding to academic researchers through grants and internships.
arXiv Detail & Related papers (2023-05-04T12:57:18Z)
- Geographic Citation Gaps in NLP Research [63.13508571014673]
This work asks a series of questions on the relationship between geographical location and publication success.
We first created a dataset of 70,000 papers from the ACL Anthology, extracted their meta-information, and generated their citation network.
We show that not only are there substantial geographical disparities in paper acceptance and citation but also that these disparities persist even when controlling for a number of variables such as venue of publication and sub-field of NLP.
arXiv Detail & Related papers (2022-10-26T02:25:23Z)
- What Can We Do to Improve Peer Review in NLP? [69.11622020605431]
We argue that part of the problem is that reviewers and area chairs face a poorly defined task that forces apples-to-oranges comparisons.
There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
arXiv Detail & Related papers (2020-10-08T09:32:21Z)
- Give Me Convenience and Give Her Death: Who Should Decide What Uses of NLP are Appropriate, and on What Basis? [40.75458460149429]
We discuss whether datasets and tasks should be deemed off-limits for NLP research.
We focus in particular on the role of data statements in ethically assessing research.
We examine the outcomes of similar debates in other scientific disciplines.
arXiv Detail & Related papers (2020-05-27T07:31:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.