What Do NLP Researchers Believe? Results of the NLP Community Metasurvey
- URL: http://arxiv.org/abs/2208.12852v1
- Date: Fri, 26 Aug 2022 19:45:51 GMT
- Title: What Do NLP Researchers Believe? Results of the NLP Community Metasurvey
- Authors: Julian Michael, Ari Holtzman, Alicia Parrish, Aaron Mueller, Alex
Wang, Angelica Chen, Divyam Madaan, Nikita Nangia, Richard Yuanzhe Pang,
Jason Phang, Samuel R. Bowman
- Abstract summary: We present the results of the NLP Community Metasurvey.
The survey elicited opinions on controversial issues.
We find false sociological beliefs where the community's predictions don't match reality.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present the results of the NLP Community Metasurvey. Run from May to June
2022, the survey elicited opinions on controversial issues, including industry
influence in the field, concerns about AGI, and ethics. Our results put
concrete numbers to several controversies: For example, respondents are split
almost exactly in half on questions about the importance of artificial general
intelligence, whether language models understand language, and the necessity of
linguistic structure and inductive bias for solving NLP problems. In addition,
the survey posed meta-questions, asking respondents to predict the distribution
of survey responses. This allows us not only to gain insight on the spectrum of
beliefs held by NLP researchers, but also to uncover false sociological beliefs
where the community's predictions don't match reality. We find such mismatches
on a wide range of issues. Among other results, the community greatly
overestimates its own belief in the usefulness of benchmarks and the potential
for scaling to solve real-world problems, while underestimating its own belief
in the importance of linguistic structure, inductive bias, and
interdisciplinary science.
Related papers
- What Can Natural Language Processing Do for Peer Review?
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- SOUL: Towards Sentiment and Opinion Understanding of Language
We propose a new task called Sentiment and Opinion Understanding of Language (SOUL).
SOUL aims to evaluate sentiment understanding through two subtasks: Review Comprehension (RC) and Justification Generation (JG).
arXiv Detail & Related papers (2023-10-27T06:48:48Z)
- NLPBench: Evaluating Large Language Models on Solving NLP Problems
Large language models (LLMs) have shown promise in enhancing the capabilities of natural language processing (NLP).
We present a unique benchmarking dataset, NLPBench, comprising 378 college-level NLP questions spanning various NLP topics sourced from Yale University's prior final exams.
Our evaluation, centered on LLMs such as GPT-3.5/4, PaLM-2, and LLAMA-2, incorporates advanced prompting strategies such as chain-of-thought (CoT) and tree-of-thought (ToT).
arXiv Detail & Related papers (2023-09-27T13:02:06Z)
- Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research
We provide a first attempt to quantify concerns regarding three topics, namely, environmental impact, equity, and impact on peer reviewing.
We capture existing (dis)parities between and within groups with respect to seniority, academia, and industry.
We devise recommendations to mitigate the disparities we found, some of which have already been successfully implemented.
arXiv Detail & Related papers (2023-06-29T12:44:53Z)
- Thorny Roses: Investigating the Dual Use Dilemma in Natural Language Processing
We conduct a survey of NLP researchers and practitioners to understand the depth of the problem and their perspectives on it.
Based on the results of our survey, we offer a definition of dual use that is tailored to the needs of the NLP community.
We discuss the current state and potential means for mitigating dual use in NLP and propose a checklist that can be integrated into existing conference ethics-frameworks.
arXiv Detail & Related papers (2023-04-17T14:37:43Z)
- A Survey on Bias and Fairness in Natural Language Processing
We analyze the origins of biases, the definitions of fairness, and how bias in different subfields of NLP can be mitigated.
We discuss how future studies can work towards eradicating pernicious biases from NLP algorithms.
arXiv Detail & Related papers (2022-03-06T18:12:30Z)
- Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond
We consolidate research across academic areas and situate it in the broader Natural Language Processing landscape.
We introduce the statistical challenge of estimating causal effects, encompassing settings where text is used as an outcome, treatment, or as a means to address confounding.
In addition, we explore potential uses of causal inference to improve the performance, robustness, fairness, and interpretability of NLP models.
arXiv Detail & Related papers (2021-09-02T05:40:08Z)
- Sentiment Analysis Based on Deep Learning: A Comparative Study
The study of public opinion can provide us with valuable information.
The efficiency and accuracy of sentiment analysis are hindered by the challenges encountered in natural language processing.
This paper reviews the latest studies that have employed deep learning to solve sentiment analysis problems.
arXiv Detail & Related papers (2020-06-05T16:28:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.