Beyond Good Intentions: Reporting the Research Landscape of NLP for
Social Good
- URL: http://arxiv.org/abs/2305.05471v3
- Date: Sat, 21 Oct 2023 13:41:57 GMT
- Title: Beyond Good Intentions: Reporting the Research Landscape of NLP for
Social Good
- Authors: Fernando Gonzalez, Zhijing Jin, Bernhard Schölkopf, Tom Hope,
Mrinmaya Sachan, Rada Mihalcea
- Abstract summary: We introduce NLP4SG Papers, a scientific dataset with three associated tasks.
These tasks help identify NLP4SG papers and characterize the NLP4SG landscape.
We use state-of-the-art NLP models to address each of these tasks and apply them to the entire ACL Anthology.
- Score: 115.1507728564964
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: With the recent advances in natural language processing (NLP), a vast number
of applications have emerged across various use cases. Among the plethora of
NLP applications, many academic researchers are motivated to do work that has a
positive social impact, in line with the recent initiatives of NLP for Social
Good (NLP4SG). However, it is not always obvious to researchers how their
research efforts are tackling today's big social problems. Thus, in this paper,
we introduce NLP4SG Papers, a scientific dataset with three associated tasks
that can help identify NLP4SG papers and characterize the NLP4SG landscape by:
(1) identifying the papers that address a social problem, (2) mapping them to
the corresponding UN Sustainable Development Goals (SDGs), and (3) identifying
the task they are solving and the methods they are using. Using
state-of-the-art NLP models, we address each of these tasks and apply them to the
entire ACL Anthology, resulting in a visualization workspace that gives
researchers a comprehensive overview of the field of NLP4SG. Our website is
available at https://nlp4sg.vercel.app. We released our data at
https://huggingface.co/datasets/feradauto/NLP4SGPapers and code at
https://github.com/feradauto/nlp4sg
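As a toy illustration of task (2), mapping papers to UN SDGs, the sketch below uses simple keyword matching. This is not the authors' method (they use state-of-the-art NLP models trained on the NLP4SG Papers dataset); the SDG keyword lists here are hypothetical examples for illustration only.

```python
# Toy SDG mapper: NOT the paper's approach, just a minimal sketch.
# The keyword lists below are hypothetical and for illustration only.
SDG_KEYWORDS = {
    "SDG 3: Good Health and Well-being": ["health", "medical", "clinical"],
    "SDG 4: Quality Education": ["education", "tutoring", "learner"],
    "SDG 16: Peace, Justice and Strong Institutions": ["misinformation", "hate speech"],
}

def map_to_sdgs(abstract: str) -> list[str]:
    """Return the SDGs whose keywords appear in the abstract (case-insensitive)."""
    text = abstract.lower()
    return [sdg for sdg, words in SDG_KEYWORDS.items()
            if any(w in text for w in words)]

print(map_to_sdgs("We detect hate speech and misinformation on social media."))
```

A real system would instead fine-tune a classifier on the released dataset (https://huggingface.co/datasets/feradauto/NLP4SGPapers), since keyword lists miss paraphrases and produce false positives.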
Related papers
- The Nature of NLP: Analyzing Contributions in NLP Papers [77.31665252336157]
We quantitatively investigate what constitutes NLP research by examining research papers.
Our findings reveal a rising involvement of machine learning in NLP since the early nineties.
Since 2020, there has been a resurgence of focus on language and people.
arXiv Detail & Related papers (2024-09-29T01:29:28Z)
- Large Language Models Meet NLP: A Survey [79.74450825763851]
Large language models (LLMs) have shown impressive capabilities in Natural Language Processing (NLP) tasks.
This study aims to address this gap by exploring the following questions.
arXiv Detail & Related papers (2024-05-21T14:24:01Z)
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- The Shifted and The Overlooked: A Task-oriented Investigation of User-GPT Interactions [114.67699010359637]
We analyze a large-scale collection of real user queries to GPT.
We find that tasks such as "design" and "planning" are prevalent in user interactions but are largely neglected by, or differ from, traditional NLP benchmarks.
arXiv Detail & Related papers (2023-10-19T02:12:17Z)
- HugNLP: A Unified and Comprehensive Library for Natural Language Processing [14.305751154503133]
We introduce HugNLP, a library for natural language processing (NLP) with the prevalent backend of HuggingFace Transformers.
HugNLP consists of a hierarchical structure including models, processors and applications that unifies the learning process of pre-trained language models (PLMs) on different NLP tasks.
arXiv Detail & Related papers (2023-02-28T03:38:26Z)
- Meta Learning for Natural Language Processing: A Survey [88.58260839196019]
Deep learning has been the mainstream technique in natural language processing (NLP) area.
Deep learning requires large amounts of labeled data and generalizes poorly across domains.
Meta-learning is an emerging field of machine learning that studies approaches to learning better algorithms.
arXiv Detail & Related papers (2022-05-03T13:58:38Z)
- How Good Is NLP? A Sober Look at NLP Tasks through the Lens of Social Impact [31.435252562175194]
We propose a framework to evaluate NLP tasks' direct and indirect real-world impact.
We adopt the methodology of global priorities research to identify priority causes for NLP research.
Finally, we use our theoretical framework to provide some practical guidelines for future NLP research for social good.
arXiv Detail & Related papers (2021-06-04T09:17:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.