Beyond Good Intentions: Reporting the Research Landscape of NLP for
Social Good
- URL: http://arxiv.org/abs/2305.05471v3
- Date: Sat, 21 Oct 2023 13:41:57 GMT
- Authors: Fernando Gonzalez, Zhijing Jin, Bernhard Schölkopf, Tom Hope,
Mrinmaya Sachan, Rada Mihalcea
- Abstract summary: We introduce NLP4SG Papers, a scientific dataset with three associated tasks.
These tasks help identify NLP4SG papers and characterize the NLP4SG landscape.
We use state-of-the-art NLP models to address each of these tasks and use them on the entire ACL Anthology.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: With the recent advances in natural language processing (NLP), a vast number
of applications have emerged across various use cases. Among the plethora of
NLP applications, many academic researchers are motivated to do work that has a
positive social impact, in line with the recent initiatives of NLP for Social
Good (NLP4SG). However, it is not always obvious to researchers how their
research efforts are tackling today's big social problems. Thus, in this paper,
we introduce NLP4SG Papers, a scientific dataset with three associated tasks
that can help identify NLP4SG papers and characterize the NLP4SG landscape by:
(1) identifying the papers that address a social problem, (2) mapping them to
the corresponding UN Sustainable Development Goals (SDGs), and (3) identifying
the task they are solving and the methods they are using. Using
state-of-the-art NLP models, we address each of these tasks and use them on the
entire ACL Anthology, resulting in a visualization workspace that gives
researchers a comprehensive overview of the field of NLP4SG. Our website is
available at https://nlp4sg.vercel.app. We released our data at
https://huggingface.co/datasets/feradauto/NLP4SGPapers and code at
https://github.com/feradauto/nlp4sg.
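The paper's task (2) assigns each paper to the UN Sustainable Development Goals it addresses. The authors use state-of-the-art NLP models for this; the snippet below is only a toy keyword-matching sketch of the same input/output shape (abstract in, list of SDGs out), and the keyword lists are invented for illustration, not taken from the paper.

```python
# Toy sketch of NLP4SG task (2): map a paper abstract to UN SDGs.
# The real system uses trained NLP models; this keyword heuristic and
# its keyword lists are illustrative assumptions only.

SDG_KEYWORDS = {
    "SDG 3 (Good Health and Well-being)": {"health", "clinical", "disease"},
    "SDG 4 (Quality Education)": {"education", "teaching", "student"},
    "SDG 16 (Peace, Justice and Strong Institutions)": {"misinformation", "hate", "legal"},
}

def map_to_sdgs(abstract: str) -> list[str]:
    """Return every SDG whose keywords intersect the abstract's tokens."""
    tokens = set(abstract.lower().split())
    return [sdg for sdg, keywords in SDG_KEYWORDS.items() if tokens & keywords]

example = "We build NLP models to detect misinformation and hate speech online."
print(map_to_sdgs(example))  # → ['SDG 16 (Peace, Justice and Strong Institutions)']
```

In practice one would replace the keyword lookup with a fine-tuned classifier over the released NLP4SGPapers dataset; the interface (abstract in, SDG labels out) stays the same.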
Related papers
- Large Language Models Meet NLP: A Survey [79.74450825763851]
Large language models (LLMs) have shown impressive capabilities in Natural Language Processing (NLP) tasks.
This study aims to address this gap by exploring the following questions.
arXiv Detail & Related papers (2024-05-21T14:24:01Z) - What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z) - The Shifted and The Overlooked: A Task-oriented Investigation of
User-GPT Interactions [114.67699010359637]
We analyze a large-scale collection of real user queries to GPT.
We find that tasks such as "design" and "planning" are prevalent in user interactions but are largely neglected by or differ from traditional NLP benchmarks.
arXiv Detail & Related papers (2023-10-19T02:12:17Z) - HugNLP: A Unified and Comprehensive Library for Natural Language
Processing [14.305751154503133]
We introduce HugNLP, a library for natural language processing (NLP) with the prevalent backend of HuggingFace Transformers.
HugNLP consists of a hierarchical structure including models, processors and applications that unifies the learning process of pre-trained language models (PLMs) on different NLP tasks.
arXiv Detail & Related papers (2023-02-28T03:38:26Z) - Meta Learning for Natural Language Processing: A Survey [88.58260839196019]
Deep learning has been the mainstream technique in natural language processing (NLP) area.
Deep learning requires large amounts of labeled data and generalizes poorly across domains.
Meta-learning is an emerging field in machine learning that studies approaches for learning better learning algorithms.
arXiv Detail & Related papers (2022-05-03T13:58:38Z) - Graph Neural Networks for Natural Language Processing: A Survey [64.36633422999905]
We present a comprehensive overview of Graph Neural Networks (GNNs) for Natural Language Processing.
We propose a new taxonomy of GNNs for NLP, which organizes existing research of GNNs for NLP along three axes: graph construction, graph representation learning, and graph-based encoder-decoder models.
arXiv Detail & Related papers (2021-06-10T23:59:26Z) - How Good Is NLP? A Sober Look at NLP Tasks through the Lens of Social
Impact [31.435252562175194]
We propose a framework to evaluate NLP tasks' direct and indirect real-world impact.
We adopt the methodology of global priorities research to identify priority causes for NLP research.
Finally, we use our theoretical framework to provide some practical guidelines for future NLP research for social good.
arXiv Detail & Related papers (2021-06-04T09:17:15Z) - Natural Language Processing 4 All (NLP4All): A New Online Platform for
Teaching and Learning NLP Concepts [0.0]
Natural Language Processing offers new insights into language data across almost all disciplines and domains.
The primary hurdles to widening participation in and use of these new research tools are a lack of coding skills in students across K-16, and in the population at large.
To broaden participation in NLP and improve NLP literacy, we introduce a new web-based tool called Natural Language Processing 4 All (NLP4All).
The intended purpose of NLP4All is to help teachers facilitate learning with and about NLP, by providing easy-to-use interfaces to NLP-methods, data, and analyses.
arXiv Detail & Related papers (2021-05-28T09:57:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.