A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and
Why?
- URL: http://arxiv.org/abs/2305.12920v3
- Date: Wed, 25 Oct 2023 11:56:49 GMT
- Title: A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and
Why?
- Authors: Aniket Pramanick, Yufang Hou, Saif M. Mohammad, Iryna Gurevych
- Abstract summary: We propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques.
We define three variables to encompass diverse facets of the evolution of research topics within NLP.
We utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data.
- Score: 84.46288849132634
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Understanding the fundamental concepts and trends in a scientific field is
crucial for keeping abreast of its continuous advancement. In this study, we
propose a systematic framework for analyzing the evolution of research topics
in a scientific field using causal discovery and inference techniques. We
define three variables to encompass diverse facets of the evolution of research
topics within NLP and utilize a causal discovery algorithm to unveil the causal
connections among these variables using observational data. Subsequently, we
leverage this structure to measure the intensity of these relationships. By
conducting extensive experiments on the ACL Anthology corpus, we demonstrate
that our framework effectively uncovers evolutionary trends and the underlying
causes for a wide range of NLP research topics. Specifically, we show that
tasks and methods are primary drivers of research in NLP, with datasets
following, while metrics have minimal impact.
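The pipeline the abstract describes (define facet variables, discover causal links among them from observational data, then score edge strength) can be illustrated with a minimal sketch. This is not the authors' actual algorithm: the yearly facet series are synthetic, and a simple lagged correlation stands in for the paper's causal discovery and strength-measurement steps.

```python
# Illustrative sketch only: fake yearly frequencies for four topic facets,
# with a lagged correlation as a crude stand-in for edge-strength scoring.
import random
from statistics import mean, stdev

random.seed(0)
years = 30

# Hypothetical synthetic series: "task" drives "method", which drives
# "dataset"; "metric" is independent noise (mirroring the reported ordering).
task = [random.gauss(0, 1) for _ in range(years)]
method = [0.8 * task[t - 1] + random.gauss(0, 0.5) if t else random.gauss(0, 1)
          for t in range(years)]
dataset = [0.6 * method[t - 1] + random.gauss(0, 0.5) if t else random.gauss(0, 1)
           for t in range(years)]
metric = [random.gauss(0, 1) for _ in range(years)]

def lagged_corr(x, y, lag=1):
    """Pearson correlation between x[t - lag] and y[t]."""
    xs, ys = x[:-lag], y[lag:]
    mx, my, sx, sy = mean(xs), mean(ys), stdev(xs), stdev(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (len(xs) - 1)
    return cov / (sx * sy)

facets = {"task": task, "method": method, "dataset": dataset, "metric": metric}
edges = {(a, b): lagged_corr(sa, sb)
         for a, sa in facets.items() for b, sb in facets.items() if a != b}
strongest = max(edges, key=lambda e: abs(edges[e]))
print("strongest lagged link:", strongest)
```

On this synthetic data, the strongest recovered edge should point from the driving facet to the driven one, which is the kind of directional finding the paper reports at corpus scale.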
Related papers
- A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery [68.48094108571432]
Large language models (LLMs) have revolutionized the way text and other modalities of data are handled.
We aim to provide a more holistic view of the research landscape by unveiling cross-field and cross-modal connections between scientific LLMs.
arXiv Detail & Related papers (2024-06-16T08:03:24Z)
- Discovery of the Hidden World with Large Language Models [95.58823685009727]
This paper presents Causal representatiOn AssistanT (COAT) that introduces large language models (LLMs) to bridge the gap.
LLMs are trained on massive observations of the world and have demonstrated great capability in extracting key information from unstructured data.
COAT also adopts causal discovery (CD) methods to find causal relations among the identified variables and to provide feedback to the LLMs, iteratively refining the proposed factors.
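The propose-test-refine loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in: `propose_factors` fakes an LLM call with a canned candidate pool, and `cd_accepts` fakes a causal-discovery test; neither reflects COAT's actual components.

```python
# Hedged sketch of a COAT-style loop: an LLM proposes candidate causal
# factors, a causal-discovery (CD) step keeps those that pass its test,
# and rejected factors are fed back so the next round can avoid them.

def propose_factors(feedback):
    # Stand-in for an LLM call: a canned pool, minus factors already rejected.
    pool = ["sentiment", "aroma", "price", "packaging_color"]
    return [f for f in pool if f not in feedback["rejected"]]

def cd_accepts(factor):
    # Stand-in for a conditional-independence test against the target.
    return factor in {"sentiment", "aroma", "price"}

def coat_loop(max_rounds=3):
    accepted, feedback = set(), {"rejected": set()}
    for _ in range(max_rounds):
        newly_rejected = set()
        for f in propose_factors(feedback):
            if cd_accepts(f):
                accepted.add(f)
            else:
                newly_rejected.add(f)
        if not newly_rejected:
            break  # proposals have stabilized
        feedback["rejected"] |= newly_rejected
    return accepted

print(sorted(coat_loop()))
```

The loop terminates once a round produces no new rejections, which is one plausible reading of "iteratively refine the proposed factors."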
arXiv Detail & Related papers (2024-02-06T12:18:54Z)
- Exploring the Landscape of Natural Language Processing Research [3.3916160303055567]
Several NLP-related approaches have been surveyed in the research community.
However, a comprehensive study that categorizes established topics, identifies trends, and outlines areas for future research has remained absent.
To fill this gap, we present a structured overview of the research landscape, provide a taxonomy of fields of study in NLP, analyze recent developments in NLP, summarize our findings, and highlight directions for future work.
arXiv Detail & Related papers (2023-07-20T07:33:30Z)
- An information-theoretic perspective on intrinsic motivation in reinforcement learning: a survey [0.0]
We propose to survey these research works through a new taxonomy based on information theory.
We computationally revisit the notions of surprise, novelty and skill learning.
Our analysis suggests that novelty and surprise can assist the building of a hierarchy of transferable skills.
arXiv Detail & Related papers (2022-09-19T09:47:43Z)
- A Review and Roadmap of Deep Learning Causal Discovery in Different Variable Paradigms [15.483478537540385]
This paper divides the possible causal discovery tasks into three types according to the variable paradigm.
For each task, we then define and instantiate the relevant datasets together with the final causal model to be constructed.
We propose some roadmaps from different perspectives for the current research gaps in the field of causal discovery.
arXiv Detail & Related papers (2022-09-14T01:52:17Z)
- Research Trends and Applications of Data Augmentation Algorithms [77.34726150561087]
We identify the main areas of application of data augmentation algorithms, the types of algorithms used, significant research trends, their progression over time and research gaps in data augmentation literature.
We expect readers to understand the potential of data augmentation, as well as identify future research directions and open questions within data augmentation research.
arXiv Detail & Related papers (2022-07-18T11:38:32Z)
- Research Topic Flows in Co-Authorship Networks [0.0]
We propose a graph structure for the analysis of research topic flows between scientific authors and their respective research fields.
To construct a topic flow network (TFN), our method requires only a corpus of publications (i.e., author and abstract information).
We demonstrate the utility of TFNs by applying our method to two comprehensive corpora totaling 20 million publications and spanning more than 60 years of research in the fields of computer science and mathematics.
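One plausible way to read "topic flows between authors and their research fields" is as author-level topic transitions aggregated into directed topic-to-topic weights. The sketch below is an illustrative assumption, not the paper's actual TFN construction; the publication records and counting rule are invented.

```python
# Hedged sketch: count how often an author publishes on topic X and later
# on topic Y, and aggregate these transitions into directed flow weights.
from collections import defaultdict

# (year, authors, topic) for a handful of hypothetical publications.
pubs = [
    (2000, {"ada"}, "logic"),
    (2005, {"ada", "bob"}, "machine_learning"),
    (2010, {"bob"}, "nlp"),
    (2012, {"ada"}, "nlp"),
]

# Chronological topic sequence per author.
history = defaultdict(list)
for year, authors, topic in sorted(pubs):
    for a in authors:
        history[a].append(topic)

# Flow weight: number of authors moving from one topic to the next.
flows = defaultdict(int)
for topics in history.values():
    for src, dst in zip(topics, topics[1:]):
        if src != dst:
            flows[(src, dst)] += 1

print(dict(flows))
```

Note that only author and topic (abstract-derived) information is consumed, matching the paper's claim that a publication corpus alone suffices.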
arXiv Detail & Related papers (2022-06-16T07:45:53Z)
- Research topic trend prediction of scientific papers based on spatial enhancement and dynamic graph convolution network [6.73620879761516]
In recent years, with increasing social investment in scientific research, the number of research results in various fields has grown significantly.
Because various research themes have become increasingly closely correlated, dependency relationships exist among a large number of them.
We propose a deep neural network-based research topic hotness prediction algorithm, a temporal convolutional network model with spatial enhancement and dynamic graph convolution.
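The temporal-convolution idea behind such predictors can be shown in miniature: a causal 1-D convolution over a topic's yearly hotness series, where each output depends only on current and past values. The series and kernel below are toy values, not the paper's learned model.

```python
# Hedged sketch of a causal 1-D convolution over a topic-hotness series.
def causal_conv1d(series, kernel):
    """Output[t] = sum_k kernel[k] * series[t - k], with zero padding."""
    out = []
    for t in range(len(series)):
        out.append(sum(w * (series[t - k] if t - k >= 0 else 0.0)
                       for k, w in enumerate(kernel)))
    return out

hotness = [1.0, 2.0, 3.0, 4.0]  # yearly topic-hotness values
kernel = [0.5, 0.25, 0.25]      # weights for t, t-1, t-2
print(causal_conv1d(hotness, kernel))
```

The zero padding on the left keeps the convolution causal, so a prediction for year t never peeks at years after t.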
arXiv Detail & Related papers (2022-03-30T12:38:52Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
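The core idea of intervention targeting (pick the intervention that best narrows down the candidate causal structures) can be sketched on a toy scale. The candidate graphs and the discrimination score below are illustrative stand-ins, not the paper's neural model.

```python
# Hedged sketch of active intervention targeting: given candidate causal
# graphs over variables {A, B, C}, pick the variable whose intervention
# best discriminates among the remaining candidates.
from itertools import combinations

# Candidate structures as parent maps: which variables feed into each node.
candidates = [
    {"A": [], "B": ["A"], "C": ["B"]},    # chain A -> B -> C
    {"A": [], "B": ["A"], "C": ["A"]},    # fork out of A
    {"A": [], "B": [], "C": ["A", "B"]},  # collider at C
]

def children(graph, var):
    """Variables that would respond to an intervention on `var`."""
    return frozenset(v for v, parents in graph.items() if var in parents)

def discrimination(var):
    """How many candidate pairs an intervention on `var` tells apart:
    two graphs are distinguished when `var` has different child sets."""
    return sum(children(g1, var) != children(g2, var)
               for g1, g2 in combinations(candidates, 2))

best = max(["A", "B", "C"], key=discrimination)
print("intervene on:", best, "score:", discrimination(best))
```

Greedily choosing the most discriminating target each round is the intuition for why targeted interventions need fewer interactions than random ones.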
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- A Survey on Causal Inference [64.45536158710014]
Causal inference is a critical research topic across many domains, such as statistics, computer science, education, public policy and economics.
Various causal effect estimation methods for observational data have emerged.
arXiv Detail & Related papers (2020-02-05T21:35:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.