Identifying Causal Influences on Publication Trends and Behavior: A Case Study of the Computational Linguistics Community
- URL: http://arxiv.org/abs/2110.07938v1
- Date: Fri, 15 Oct 2021 08:36:13 GMT
- Title: Identifying Causal Influences on Publication Trends and Behavior: A Case Study of the Computational Linguistics Community
- Authors: Maria Glenski and Svitlana Volkova
- Abstract summary: We present mixed-method analyses to investigate causal influences of publication trends and behavior.
Key findings highlight the transition to rapidly emerging methodologies in the research community.
We anticipate that this work will provide useful insights into publication trends and behavior.
- Score: 10.791197825505755
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Drawing causal conclusions from observational real-world data is a highly desired but challenging task. In this paper we present mixed-method analyses to investigate causal influences of publication trends and behavior on the adoption, persistence, and retirement of certain research foci -- methodologies, materials, and tasks that are of interest to the computational linguistics (CL) community. Our key findings highlight evidence of the transition to rapidly emerging methodologies in the research community (e.g., adoption of bidirectional LSTMs influencing the retirement of LSTMs), the persistent engagement with trending tasks and techniques (e.g., deep learning, embeddings, generative and language models), the effect of scientist location outside the US (e.g., China) on the propensity to research languages beyond English, and the potential impact of funding for large-scale research programs. We anticipate that this work will provide useful insights into publication trends and behavior and raise awareness of the potential for causal inference in the computational linguistics community and the broader scientific community.
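The core challenge the abstract describes, drawing causal conclusions from observational data, can be illustrated with a minimal synthetic sketch (all variables and numbers below are illustrative assumptions, not data from the paper): a naive comparison of outcomes is biased by a confounder, while stratifying on the confounder (backdoor adjustment) recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Synthetic observational data (illustrative only): a confounder c
# drives both the "treatment" t and the outcome y.
c = rng.binomial(1, 0.5, n)                   # confounder
t = rng.binomial(1, 0.2 + 0.6 * c)            # treatment depends on c
y = 2.0 * t + 3.0 * c + rng.normal(0, 1, n)   # true causal effect of t is 2.0

# Naive estimate: difference in means, ignoring the confounder (biased).
naive = y[t == 1].mean() - y[t == 0].mean()

# Adjusted estimate: stratify on c, average per-stratum differences
# weighted by P(c = k) -- the backdoor adjustment formula.
adjusted = sum(
    (y[(t == 1) & (c == k)].mean() - y[(t == 0) & (c == k)].mean()) * (c == k).mean()
    for k in (0, 1)
)

print(f"naive:    {naive:.2f}")     # inflated well above 2.0 by the confounder
print(f"adjusted: {adjusted:.2f}")  # close to the true effect 2.0
```

The naive estimate absorbs the confounder's contribution (here roughly 3.0 times the difference in P(c=1) between treated and untreated), while the stratified estimate does not.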
Related papers
- Collaborative Participatory Research with LLM Agents in South Asia: An Empirically-Grounded Methodological Initiative and Agenda from Field Evidence in Sri Lanka [4.2784137244658025]
This paper presents an empirically grounded methodological framework designed to transform participatory development research.
It is situated in the challenging multilingual context of Sri Lanka's flood-prone Nilwala River Basin.
This research agenda advocates for AI-driven participatory research tools that maintain ethical considerations, cultural respect, and operational efficiency.
arXiv Detail & Related papers (2024-11-13T02:21:59Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM Systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Decoding Large-Language Models: A Systematic Overview of Socio-Technical Impacts, Constraints, and Emerging Questions [1.1970409518725493]
The article highlights the application areas that could have a positive impact on society along with the ethical considerations.
It includes responsible development considerations, algorithmic improvements, ethical challenges, and societal implications.
arXiv Detail & Related papers (2024-09-25T14:36:30Z)
- From Pre-training Corpora to Large Language Models: What Factors Influence LLM Performance in Causal Discovery Tasks? [51.42906577386907]
This study explores the factors influencing the performance of Large Language Models (LLMs) in causal discovery tasks.
A higher frequency of causal mentions correlates with better model performance, suggesting that extensive exposure to causal information during training enhances the models' causal discovery capabilities.
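The finding above is a correlation between how often causal relations are mentioned in pretraining data and downstream causal-discovery performance. A minimal sketch of how such a correlation would be computed (the numbers below are made up for illustration, not figures from the paper):

```python
import math

# Illustrative, made-up numbers: frequency of causal mentions in a
# pretraining corpus (per million tokens) and a model's causal-discovery
# accuracy. These are NOT results from the paper.
mentions = [5, 12, 30, 55, 80]
accuracy = [0.41, 0.48, 0.55, 0.63, 0.70]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(mentions, accuracy)
print(f"r = {r:.3f}")  # strongly positive for these illustrative numbers
```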
arXiv Detail & Related papers (2024-07-29T01:45:05Z)
- Data-Centric AI in the Age of Large Language Models [51.20451986068925]
This position paper proposes a data-centric viewpoint of AI research, focusing on large language models (LLMs).
We make the key observation that data is instrumental in the developmental (e.g., pretraining and fine-tuning) and inferential stages (e.g., in-context learning) of LLMs.
We identify four specific scenarios centered around data, covering data-centric benchmarks and data curation, data attribution, knowledge transfer, and inference contextualization.
arXiv Detail & Related papers (2024-06-20T16:34:07Z)
- Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents [80.5213198675411]
Large language models (LLMs) have dramatically enhanced the field of language intelligence.
LLMs leverage the intriguing chain-of-thought (CoT) reasoning techniques, obliging them to formulate intermediate steps en route to deriving an answer.
Recent research endeavors have extended CoT reasoning methodologies to nurture the development of autonomous language agents.
arXiv Detail & Related papers (2023-11-20T14:30:55Z)
- Mapping Computer Science Research: Trends, Influences, and Predictions [0.0]
We employ advanced machine learning techniques, including Decision Tree and Logistic Regression models, to predict trending research areas.
Our analysis reveals that the number of references cited in research papers (Reference Count) plays a pivotal role in determining trending research areas.
The Logistic Regression model outperforms the Decision Tree model in predicting trends, exhibiting higher accuracy, precision, recall, and F1 score.
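The model comparison above rests on four standard classification metrics. A minimal sketch of how accuracy, precision, recall, and F1 are computed from binary predictions (the label and prediction vectors are made up for illustration, not data from the paper; "1" = trending research area):

```python
# Made-up labels and predictions for illustration only.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

# Confusion-matrix counts.
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))

accuracy = (tp + tn) / len(y_true)              # fraction of correct predictions
precision = tp / (tp + fp)                      # of predicted trending, how many were
recall = tp / (tp + fn)                         # of actually trending, how many found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(accuracy, precision, recall, f1)
```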
arXiv Detail & Related papers (2023-08-01T16:59:25Z)
- A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why? [84.46288849132634]
We propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques.
We define three variables to encompass diverse facets of the evolution of research topics within NLP.
We utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data.
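Constraint-based causal discovery algorithms of this kind rely on conditional-independence tests over the observed variables. A minimal numpy sketch of the idea (with synthetic variables forming an assumed chain X → Z → Y, not the paper's actual variables): X and Y are strongly correlated marginally, but their partial correlation given Z is near zero, which is the signal such algorithms use to rule out a direct edge.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Synthetic causal chain X -> Z -> Y (illustrative only).
x = rng.normal(size=n)
z = x + 0.5 * rng.normal(size=n)
y = z + 0.5 * rng.normal(size=n)

def partial_corr(a, b, given):
    """Correlation of a and b after linearly regressing out `given` from both."""
    g = np.column_stack([np.ones_like(given), given])
    ra = a - g @ np.linalg.lstsq(g, a, rcond=None)[0]
    rb = b - g @ np.linalg.lstsq(g, b, rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

marginal = np.corrcoef(x, y)[0, 1]   # strong: X and Y are dependent
conditional = partial_corr(x, y, z)  # near zero: X is independent of Y given Z
print(f"corr(X, Y)     = {marginal:.3f}")
print(f"corr(X, Y | Z) = {conditional:.3f}")
```

In a full algorithm such as PC, many such tests over different conditioning sets determine which edges survive in the recovered causal graph.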
arXiv Detail & Related papers (2023-05-22T11:08:00Z)
- Expanding the Role of Affective Phenomena in Multimodal Interaction Research [57.069159905961214]
We examined over 16,000 papers from selected conferences in multimodal interaction, affective computing, and natural language processing.
We identify 910 affect-related papers and present our analysis of the role of affective phenomena in these papers.
We find limited research on how affect and emotion predictions might be used by AI systems to enhance machine understanding of human social behaviors and cognitive states.
arXiv Detail & Related papers (2023-05-18T09:08:39Z)
- Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond [38.055142444836925]
We consolidate research across academic areas and situate it in the broader Natural Language Processing landscape.
We introduce the statistical challenge of estimating causal effects, encompassing settings where text is used as an outcome, treatment, or as a means to address confounding.
In addition, we explore potential uses of causal inference to improve the performance, robustness, fairness, and interpretability of NLP models.
arXiv Detail & Related papers (2021-09-02T05:40:08Z)
- Multi-Agent Reinforcement Learning as a Computational Tool for Language Evolution Research: Historical Context and Future Challenges [21.021451344428716]
Computational models of emergent communication in agent populations are currently gaining interest in the machine learning community due to recent advances in Multi-Agent Reinforcement Learning (MARL).
The goal of this paper is to position recent MARL contributions within the historical context of language evolution research, as well as to extract from this theoretical and computational background a few challenges for future research.
arXiv Detail & Related papers (2020-02-20T17:26:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.