Identifying Causal Influences on Publication Trends and Behavior: A Case
Study of the Computational Linguistics Community
- URL: http://arxiv.org/abs/2110.07938v1
- Date: Fri, 15 Oct 2021 08:36:13 GMT
- Authors: Maria Glenski and Svitlana Volkova
- Abstract summary: We present mixed-method analyses to investigate causal influences of publication trends and behavior.
Key findings highlight the transition to rapidly emerging methodologies in the research community.
We anticipate this work will provide useful insights about publication trends and behavior.
- Score: 10.791197825505755
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Drawing causal conclusions from observational real-world data is a
much-desired but challenging task. In this paper we present mixed-method
analyses to investigate causal influences of publication trends and behavior on
the adoption, persistence, and retirement of certain research foci --
methodologies, materials, and tasks that are of interest to the computational
linguistics (CL) community. Our key findings highlight evidence of the
transition to rapidly emerging methodologies in the research community (e.g.,
adoption of bidirectional LSTMs influencing the retirement of LSTMs), the
persistent engagement with trending tasks and techniques (e.g., deep learning,
embeddings, generative, and language models), the effect of scientists being
located outside the US (e.g., in China) on the propensity to research languages
beyond English, and the potential impact of funding for large-scale research
programs. We anticipate this work will provide useful insights about
publication trends and behavior and raise awareness about the potential for
causal inference in computational linguistics and the broader scientific
community.
Related papers
- Data-Centric AI in the Age of Large Language Models [51.20451986068925]
This position paper proposes a data-centric viewpoint of AI research, focusing on large language models (LLMs).
We make the key observation that data is instrumental in the developmental (e.g., pretraining and fine-tuning) and inferential stages (e.g., in-context learning) of LLMs.
We identify four specific scenarios centered around data, covering data-centric benchmarks and data curation, data attribution, knowledge transfer, and inference contextualization.
arXiv Detail & Related papers (2024-06-20T16:34:07Z)
- Igniting Language Intelligence: The Hitchhiker's Guide From Chain-of-Thought Reasoning to Language Agents [80.5213198675411]
Large language models (LLMs) have dramatically enhanced the field of language intelligence.
LLMs leverage the intriguing chain-of-thought (CoT) reasoning techniques, obliging them to formulate intermediate steps en route to deriving an answer.
Recent research endeavors have extended CoT reasoning methodologies to nurture the development of autonomous language agents.
arXiv Detail & Related papers (2023-11-20T14:30:55Z)
- Trends in Integration of Knowledge and Large Language Models: A Survey and Taxonomy of Methods, Benchmarks, and Applications [42.61727038213399]
Large language models (LLMs) exhibit superior performance on various natural language tasks, but they are susceptible to issues stemming from outdated data and domain-specific limitations.
We propose a review to discuss the trends in integration of knowledge and large language models, including taxonomy of methods, benchmarks, and applications.
arXiv Detail & Related papers (2023-11-10T05:24:04Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Mapping Computer Science Research: Trends, Influences, and Predictions [0.0]
We employ advanced machine learning techniques, including Decision Tree and Logistic Regression models, to predict trending research areas.
Our analysis reveals that the number of references cited in research papers (Reference Count) plays a pivotal role in determining trending research areas.
The Logistic Regression model outperforms the Decision Tree model in predicting trends, exhibiting higher accuracy, precision, recall, and F1 score.
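The kind of model comparison this summary describes can be sketched in a few lines. The sketch below uses synthetic data, not the paper's dataset or pipeline; one generated feature merely stands in for the Reference Count variable the summary highlights.

```python
# Illustrative sketch only (synthetic data, not the paper's dataset or
# pipeline): compare a Decision Tree and a Logistic Regression model on a
# binary "trending vs. not trending" classification task.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic features; column 0 plays the role of Reference Count.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

results = {}
for name, model in [("Decision Tree", DecisionTreeClassifier(random_state=0)),
                    ("Logistic Regression", LogisticRegression(max_iter=1000))]:
    model.fit(X_train, y_train)
    results[name] = model.score(X_test, y_test)  # test-set accuracy
    print(f"{name}: accuracy = {results[name]:.3f}")
```

In practice the paper also reports precision, recall, and F1, which `sklearn.metrics` computes from the same predictions.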
arXiv Detail & Related papers (2023-08-01T16:59:25Z)
- A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why? [84.46288849132634]
We propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques.
We define three variables to encompass diverse facets of the evolution of research topics within NLP.
We utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data.
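A toy example can illustrate why causal (rather than correlational) analysis of observational data matters in work like this. This is not the paper's causal discovery algorithm, only a minimal regression-adjustment sketch on simulated variables with a known confounder.

```python
# Toy illustration (not the paper's discovery algorithm): a confounder z
# drives both x and y, so the naive x->y slope is biased; adjusting for z
# recovers the true causal coefficient of 1.5.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                      # confounder
x = 2.0 * z + rng.normal(size=n)            # treatment-like variable
y = 1.5 * x + 3.0 * z + rng.normal(size=n)  # outcome

# Naive slope of y on x is biased (analytically ~2.7 for this setup).
naive = np.polyfit(x, y, 1)[0]

# Regressing y on both x and z (backdoor adjustment) recovers ~1.5.
A = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(A, y, rcond=None)[0][0]
print(f"naive slope: {naive:.2f}, adjusted slope: {adjusted:.2f}")
```

Causal discovery algorithms address the harder upstream problem of inferring which adjustment structure (the graph itself) holds among the observed variables.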
arXiv Detail & Related papers (2023-05-22T11:08:00Z)
- Expanding the Role of Affective Phenomena in Multimodal Interaction Research [57.069159905961214]
We examined over 16,000 papers from selected conferences in multimodal interaction, affective computing, and natural language processing.
We identify 910 affect-related papers and present our analysis of the role of affective phenomena in these papers.
We find limited research on how affect and emotion predictions might be used by AI systems to enhance machine understanding of human social behaviors and cognitive states.
arXiv Detail & Related papers (2023-05-18T09:08:39Z)
- A Survey on In-context Learning [75.41718234460895]
In-context learning (ICL) has emerged as a new paradigm for natural language processing (NLP).
We first present a formal definition of ICL and clarify its correlation to related studies.
We then organize and discuss advanced techniques, including training strategies, prompt designing strategies, and related analysis.
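The prompt-designing side of ICL can be shown concretely. The sketch below assembles a few-shot prompt from labeled demonstrations; the review texts and labels are invented for illustration, and the survey covers many more sophisticated demonstration-selection and formatting strategies.

```python
# Minimal few-shot ICL prompt: the "training" signal is supplied as
# demonstrations inside the prompt, not via gradient updates.
demonstrations = [
    ("The movie was wonderful.", "positive"),
    ("A dull, plodding film.", "negative"),
]
query = "An absolute delight from start to finish."

prompt = "\n".join(f"Review: {text}\nSentiment: {label}"
                   for text, label in demonstrations)
prompt += f"\nReview: {query}\nSentiment:"
print(prompt)  # an LLM would be asked to complete the final label
```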
arXiv Detail & Related papers (2022-12-31T15:57:09Z)
- Computational Inference in Cognitive Science: Operational, Societal and Ethical Considerations [13.173307471333619]
Computational advances have transformed cognitive science into a data-driven field.
There is a proliferation of cognitive theories investigated and interpreted through different academic lenses.
We identify the operational challenges, societal impacts, and ethical guidelines in conducting research.
arXiv Detail & Related papers (2022-10-24T18:27:27Z)
- Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond [38.055142444836925]
We consolidate research across academic areas and situate it in the broader Natural Language Processing landscape.
We introduce the statistical challenge of estimating causal effects, encompassing settings where text is used as an outcome, treatment, or as a means to address confounding.
In addition, we explore potential uses of causal inference to improve the performance, robustness, fairness, and interpretability of NLP models.
arXiv Detail & Related papers (2021-09-02T05:40:08Z)
- Multi-Agent Reinforcement Learning as a Computational Tool for Language Evolution Research: Historical Context and Future Challenges [21.021451344428716]
Computational models of emergent communication in agent populations are currently gaining interest in the machine learning community due to recent advances in Multi-Agent Reinforcement Learning (MARL).
The goal of this paper is to position recent MARL contributions within the historical context of language evolution research, as well as to extract from this theoretical and computational background a few challenges for future research.
arXiv Detail & Related papers (2020-02-20T17:26:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.