Linking Theories and Methods in Cognitive Sciences via Joint Embedding
of the Scientific Literature: The Example of Cognitive Control
- URL: http://arxiv.org/abs/2203.11016v1
- Date: Wed, 16 Mar 2022 11:03:09 GMT
- Title: Linking Theories and Methods in Cognitive Sciences via Joint Embedding
of the Scientific Literature: The Example of Cognitive Control
- Authors: Morteza Ansarinia, Paul Schrater, Pedro Cardoso-Leite
- Abstract summary: We present an alternative approach to linking theory and practice of Cognitive Control.
We performed automated text analyses on a large body of scientific texts to create a joint representation of tasks and constructs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditionally, theory and practice of Cognitive Control are linked via
literature reviews by human domain experts. This approach, however, is
inadequate to track the ever-growing literature. It may also be biased and
yield redundancies and confusion. Here we present an alternative approach. We
performed automated text analyses on a large body of scientific texts to create
a joint representation of tasks and constructs. More specifically, 531,748
scientific abstracts were first mapped into an embedding space using a
transformers-based language model. Document embeddings were then used to
identify a task-construct graph embedding that grounds constructs on tasks and
supports nuanced meaning of the constructs by taking advantage of constrained
random walks in the graph. This joint task-construct graph embedding can be
queried to generate task batteries targeting specific constructs, may reveal
knowledge gaps in the literature, and may inspire new tasks and novel hypotheses.
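The two-stage pipeline the abstract describes (transformer-based document embeddings, then constrained random walks over a task-construct graph) can be illustrated with a minimal sketch. The graph, node names, and walk rule below are illustrative assumptions, not the authors' implementation; in practice the resulting walks would be fed to a skip-gram model (e.g., gensim's Word2Vec) to learn the joint embedding.

```python
# Minimal, hypothetical sketch of stage (2): constrained random walks on a
# task-construct graph. Edges are assumed to come from stage (1), e.g. from
# co-occurrence of task and construct terms in nearby document embeddings.
import random
from collections import defaultdict

def constrained_walk(graph, start, length, rng):
    """Random walk over the bipartite graph. Because edges only connect
    tasks to constructs, the walk alternates task -> construct -> task,
    keeping every construct visited grounded in a task context."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph[walk[-1]]
        if not neighbors:
            break
        walk.append(rng.choice(neighbors))
    return walk

# Toy bipartite task-construct graph (node names are illustrative).
graph = defaultdict(list)
edges = [
    ("task:Stroop", "construct:inhibition"),
    ("task:N-back", "construct:working memory"),
    ("task:task-switching", "construct:cognitive flexibility"),
    ("task:Stroop", "construct:attention"),
]
for task, construct in edges:
    graph[task].append(construct)
    graph[construct].append(task)

rng = random.Random(0)
walks = [constrained_walk(graph, "task:Stroop", 5, rng) for _ in range(3)]
# Each walk is a task/construct sequence; a corpus of such walks could be
# treated as "sentences" for a skip-gram model to embed tasks and
# constructs in one joint space.
```

The bipartite constraint is what grounds constructs on tasks: a construct can only be reached through a task that operationalizes it, so constructs sharing many tasks end up with similar embeddings.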
Related papers
- Detecting text level intellectual influence with knowledge graph embeddings
We collect a corpus of open source journal articles and generate Knowledge Graph representations using the Gemini LLM.
We attempt to predict the existence of citations between sampled pairs of articles using previously published methods and a novel Graph Neural Network based embedding model.
arXiv Detail & Related papers (2024-10-31T15:21:27Z)
- ATLANTIC: Structure-Aware Retrieval-Augmented Language Model for Interdisciplinary Science
Large language models achieve impressive performance on many natural language processing tasks.
Retrieval augmentation offers an effective solution by retrieving context from external knowledge sources.
We propose a novel structure-aware retrieval augmented language model that accommodates document structure during retrieval augmentation.
arXiv Detail & Related papers (2023-11-21T02:02:46Z)
- Using Natural Language Processing and Networks to Automate Structured Literature Reviews: An Application to Farmers Climate Change Adaptation
This work uses Natural Language Processing to extract relations between variables and synthesize findings using networks.
As an example, we apply our methodology to the analysis of farmers' adaptation to climate change.
Results show that the use of Natural Language Processing together with networks in a descriptive manner offers a fast and interpretable way to synthesize literature review findings.
arXiv Detail & Related papers (2023-06-16T10:05:47Z)
- Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
- CitationIE: Leveraging the Citation Graph for Scientific Information Extraction
We use the citation graph of referential links between citing and cited papers.
We observe a sizable improvement in end-to-end information extraction over the state-of-the-art.
arXiv Detail & Related papers (2021-06-03T03:00:12Z)
- What's New? Summarizing Contributions in Scientific Literature
We introduce a new task of disentangled paper summarization, which seeks to generate separate summaries for the paper contributions and the context of the work.
We extend the S2ORC corpus of academic articles by adding disentangled "contribution" and "context" reference labels.
We propose a comprehensive automatic evaluation protocol which reports the relevance, novelty, and disentanglement of generated outputs.
arXiv Detail & Related papers (2020-11-06T02:23:01Z)
- Positioning yourself in the maze of Neural Text Generation: A Task-Agnostic Survey
This paper surveys the components of modeling approaches, relating task impacts across various generation tasks such as storytelling, summarization, and translation.
We present an abstraction of the imperative techniques with respect to learning paradigms, pretraining, modeling approaches, decoding, and the key challenges outstanding in each.
arXiv Detail & Related papers (2020-10-14T17:54:42Z)
- Summarizing Text on Any Aspects: A Knowledge-Informed Weakly-Supervised Approach
We study summarizing on arbitrary aspects relevant to the document.
Due to the lack of supervision data, we develop a new weak supervision construction method and an aspect modeling scheme.
Experiments show our approach achieves performance boosts on summarizing both real and synthetic documents.
arXiv Detail & Related papers (2020-10-14T03:20:46Z)
- A Scientific Information Extraction Dataset for Nature Inspired Engineering
This paper describes a dataset of 1,500 manually-annotated sentences that express domain-independent relations between central concepts in a scientific biology text.
The arguments of these relations can be Multi Word Expressions and have been annotated with modifying phrases to form non-projective graphs.
The dataset allows for training and evaluating Relation Extraction algorithms that aim for coarse-grained typing of scientific biological documents.
arXiv Detail & Related papers (2020-05-15T19:25:12Z)
- Explaining Relationships Between Scientific Documents
We address the task of explaining relationships between two scientific documents using natural language text.
In this paper we establish a dataset of 622K examples from 154K documents.
arXiv Detail & Related papers (2020-02-02T03:54:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.