Exploration and Discovery of the COVID-19 Literature through Semantic
Visualization
- URL: http://arxiv.org/abs/2007.01800v1
- Date: Fri, 3 Jul 2020 16:40:37 GMT
- Title: Exploration and Discovery of the COVID-19 Literature through Semantic
Visualization
- Authors: Jingxuan Tu, Marc Verhagen, Brent Cochran, James Pustejovsky
- Abstract summary: We are developing semantic visualization techniques to enhance exploration and enable discovery over large datasets of relations.
Our hope is that this will enable the discovery of novel inferences over relations in complex data that otherwise would go unnoticed.
- Score: 9.687961759392559
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We are developing semantic visualization techniques in order to enhance
exploration and enable discovery over large datasets of complex networks of
relations. Semantic visualization is a method of enabling exploration and
discovery over large datasets of complex networks by exploiting the semantics
of the relations in them. This involves (i) NLP to extract named entities,
relations and knowledge graphs from the original data; (ii) indexing the output
and creating representations for all relevant entities and relations that can
be visualized in many different ways, e.g., as tag clouds, heat maps, graphs,
etc.; (iii) applying parameter reduction operations to the extracted relations,
creating "relation containers", or functional entities that can also be
visualized using the same methods, allowing the visualization of multiple
relations, partial pathways, and exploration across multiple dimensions. Our
hope is that this will enable the discovery of novel inferences over relations
in complex data that otherwise would go unnoticed. We have applied this to
analysis of the recently released CORD-19 dataset.
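To make the three steps concrete, the sketch below walks through a toy version of the pipeline: entity extraction, graph-style indexing, and a collapsed "relation container". It is a minimal illustration only, assuming spaCy (with the en_core_web_sm model installed) and networkx; the co-occurrence relation and the container construction are stand-ins, not the paper's actual relation extraction or parameter-reduction operations.

```python
# Minimal sketch of the three-step pipeline above. Assumes spaCy with the
# "en_core_web_sm" model and networkx. The co-occurrence "relation" and the
# "relation container" below are illustrative stand-ins for the paper's own
# extraction and parameter-reduction methods, which are not specified here.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")

# (i) Run NLP over a CORD-19-style sentence and collect named entities.
text = ("ACE2 is a functional receptor for SARS-CoV-2, and chloroquine "
        "was tested against SARS-CoV-2 in Vero cells.")
doc = nlp(text)
entities = [(ent.text, ent.label_) for ent in doc.ents]

# (ii) Index entities and relations as a graph that can back several views
# (node-link diagram, heat map, tag cloud). Here a "relation" is simply
# co-occurrence of two entities in the same sentence.
graph = nx.Graph()
for sent in doc.sents:
    ents = [ent.text for ent in sent.ents]
    graph.add_nodes_from(ents)
    for i, a in enumerate(ents):
        for b in ents[i + 1:]:
            graph.add_edge(a, b, relation="co-occurs-with")

# (iii) One reading of a "relation container": collapse all edges carrying
# the same relation label into a single functional node, so many pairwise
# relations can be explored and visualized through one entity.
container = "co-occurs-with"
graph.add_node(container, kind="relation-container")
for a, b, data in list(graph.edges(data=True)):
    if data.get("relation") == container:
        graph.add_edge(a, container)
        graph.add_edge(b, container)

print(entities)
print(sorted(graph.nodes()))
```

From here, networkx's drawing helpers (or any plotting library) can render the graph, and the same entity/relation index can back the other views mentioned in the abstract, such as tag clouds and heat maps.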
Related papers
- Graph-Augmented Relation Extraction Model with LLMs-Generated Support Document [7.0421339410165045]
This study introduces a novel approach to sentence-level relation extraction (RE).
It integrates Graph Neural Networks (GNNs) with Large Language Models (LLMs) to generate contextually enriched support documents.
Our experiments, conducted on the CrossRE dataset, demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-10-30T20:48:34Z)
- Graphical Reasoning: LLM-based Semi-Open Relation Extraction [3.2586315449885106]
We show how leveraging in-context learning with GPT-3.5 can significantly enhance the extraction process.
We introduce a novel graphical reasoning approach that dissects relation extraction into sequential sub-tasks.
arXiv Detail & Related papers (2024-04-30T21:41:53Z)
- LLM-Enhanced User-Item Interactions: Leveraging Edge Information for Optimized Recommendations [28.77605585519833]
Graph neural networks, a popular research area in recent years, have been the subject of numerous studies on relationship mining.
Current cutting-edge research in graph neural networks has not been effectively integrated with large language models.
We propose an innovative framework that combines the strong contextual representation capabilities of LLMs with the relationship extraction and analysis functions of GNNs.
arXiv Detail & Related papers (2024-02-14T23:12:09Z)
- ViRel: Unsupervised Visual Relations Discovery with Graph-level Analogy [65.5580334698777]
ViRel is a method for unsupervised discovery and learning of Visual Relations with graph-level analogy.
We show that our method achieves above 95% accuracy in relation classification.
It further generalizes to unseen tasks with more complicated relational structures.
arXiv Detail & Related papers (2022-07-04T16:56:45Z)
- On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503]
We introduce a simple architecture based on similarity-distribution scores, which we name Compositional Relational Network (CoRelNet).
We find that simple architectural choices can outperform existing models in out-of-distribution generalization.
arXiv Detail & Related papers (2022-06-09T16:24:01Z)
- Learning Relation-Specific Representations for Few-shot Knowledge Graph Completion [24.880078645503417]
We propose a Relation-Specific Context Learning (RSCL) framework, which exploits graph contexts of triples to capture semantic information of relations and entities simultaneously.
Experimental results on two public datasets demonstrate that RSCL outperforms state-of-the-art FKGC methods.
arXiv Detail & Related papers (2022-03-22T11:45:48Z)
- Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634]
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose Semantic Graph Convolutional Networks (SGCN), which explore implicit semantics by learning latent semantic paths in graphs.
arXiv Detail & Related papers (2021-01-16T16:18:43Z)
- Context-Enhanced Entity and Relation Embedding for Knowledge Graph Completion [2.580765958706854]
We propose a model named AggrE, which conducts efficient aggregations on entity context and relation context in multi-hops.
Experiment results show that AggrE is competitive to existing models.
arXiv Detail & Related papers (2020-12-13T09:20:42Z)
- Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction [84.64435075778988]
We propose a general approach to learn relation prototypes from unlabeled texts.
We learn relation prototypes as an implicit factor between entities.
We conduct experiments on two publicly available datasets: New York Times and Google Distant Supervision.
arXiv Detail & Related papers (2020-11-27T06:21:12Z)
- Cross-Supervised Joint-Event-Extraction with Heterogeneous Information Networks [61.950353376870154]
Joint-event-extraction is a sequence-to-sequence labeling task whose tag set combines trigger tags and entity tags.
We propose a Cross-Supervised Mechanism (CSM) to alternately supervise the extraction of triggers or entities.
Our approach outperforms the state-of-the-art methods in both entity and trigger extraction.
arXiv Detail & Related papers (2020-10-13T11:51:17Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.