A logic-based relational learning approach to relation extraction: The
OntoILPER system
- URL: http://arxiv.org/abs/2001.04192v1
- Date: Mon, 13 Jan 2020 12:47:49 GMT
- Title: A logic-based relational learning approach to relation extraction: The
OntoILPER system
- Authors: Rinaldo Lima, Bernard Espinasse (LIS, R2I), Fred Freitas
- Abstract summary: We present OntoILPER, a logic-based relational learning approach to Relation Extraction.
OntoILPER takes advantage of a rich relational representation of examples, which can alleviate these drawbacks.
The proposed relational approach appears more suitable for Relation Extraction than statistical ones.
- Score: 0.9176056742068812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relation Extraction (RE), the task of detecting and characterizing semantic
relations between entities in text, has gained much importance in the last two
decades, mainly in the biomedical domain. Many papers have been published on
Relation Extraction using supervised machine learning techniques. Most of these
techniques rely on statistical methods, such as feature-based and
tree-kernels-based methods. Such statistical learning techniques are usually
based on a propositional hypothesis space for representing examples, i.e., they
employ an attribute-value representation of features. This kind of
representation has some drawbacks, particularly in the extraction of complex
relations that demand more contextual information about the involved
instances: it cannot effectively capture structural information
from parse trees without loss of information. In this work, we present
OntoILPER, a logic-based relational learning approach to Relation Extraction
that uses Inductive Logic Programming for generating extraction models in the
form of symbolic extraction rules. OntoILPER takes advantage of a rich
relational representation of examples, which can alleviate the aforementioned drawbacks.
The proposed relational approach seems more suitable for Relation
Extraction than statistical ones, for several reasons that we discuss. Moreover,
OntoILPER uses a domain ontology that guides the background knowledge
generation process and is used for storing the extracted relation instances.
The induced extraction rules were evaluated on three protein-protein
interaction datasets from the biomedical domain. The performance of OntoILPER
extraction models was compared with other state-of-the-art RE systems. The
encouraging results seem to demonstrate the effectiveness of the proposed
solution.
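To make the idea of symbolic extraction rules concrete, here is a minimal Python sketch of how an ILP-induced rule can be checked against relational background knowledge. The predicates (entity_type, lemma, dep), the example facts, and the rule body are illustrative assumptions for this sketch, not OntoILPER's actual representation.

```python
# Background knowledge: relational facts for one parsed sentence,
# e.g. "ProtA interacts with ProtB." (hypothetical example)
facts = {
    ("entity_type", "e1", "Protein"),
    ("entity_type", "e2", "Protein"),
    ("lemma", "t2", "interact"),
    ("dep", "t2", "e1", "nsubj"),  # e1 is the grammatical subject of "interact"
    ("dep", "t2", "e2", "obl"),    # e2 is an oblique argument of "interact"
}

def interacts(e1, e2, kb):
    """Illustrative induced rule, in Prolog-like notation:
    interacts(E1, E2) :- entity_type(E1, protein), entity_type(E2, protein),
                         lemma(V, interact), dep(V, E1, nsubj), dep(V, E2, obl).
    """
    verbs = {f[1] for f in kb if f[0] == "lemma" and f[2] == "interact"}
    return any(
        ("entity_type", e1, "Protein") in kb
        and ("entity_type", e2, "Protein") in kb
        and ("dep", v, e1, "nsubj") in kb
        and ("dep", v, e2, "obl") in kb
        for v in verbs
    )

print(interacts("e1", "e2", facts))  # True: every literal in the rule body holds
print(interacts("e2", "e1", facts))  # False: e2 is not the subject of "interact"
```

The point of the relational representation is visible here: the rule quantifies over parse-tree structure (a verb connecting both entities via dependency edges), which a flat attribute-value feature vector cannot express without loss of information.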
Related papers
- Maximizing Relation Extraction Potential: A Data-Centric Study to Unveil Challenges and Opportunities [3.8087810875611896]
This paper investigates the possible data-centric characteristics that impede neural relation extraction.
It emphasizes pivotal issues, such as contextual ambiguity, correlating relations, long-tail data, and fine-grained relation distributions.
It outlines future directions for alleviating these issues, making it a useful resource for both novice and advanced researchers.
(arXiv, 2024-09-07)
- Multi-modal Causal Structure Learning and Root Cause Analysis [67.67578590390907]
We propose Mulan, a unified multi-modal causal structure learning method for root cause localization.
We leverage a log-tailored language model to facilitate log representation learning, converting log sequences into time-series data.
We also introduce a novel key performance indicator-aware attention mechanism for assessing modality reliability and co-learning a final causal graph.
(arXiv, 2024-02-04)
- Entity or Relation Embeddings? An Analysis of Encoding Strategies for Relation Extraction [19.019881161010474]
Relation extraction is essentially a text classification problem, which can be tackled by fine-tuning a pre-trained language model (LM).
Existing approaches therefore solve the problem in an indirect way: they fine-tune an LM to learn embeddings of the head and tail entities, and then predict the relationship from these entity embeddings.
Our hypothesis in this paper is that relation extraction models can be improved by capturing relationships in a more direct way.
(arXiv, 2023-12-18)
- Siamese Representation Learning for Unsupervised Relation Extraction [5.776369192706107]
Unsupervised relation extraction (URE) aims at discovering underlying relations between named entity pairs from open-domain plain text.
Existing URE models that utilize contrastive learning, which attracts positive samples and repels negative samples to promote better separation, have achieved decent results.
We propose Siamese Representation Learning for Unsupervised Relation Extraction -- a novel framework that leverages only positive pairs for representation learning.
(arXiv, 2023-10-01)
- On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503]
We introduce a simple architecture based on similarity-distribution scores, which we name Compositional Relational Network (CoRelNet).
We find that simple architectural choices can outperform existing models in out-of-distribution generalization.
(arXiv, 2022-06-09)
- HiURE: Hierarchical Exemplar Contrastive Learning for Unsupervised Relation Extraction [60.80849503639896]
Unsupervised relation extraction aims to extract the relationship between entities from natural language sentences without prior information on relational scope or distribution.
We propose a novel contrastive learning framework named HiURE, which has the capability to derive hierarchical signals from relational feature space using cross hierarchy attention.
Experimental results on two public datasets demonstrate the advanced effectiveness and robustness of HiURE on unsupervised relation extraction when compared with state-of-the-art models.
(arXiv, 2022-05-04)
- Multi-Attribute Relation Extraction (MARE) -- Simplifying the Application of Relation Extraction [3.1255943277671894]
Relation extraction, a natural language understanding task, makes innovative and promising new business concepts possible.
Current approaches allow the extraction of relations with a fixed number of entities as attributes.
We introduce multi-attribute relation extraction (MARE) as an assumption-less problem formulation with two approaches.
(arXiv, 2021-11-17)
- SAIS: Supervising and Augmenting Intermediate Steps for Document-Level Relation Extraction [51.27558374091491]
We propose to explicitly teach the model to capture relevant contexts and entity types by supervising and augmenting intermediate steps (SAIS) for relation extraction.
Based on a broad spectrum of carefully designed tasks, our proposed SAIS method not only extracts relations of better quality due to more effective supervision, but also retrieves the corresponding supporting evidence more accurately.
(arXiv, 2021-09-24)
- D-REX: Dialogue Relation Extraction with Explanations [65.3862263565638]
This work focuses on extracting explanations that indicate that a relation exists while using only partially labeled data.
We propose our model-agnostic framework, D-REX, a policy-guided semi-supervised algorithm that explains and ranks relations.
We find that about 90% of the time, human annotators prefer D-REX's explanations over a strong BERT-based joint relation extraction and explanation model.
(arXiv, 2021-09-10)
- Techniques for Jointly Extracting Entities and Relations: A Survey [31.759798455009253]
Traditionally, relation extraction is carried out after entity extraction in a "pipeline" fashion.
It was observed that jointly performing entity and relation extraction is beneficial for both tasks.
(arXiv, 2021-03-10)
- Learning Relation Prototype from Unlabeled Texts for Long-tail Relation Extraction [84.64435075778988]
We propose a general approach to learn relation prototypes from unlabeled texts.
We learn relation prototypes as an implicit factor between entities.
We conduct experiments on two publicly available datasets: New York Times and Google Distant Supervision.
(arXiv, 2020-11-27)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.