RGCL at SemEval-2020 Task 6: Neural Approaches to Definition Extraction
- URL: http://arxiv.org/abs/2010.06281v1
- Date: Tue, 13 Oct 2020 10:48:15 GMT
- Title: RGCL at SemEval-2020 Task 6: Neural Approaches to Definition Extraction
- Authors: Tharindu Ranasinghe, Alistair Plum, Constantin Orasan, Ruslan Mitkov
- Abstract summary: This paper presents the RGCL team submission to SemEval 2020 Task 6: DeftEval, subtasks 1 and 2.
The system classifies definitions at the sentence and token levels.
- Score: 12.815346389235748
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents the RGCL team submission to SemEval 2020 Task 6:
DeftEval, subtasks 1 and 2. The system classifies definitions at the sentence
and token levels. It utilises state-of-the-art neural network architectures,
which have some task-specific adaptations, including an automatically extended
training set. Overall, the approach achieves acceptable evaluation scores,
while maintaining flexibility in architecture selection.
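Subtask 1 described in the abstract is a binary sentence-level classification problem (definition vs. non-definition). As an illustration of the task format only, and emphatically not of the RGCL neural system, a minimal cue-phrase baseline might look like the sketch below; the cue list and function name are hypothetical:

```python
# Toy baseline for sentence-level definition classification (DeftEval Subtask 1).
# This is an illustrative sketch, not the neural approach from the paper:
# it simply flags sentences containing common definitional cue phrases.

DEFINITION_CUES = ("is defined as", "refers to", "is a", "means")

def is_definition(sentence: str) -> bool:
    """Return True if the sentence contains a definitional cue phrase."""
    lowered = sentence.lower()
    return any(cue in lowered for cue in DEFINITION_CUES)

sentences = [
    "A neural network is a model composed of layered units.",
    "We ran the experiment three times.",
]
labels = [is_definition(s) for s in sentences]  # [True, False]
```

A neural system such as the one in the paper would replace the cue-phrase rule with a learned sentence encoder, but the input/output contract (sentence in, binary label out) stays the same.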
Related papers
- OWL2Vec4OA: Tailoring Knowledge Graph Embeddings for Ontology Alignment [14.955861200588664]
This paper proposes OWL2Vec4OA, an extension of the embedding system OWL2Vec*.
We present the theoretical foundations, implementation details, and experimental evaluation of our proposed extension.
arXiv Detail & Related papers (2024-08-12T17:24:19Z)
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- Verifiable Reinforcement Learning Systems via Compositionality [19.316487056356298]
We propose a framework for verifiable and compositional reinforcement learning (RL) in which a collection of RL subsystems are composed to achieve an overall task.
We present theoretical results guaranteeing that if each subsystem learns a policy satisfying its subtask specification, then its composition is guaranteed to satisfy the overall task specification.
We present a method, formulated as the problem of finding an optimal set of parameters in the high-level model, to automatically update the subtask specifications to account for the observed shortcomings.
arXiv Detail & Related papers (2023-09-09T17:11:44Z)
- PDSketch: Integrated Planning Domain Programming and Learning [86.07442931141637]
We present a new domain definition language, named PDSketch.
It allows users to flexibly define high-level structures in the transition models.
Details of the transition model will be filled in by trainable neural networks.
arXiv Detail & Related papers (2023-03-09T18:54:12Z)
- Discrete Factorial Representations as an Abstraction for Goal Conditioned Reinforcement Learning [99.38163119531745]
We show that applying a discretizing bottleneck can improve performance in goal-conditioned RL setups.
Experiments show improved expected return on out-of-distribution goals, while still allowing goals with expressive structure to be specified.
arXiv Detail & Related papers (2022-11-01T03:31:43Z)
- Combining Modular Skills in Multitask Learning [149.8001096811708]
A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks.
In this work, we assume each task is associated with a subset of latent discrete skills from a (potentially small) inventory.
We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning.
arXiv Detail & Related papers (2022-02-28T16:07:19Z)
- Verifiable and Compositional Reinforcement Learning Systems [19.614913673879474]
The framework consists of a high-level model, represented as a parametric Markov decision process (pMDP).
By defining interfaces between the sub-systems, the framework enables automatic decompositions of task specifications.
We present a method, formulated as the problem of finding an optimal set of parameters in the pMDP, to automatically update the sub-task specifications.
arXiv Detail & Related papers (2021-06-07T17:05:14Z)
- JokeMeter at SemEval-2020 Task 7: Convolutional humor [6.853018135783218]
This paper describes our system that was designed for Humor evaluation within the SemEval-2020 Task 7.
The system is based on convolutional neural network architecture.
arXiv Detail & Related papers (2020-08-25T14:27:58Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
- Yseop at SemEval-2020 Task 5: Cascaded BERT Language Model for Counterfactual Statement Analysis [0.0]
We use a BERT base model for the classification task and build a hybrid BERT Multi-Layer Perceptron system to handle the sequence identification task.
Our experiments show that introducing syntactic and semantic features does little to improve the system on the classification task. However, using these features as cascaded linear inputs to fine-tune the model's sequence-delimiting ability ensures it outperforms other similar-purpose complex systems, such as BiLSTM-CRF, on the second task.
arXiv Detail & Related papers (2020-05-18T08:19:18Z)
- Progressive Graph Convolutional Networks for Semi-Supervised Node Classification [97.14064057840089]
Graph convolutional networks have been successful in addressing graph-based tasks such as semi-supervised node classification.
We propose a method to automatically build compact and task-specific graph convolutional networks.
arXiv Detail & Related papers (2020-03-27T08:32:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.