COVCOR20 at WNUT-2020 Task 2: An Attempt to Combine Deep Learning and Expert rules
- URL: http://arxiv.org/abs/2009.03191v1
- Date: Mon, 7 Sep 2020 15:54:23 GMT
- Title: COVCOR20 at WNUT-2020 Task 2: An Attempt to Combine Deep Learning and Expert rules
- Authors: Ali Hürriyetoğlu and Ali Safaya and Nelleke Oostdijk and Osman Mutlu and Erdem Yörük
- Abstract summary: In the scope of WNUT-2020 Task 2, we developed three text classification systems: two using deep learning models and one using linguistically informed rules.
While both deep learning systems outperformed the system using the linguistically informed rules, we found that integrating (the output of) the three systems yielded better performance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the scope of WNUT-2020 Task 2, we developed three text classification
systems: two using deep learning models and one using linguistically informed
rules. While both of the deep learning systems outperformed the system using
the linguistically informed rules, we found that integrating (the output of)
the three systems achieved better performance than the standalone performance
of each approach in a cross-validation setting. However, on the test data the
performance of the integration was slightly lower than that of our best
performing deep learning model. These results hardly indicate any progress
toward integrating machine learning and expert-rule-driven systems. We expect
that the release of the annotation manuals and gold labels of the test data
after this workshop will shed light on these perplexing results.
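The paper itself ships no code, but the integration idea can be made concrete. Below is a minimal sketch of combining the outputs of two deep learning classifiers and one rule-based classifier by majority vote over the WNUT-2020 Task 2 labels; the voting scheme and tie-breaking rule are assumptions for illustration, not the authors' actual integration method.

```python
from collections import Counter

def majority_vote(labels):
    """Combine the per-system labels for one tweet by majority vote.

    `labels` holds one prediction per system, e.g. the two deep learning
    systems plus the rule-based one. Ties fall back to the first system,
    an arbitrary choice made for this sketch.
    """
    top_label, top_count = Counter(labels).most_common(1)[0]
    if top_count > len(labels) // 2:
        return top_label
    return labels[0]

# Hypothetical per-tweet outputs: [deep model A, deep model B, rule system].
tweets = [
    ["INFORMATIVE", "INFORMATIVE", "UNINFORMATIVE"],
    ["UNINFORMATIVE", "INFORMATIVE", "UNINFORMATIVE"],
]
print([majority_vote(t) for t in tweets])  # ['INFORMATIVE', 'INFORMATIVE']
```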
Related papers
- Pronunciation Assessment with Multi-modal Large Language Models [10.35401596425946]
We propose a scoring system based on large language models (LLMs).
The speech encoder first maps the learner's speech into contextual features.
The adapter layer then transforms these features to align with the text embedding in latent space.
arXiv Detail & Related papers (2024-07-12T12:16:14Z)
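As a rough illustration of the adapter idea sketched in this summary, the following shows a small projection module that maps speech-encoder features into a text-embedding space. All dimensions, names, and the two-layer design are assumptions; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Projects speech-encoder features into the LLM's text-embedding space.

    All dimensions are illustrative assumptions, not taken from the paper.
    """
    def __init__(self, speech_dim=768, text_dim=4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(speech_dim, text_dim),
            nn.GELU(),
            nn.Linear(text_dim, text_dim),
        )

    def forward(self, speech_feats):      # (batch, frames, speech_dim)
        return self.proj(speech_feats)    # (batch, frames, text_dim)

adapter = Adapter()
dummy = torch.randn(2, 50, 768)  # two utterances, 50 frames of encoder output
print(adapter(dummy).shape)      # torch.Size([2, 50, 4096])
```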
- Semi-adaptive Synergetic Two-way Pseudoinverse Learning System [8.16000189123978]
We propose a semi-adaptive synergetic two-way pseudoinverse learning system.
Each subsystem encompasses forward learning, backward learning, and feature concatenation modules.
The whole system is trained using a non-gradient descent learning algorithm.
arXiv Detail & Related papers (2024-06-27T06:56:46Z)
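The summary gives few details, so the sketch below only illustrates the core of pseudoinverse learning in general: output weights are obtained in closed form via the Moore-Penrose pseudoinverse instead of by gradient descent. The two-way, semi-adaptive structure of the paper's system is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples, 20 features, 3 one-hot classes.
X = rng.normal(size=(100, 20))
Y = np.eye(3)[rng.integers(0, 3, size=100)]

# Random (untrained) hidden projection followed by a nonlinearity.
W_hidden = rng.normal(size=(20, 64))
H = np.tanh(X @ W_hidden)

# Non-gradient learning: output weights come from the Moore-Penrose
# pseudoinverse, i.e. the least-squares solution of H @ W_out = Y.
W_out = np.linalg.pinv(H) @ Y

accuracy = (np.argmax(H @ W_out, axis=1) == np.argmax(Y, axis=1)).mean()
print(f"train accuracy: {accuracy:.2f}")
```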
- One-Shot Learning as Instruction Data Prospector for Large Language Models [108.81681547472138]
Nuggets uses one-shot learning to select high-quality instruction data from extensive datasets.
We show that instruction tuning with the top 1% of examples curated by Nuggets substantially outperforms conventional methods employing the entire dataset.
arXiv Detail & Related papers (2023-12-16T03:33:12Z)
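A minimal sketch of the selection recipe described above: score every candidate instruction example and keep the top 1%. The `score_fn` used here is a stand-in; in the paper the score is derived from one-shot learning, which this sketch does not implement.

```python
def select_top_fraction(examples, score_fn, fraction=0.01):
    """Keep the highest-scoring fraction of instruction examples."""
    ranked = sorted(examples, key=score_fn, reverse=True)
    k = max(1, int(len(ranked) * fraction))
    return ranked[:k]

# `score_fn` is a placeholder; the paper scores examples via one-shot
# learning, while this demo simply prefers longer instructions.
pool = ["short", "a slightly longer instruction", "medium instruction"]
print(select_top_fraction(pool, score_fn=len, fraction=0.34))
# -> ['a slightly longer instruction']
```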
- Hybrid Rule-Neural Coreference Resolution System based on Actor-Critic Learning [53.73316523766183]
Coreference resolution systems need to tackle two main tasks.
One task is to detect all of the potential mentions; the other is to learn the linking of an antecedent for each possible mention.
We propose a hybrid rule-neural coreference resolution system based on actor-critic learning.
arXiv Detail & Related papers (2022-12-20T08:55:47Z)
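To make the two tasks named above concrete, here is a toy sketch: a crude rule detects candidate mentions, and a pluggable scorer links each mention to its best earlier candidate. Both the detection rule and the scorer are invented for illustration and have nothing to do with the paper's actor-critic training.

```python
def detect_mentions(tokens):
    """Toy rule: treat every capitalized token as a candidate mention."""
    return [i for i, tok in enumerate(tokens) if tok[0].isupper()]

def link_antecedents(tokens, mentions, score):
    """For each mention, pick the highest-scoring earlier mention (or None)."""
    links = {}
    for j, m in enumerate(mentions):
        candidates = mentions[:j]
        links[m] = max(candidates, key=lambda a: score(tokens, a, m), default=None)
    return links

# Stand-in scorer: prefer exact string matches, then later (closer) positions.
def score(tokens, a, m):
    return (tokens[a] == tokens[m], a)

tokens = "Mary saw John before Mary left".split()
mentions = detect_mentions(tokens)
print(link_antecedents(tokens, mentions, score))  # {0: None, 2: 0, 4: 0}
```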
- NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks from OCR, to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z)
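A sketch of what consuming such a task stream looks like for a sequential learner: tasks arrive in chronological order and the model is updated and evaluated on each in turn. The `Task` fields and learner interface below are assumptions, not the NEVIS'22 API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    name: str
    train: list
    test: list

def run_stream(tasks: List[Task], fit: Callable, evaluate: Callable):
    """Visit tasks in chronological order, carrying the model forward."""
    model, results = None, {}
    for task in tasks:
        model = fit(model, task.train)            # may transfer from past tasks
        results[task.name] = evaluate(model, task.test)
    return results

# Trivial stand-ins so the loop runs end to end.
fit = lambda model, train: (model or 0) + len(train)
evaluate = lambda model, test: model / (model + len(test))
stream = [Task("ocr", [1, 2], [3]), Task("scenes", [4], [5, 6])]
print(run_stream(stream, fit, evaluate))  # {'ocr': 0.666..., 'scenes': 0.6}
```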
- What are the best systems? New perspectives on NLP Benchmarking [10.27421161397197]
We propose a new procedure to rank systems based on their performance across different tasks.
Motivated by social choice theory, we obtain the final system ordering by aggregating the rankings induced by each task.
We show that our method yields different conclusions on state-of-the-art systems than the mean-aggregation procedure.
arXiv Detail & Related papers (2022-02-08T11:44:20Z)
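The aggregation step invites a concrete example. Below is a Borda-count aggregation of per-task rankings, one classic social-choice rule; whether this matches the paper's exact aggregation procedure cannot be told from the summary alone.

```python
from collections import defaultdict

def borda_aggregate(task_rankings):
    """Aggregate per-task rankings with Borda count.

    Each ranking lists systems best-first; a system earns
    (n_systems - position - 1) points per task.
    """
    scores = defaultdict(int)
    for ranking in task_rankings:
        n = len(ranking)
        for pos, system in enumerate(ranking):
            scores[system] += n - pos - 1
    return sorted(scores, key=scores.get, reverse=True)

rankings = [
    ["BERT", "RoBERTa", "GPT-2"],   # task 1
    ["RoBERTa", "BERT", "GPT-2"],   # task 2
    ["RoBERTa", "GPT-2", "BERT"],   # task 3
]
print(borda_aggregate(rankings))  # ['RoBERTa', 'BERT', 'GPT-2']
```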
- SLIP: Self-supervision meets Language-Image Pre-training [79.53764315471543]
We study whether self-supervised learning can aid in the use of language supervision for visual representation learning.
We introduce SLIP, a multi-task learning framework for combining self-supervised learning and CLIP pre-training.
We find that SLIP enjoys the best of both worlds: better performance than either self-supervision or language supervision alone.
arXiv Detail & Related papers (2021-12-23T18:07:13Z)
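Schematically, the combination described above amounts to training one image encoder with the sum of a CLIP-style image-text contrastive loss and a self-supervised loss over augmented views. The sketch below uses random tensors and an assumed loss weight of 1.0; it shows the shape of the objective, not SLIP's actual implementation.

```python
import torch
import torch.nn.functional as F

def clip_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over matched image/text pairs in a batch."""
    logits = image_emb @ text_emb.T / temperature
    targets = torch.arange(len(logits))
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2

def ssl_loss(view_a, view_b, temperature=0.1):
    """SimCLR-style InfoNCE between two augmented views of the same images."""
    logits = view_a @ view_b.T / temperature
    return F.cross_entropy(logits, torch.arange(len(logits)))

# One encoder would produce img, view_a, view_b; random tensors stand in here.
img, txt = torch.randn(8, 512), torch.randn(8, 512)
view_a, view_b = torch.randn(8, 512), torch.randn(8, 512)
total = clip_loss(img, txt) + 1.0 * ssl_loss(view_a, view_b)  # weight assumed
print(total.item())
```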
- Knowledge-Aware Meta-learning for Low-Resource Text Classification [87.89624590579903]
This paper studies a low-resource text classification problem and bridges the gap between meta-training and meta-testing tasks.
We propose KGML to introduce additional representation for each sentence learned from the extracted sentence-specific knowledge graph.
arXiv Detail & Related papers (2021-09-10T07:20:43Z)
- DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks [88.62288327934499]
We propose a novel augmentation method with language models trained on the linearized labeled sentences.
Our method is applicable to both supervised and semi-supervised settings.
arXiv Detail & Related papers (2020-11-03T07:49:15Z)
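The generation approach rests on linearization: a labeled sentence is flattened into a single token stream so that an ordinary language model can be trained on it and later sample new labeled sentences. The sketch below inlines each non-O BIO tag before its token; the paper's exact linearization format may differ in details.

```python
def linearize(tokens, tags):
    """Flatten a tagged sentence into one token stream for LM training.

    Non-O tags are inserted before their token; O tokens appear alone.
    """
    out = []
    for token, tag in zip(tokens, tags):
        if tag != "O":
            out.append(tag)
        out.append(token)
    return " ".join(out)

tokens = ["John", "lives", "in", "Paris"]
tags = ["B-PER", "O", "O", "B-LOC"]
print(linearize(tokens, tags))
# -> "B-PER John lives in B-LOC Paris"
```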
- Building One-Shot Semi-supervised (BOSS) Learning up to Fully Supervised Performance [0.0]
We show the potential for building one-shot semi-supervised (BOSS) learning on CIFAR-10 and SVHN.
Our method combines class prototype refining, class balancing, and self-training.
Rigorous empirical evaluations provide evidence that labeling large datasets is not necessary for training deep neural networks.
arXiv Detail & Related papers (2020-06-16T17:56:00Z)
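Of the three ingredients listed in this last entry, self-training is the easiest to show in miniature: predict on unlabeled data, keep only high-confidence pseudo-labels, and retrain. The threshold, round count, and scikit-learn interface below are assumptions for illustration, not the BOSS recipe itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(model, X_lab, y_lab, X_unlab, threshold=0.9, rounds=3):
    """Iteratively grow the labeled set with confident pseudo-labels.

    Assumes integer class labels 0..K-1 so that argmax over
    predict_proba columns recovers the label directly.
    """
    for _ in range(rounds):
        model.fit(X_lab, y_lab)
        proba = model.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
        X_unlab = X_unlab[~confident]
    return model

rng = np.random.default_rng(0)
X_lab = np.array([[0.0], [1.0]])          # one labeled example per class
y_lab = np.array([0, 1])
X_unlab = rng.normal(loc=[[0.0]] * 20 + [[1.0]] * 20, scale=0.1)
model = self_train(LogisticRegression(), X_lab, y_lab, X_unlab)
print(model.predict([[0.05], [0.95]]))    # -> [0 1]
```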