Adversarial Learning for Supervised and Semi-supervised Relation Extraction in Biomedical Literature
- URL: http://arxiv.org/abs/2005.04277v2
- Date: Fri, 25 Sep 2020 15:21:50 GMT
- Title: Adversarial Learning for Supervised and Semi-supervised Relation Extraction in Biomedical Literature
- Authors: Peng Su and K. Vijay-Shanker
- Abstract summary: Adversarial training is a technique for improving model performance by incorporating adversarial examples into the training process.
In this paper, we investigate adversarial training with multiple adversarial examples to benefit the relation extraction task.
We also apply the adversarial training technique in semi-supervised scenarios to utilize unlabeled data.
- Score: 2.8881198461098894
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial training is a technique for improving model performance by
incorporating adversarial examples into the training process. In this paper, we
investigate adversarial training with multiple adversarial examples to benefit
the relation extraction task. We also apply the adversarial training technique
in semi-supervised scenarios to utilize unlabeled data. The evaluation results
on the protein-protein interaction and protein subcellular localization tasks
show that adversarial training improves the supervised model and is also
effective at incorporating unlabeled data in the semi-supervised training case.
In addition, our method achieves state-of-the-art performance on two benchmark
datasets.
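Neither the abstract nor this page includes code, so the following is a minimal PyTorch sketch of the first ingredient: supervised adversarial training on input embeddings with multiple adversarial examples per batch. The gradient-direction perturbation, the k graduated step sizes, and the `model` interface (embeddings in, logits out) are illustrative assumptions, not the authors' exact construction.

```python
import torch
import torch.nn.functional as F

def multi_adversarial_loss(model, embeddings, labels, epsilon=1.0, k=3):
    """Supervised adversarial training with k adversarial examples.

    Perturbs the input embeddings along the loss gradient at k
    different step sizes (one simple way to obtain multiple
    adversarial examples) and averages the adversarial losses
    together with the clean loss.
    """
    embeddings = embeddings.detach().requires_grad_(True)
    clean_loss = F.cross_entropy(model(embeddings), labels)
    # retain_graph so the clean loss can still be backpropagated later.
    (grad,) = torch.autograd.grad(clean_loss, embeddings, retain_graph=True)
    direction = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)

    adv_losses = []
    for i in range(1, k + 1):
        perturbed = embeddings + (epsilon * i / k) * direction
        adv_losses.append(F.cross_entropy(model(perturbed), labels))
    return clean_loss + sum(adv_losses) / k
```

For the semi-supervised side, a standard way to let adversarial perturbations exploit unlabeled data is virtual adversarial training (VAT), which replaces the label-dependent loss with a KL divergence between predictions on clean and perturbed inputs. The sketch below is again an assumption about the mechanism rather than the paper's verbatim procedure; `xi`, `epsilon`, and the single power iteration are conventional VAT defaults.

```python
def virtual_adversarial_loss(model, embeddings, xi=1e-6, epsilon=1.0, power_iters=1):
    """VAT term for unlabeled batches; no labels are required."""
    embeddings = embeddings.detach()
    with torch.no_grad():
        clean_logp = F.log_softmax(model(embeddings), dim=-1)

    # Power iteration approximates the perturbation direction the
    # model is most sensitive to around the current input.
    d = torch.randn_like(embeddings)
    for _ in range(power_iters):
        d = xi * d / (d.norm(dim=-1, keepdim=True) + 1e-8)
        d.requires_grad_(True)
        adv_logp = F.log_softmax(model(embeddings + d), dim=-1)
        kl = F.kl_div(adv_logp, clean_logp, reduction="batchmean", log_target=True)
        (d,) = torch.autograd.grad(kl, d)

    r_adv = epsilon * d / (d.norm(dim=-1, keepdim=True) + 1e-8)
    adv_logp = F.log_softmax(model(embeddings + r_adv), dim=-1)
    return F.kl_div(adv_logp, clean_logp, reduction="batchmean", log_target=True)
```

A semi-supervised training step would then combine the two terms, e.g. `loss = multi_adversarial_loss(model, emb_labeled, y) + alpha * virtual_adversarial_loss(model, emb_unlabeled)`, where `alpha` is a hypothetical weight trading off the labeled and unlabeled contributions.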
Related papers
- Distilled Datamodel with Reverse Gradient Matching [74.75248610868685]
We introduce an efficient framework for assessing data impact, comprising offline training and online evaluation stages.
Our proposed method evaluates model behavior comparably to direct retraining while significantly speeding up the process.
arXiv Detail & Related papers (2024-04-22T09:16:14Z) - SmurfCat at SemEval-2024 Task 6: Leveraging Synthetic Data for Hallucination Detection [51.99159169107426]
We present our novel systems developed for the SemEval-2024 hallucination detection task.
Our investigation spans a range of strategies to compare model predictions with reference standards.
We introduce three distinct methods that exhibit strong performance metrics.
arXiv Detail & Related papers (2024-04-09T09:03:44Z) - Tools for Verifying Neural Models' Training Data [29.322899317216407]
"Proof-of-Training-Data" allows a model trainer to convince a Verifier of the training data that produced a set of model weights.
We show experimentally that our verification procedures can catch a wide variety of attacks.
arXiv Detail & Related papers (2023-07-02T23:27:00Z) - Training Data Attribution for Diffusion Models [1.1733780065300188]
We propose a novel solution that reveals how training data influence the output of diffusion models through the use of ensembles.
In our approach, individual models in an encoded ensemble are trained on carefully engineered splits of the overall training data to permit the identification of influential training examples.
The resulting model ensembles enable efficient ablation of training data influence, allowing us to assess the impact of training data on model outputs.
arXiv Detail & Related papers (2023-06-03T18:36:12Z) - On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training [70.82725772926949]
Adversarial training is a popular method to robustify models against adversarial attacks.
In this work, we investigate this phenomenon from the perspective of training instances.
We show that the decay in generalization performance of adversarial training is a result of fitting hard adversarial instances.
arXiv Detail & Related papers (2021-12-14T12:19:24Z) - Improving Gradient-based Adversarial Training for Text Classification by
Contrastive Learning and Auto-Encoder [18.375585982984845]
We focus on enhancing the model's ability to defend against gradient-based adversarial attacks during training.
We propose two novel adversarial training approaches: CARL and RAR.
Experiments show that the proposed two approaches outperform strong baselines on various text classification datasets.
arXiv Detail & Related papers (2021-09-14T09:08:58Z) - Delving into Data: Effectively Substitute Training for Black-box Attack [84.85798059317963]
We propose substitute training from a novel perspective that focuses on designing the distribution of data used in the knowledge-stealing process.
Combining the two proposed modules further boosts the consistency between the substitute model and the target model, which greatly improves the effectiveness of the adversarial attack.
arXiv Detail & Related papers (2021-04-26T07:26:29Z) - Self-supervised Co-training for Video Representation Learning [103.69904379356413]
We investigate the benefit of adding semantic-class positives to instance-based InfoNCE (Info Noise Contrastive Estimation) training.
We propose a novel self-supervised co-training scheme to improve the popular InfoNCE loss.
We evaluate the quality of the learnt representation on two different downstream tasks: action recognition and video retrieval.
arXiv Detail & Related papers (2020-10-19T17:59:01Z) - Improved Noise and Attack Robustness for Semantic Segmentation by Using
- Improved Noise and Attack Robustness for Semantic Segmentation by Multi-Task Training with Self-Supervised Depth Estimation [39.99513327031499]
We propose to improve robustness through multi-task training, which extends supervised semantic segmentation with self-supervised monocular depth estimation on unlabeled videos.
We show the effectiveness of our method on the Cityscapes dataset, where our multi-task training approach consistently outperforms the single-task semantic segmentation baseline.
arXiv Detail & Related papers (2020-04-23T11:03:56Z) - Adversarial Training for Aspect-Based Sentiment Analysis with BERT [3.5493798890908104]
We propose a novel architecture called BERT Adversarial Training (BAT) to utilize adversarial training in aspect-based sentiment analysis (ABSA).
The proposed model outperforms post-trained BERT in both tasks.
To the best of our knowledge, this is the first study on the application of adversarial training in ABSA.
arXiv Detail & Related papers (2020-01-30T13:53:58Z) - Efficient Adversarial Training with Transferable Adversarial Examples [58.62766224452761]
We show that there is high transferability between models from neighboring epochs in the same training process.
We propose a novel method, Adversarial Training with Transferable Adversarial Examples (ATTA) that can enhance the robustness of trained models.
arXiv Detail & Related papers (2019-12-27T03:05:05Z)