SCENE: Self-Labeled Counterfactuals for Extrapolating to Negative Examples
- URL: http://arxiv.org/abs/2305.07984v3
- Date: Sat, 27 Jan 2024 09:16:14 GMT
- Title: SCENE: Self-Labeled Counterfactuals for Extrapolating to Negative Examples
- Authors: Deqing Fu, Ameya Godbole, Robin Jia
- Abstract summary: Self-labeled Counterfactuals for Extrapolating to Negative Examples (SCENE) is an automatic method for synthesizing training data.
With access to only answerable training examples, SCENE can close 69.6% of the performance gap on SQuAD 2.0.
- Score: 23.77077091225583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting negatives (such as non-entailment relationships, unanswerable
questions, and false claims) is an important and challenging aspect of many
natural language understanding tasks. Though manually collecting challenging
negative examples can help models detect them, it is both costly and
domain-specific. In this work, we propose Self-labeled Counterfactuals for
Extrapolating to Negative Examples (SCENE), an automatic method for
synthesizing training data that greatly improves models' ability to detect
challenging negative examples. In contrast with standard data augmentation,
which synthesizes new examples for existing labels, SCENE can synthesize
negative examples zero-shot from only positive ones. Given a positive example,
SCENE perturbs it with a mask infilling model, then determines whether the
resulting example is negative based on a self-training heuristic. With access
to only answerable training examples, SCENE can close 69.6% of the performance
gap on SQuAD 2.0, a dataset where half of the evaluation examples are
unanswerable, compared to a model trained on SQuAD 2.0. Our method also extends
to boolean question answering and recognizing textual entailment, and improves
generalization from SQuAD to ACE-whQA, an out-of-domain extractive QA
benchmark.
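To make the procedure concrete, here is a minimal, hypothetical sketch of a SCENE-style loop for extractive QA: an answerable example is perturbed with a mask-infilling model, and the current QA model's own prediction serves as the self-training heuristic that decides whether the perturbed question is still answerable. The model names, the answer-agreement check, and the threshold tau are illustrative choices, not the paper's exact configuration.

```python
# Hypothetical SCENE-style loop: perturb an answerable QA example with mask
# infilling, then self-label the result with the current model's prediction.
import random
from transformers import pipeline

infiller = pipeline("fill-mask", model="roberta-base")                      # mask-infilling model
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")   # current QA model

def perturb_question(question: str) -> str:
    """Mask one random token of the question and let the infiller rewrite it."""
    tokens = question.split()
    i = random.randrange(len(tokens))
    masked = " ".join(tokens[:i] + [infiller.tokenizer.mask_token] + tokens[i + 1:])
    return infiller(masked, top_k=1)[0]["sequence"]

def self_label(context: str, question: str, gold_answer: str, tau: float = 0.5) -> dict:
    """Illustrative self-training heuristic: if the model's answer to the
    perturbed question no longer matches the original gold answer, treat the
    synthesized example as unanswerable; otherwise keep it as answerable."""
    new_question = perturb_question(question)
    pred = qa(question=new_question, context=context)
    still_answerable = (gold_answer.lower() in pred["answer"].lower()
                        and pred["score"] >= tau)
    return {"context": context, "question": new_question,
            "label": "answerable" if still_answerable else "unanswerable"}
```

Examples self-labeled as unanswerable are what SCENE adds back to training, which is how a model trained only on answerable data learns to handle SQuAD 2.0-style negatives.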
Related papers
- Task-oriented Embedding Counts: Heuristic Clustering-driven Feature Fine-tuning for Whole Slide Image Classification [1.292108130501585]
We propose a clustering-driven feature fine-tuning method (HC-FT) to enhance the performance of multiple instance learning.
The proposed method is evaluated on both CAMELYON16 and BRACS datasets, achieving an AUC of 97.13% and 85.85%, respectively.
arXiv Detail & Related papers (2024-06-02T08:53:45Z)
- Prompt Optimization with EASE? Efficient Ordering-aware Automated Selection of Exemplars [66.823588073584]
Large language models (LLMs) have shown impressive capabilities in real-world applications.
The quality of the exemplars included in the prompt greatly impacts performance.
Existing methods fail to adequately account for the impact of exemplar ordering on performance.
arXiv Detail & Related papers (2024-05-25T08:23:05Z)
- Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
arXiv Detail & Related papers (2023-08-08T16:31:43Z)
- Clustering-Aware Negative Sampling for Unsupervised Sentence Representation [24.15096466098421]
ClusterNS is a novel method that incorporates cluster information into contrastive learning for unsupervised sentence representation learning.
We apply a modified K-means clustering algorithm to supply hard negatives and recognize in-batch false negatives during training.
arXiv Detail & Related papers (2023-05-17T02:06:47Z)
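A rough sketch of the ClusterNS idea described above, under stated assumptions: batch embeddings are clustered, the nearest foreign centroid serves as a hard negative for each anchor, and in-batch negatives that share the anchor's cluster are flagged as likely false negatives. Plain scikit-learn K-means stands in for the paper's modified clustering algorithm, and the masking rule is illustrative.

```python
# Illustrative clustering-aware negative handling: foreign centroids become
# hard negatives; same-cluster in-batch pairs are flagged as false negatives.
import torch
from sklearn.cluster import KMeans

def cluster_negatives(emb: torch.Tensor, k: int = 8):
    """emb: (batch, dim) sentence embeddings from the current encoder."""
    emb_d = emb.detach()
    km = KMeans(n_clusters=k, n_init=10).fit(emb_d.cpu().numpy())
    labels = torch.as_tensor(km.labels_, dtype=torch.long, device=emb.device)
    centroids = torch.as_tensor(km.cluster_centers_, dtype=emb.dtype, device=emb.device)

    # Hard negative per anchor: the nearest centroid other than its own.
    dists = torch.cdist(emb_d, centroids)                    # (batch, k)
    dists.scatter_(1, labels.unsqueeze(1), float("inf"))     # exclude own cluster
    hard_negs = centroids[dists.argmin(dim=1)]               # (batch, dim)

    # In-batch negatives in the anchor's cluster are likely false negatives,
    # so they can be masked or down-weighted in the contrastive loss.
    false_neg_mask = labels.unsqueeze(0) == labels.unsqueeze(1)
    return hard_negs, false_neg_mask
```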
- Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
arXiv Detail & Related papers (2022-08-07T05:44:13Z)
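The step-wise negative sampling for sequential recommendation described above can be sketched as follows, assuming the SR model exposes per-step item logits; sampling negatives in proportion to the model's current preference scores is one plausible reading of the summary, not the paper's exact sampler.

```python
# Illustrative score-aware negative sampling for sequential recommendation:
# at every time step, sample a negative item in proportion to the current
# model's predicted preference, excluding the true next item.
import torch

def sample_negatives(logits: torch.Tensor, positives: torch.Tensor) -> torch.Tensor:
    """logits: (batch, seq_len, num_items) per-step scores from the SR model.
    positives: (batch, seq_len) ids of the ground-truth next items."""
    probs = torch.softmax(logits.detach(), dim=-1)
    probs.scatter_(-1, positives.unsqueeze(-1), 0.0)        # never sample the positive
    probs = probs / probs.sum(dim=-1, keepdim=True)
    flat = probs.view(-1, probs.size(-1))
    negatives = torch.multinomial(flat, num_samples=1).view(positives.shape)
    return negatives                                        # (batch, seq_len) hard negatives
```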
- Exploring the Impact of Negative Samples of Contrastive Learning: A Case Study of Sentence Embedding [14.295787044482136]
We present a momentum contrastive learning model with a negative sample queue for sentence embedding, namely MoCoSE.
We define a maximum traceable distance metric, through which we learn to what extent text contrastive learning benefits from the historical information of negative samples.
Our experiments find that the best results are obtained when the maximum traceable distance is at a certain range, demonstrating that there is an optimal range of historical information for a negative sample queue.
arXiv Detail & Related papers (2022-02-26T08:29:25Z)
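A minimal sketch of a FIFO negative-sample queue in the spirit of MoCoSE, assuming negatives come from a momentum encoder; measuring the traceable distance as the age of the oldest queue entry in update steps is a simplification of the paper's metric.

```python
# Simplified negative-sample queue: momentum-encoder outputs are enqueued,
# old entries age out, and the queue's maximum age bounds how much history
# the contrastive loss can draw negatives from.
import torch
import torch.nn.functional as F

class NegativeQueue:
    def __init__(self, dim: int, size: int = 4096):
        self.queue = F.normalize(torch.randn(size, dim), dim=1)
        self.ages = torch.zeros(size, dtype=torch.long)      # update steps since enqueued
        self.ptr = 0

    @torch.no_grad()
    def enqueue(self, keys: torch.Tensor) -> None:
        """keys: (batch, dim) outputs of the momentum encoder for this step."""
        self.ages += 1
        n = keys.size(0)
        idx = torch.arange(self.ptr, self.ptr + n) % self.queue.size(0)
        self.queue[idx] = F.normalize(keys, dim=1)
        self.ages[idx] = 0
        self.ptr = int((self.ptr + n) % self.queue.size(0))

    def max_traceable_distance(self) -> int:
        """Age (in update steps) of the oldest negative still in the queue."""
        return int(self.ages.max())
```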
- Relation-aware Graph Attention Model With Adaptive Self-adversarial Training [29.240686573485718]
This paper describes an end-to-end solution for the relationship prediction task in heterogeneous, multi-relational graphs.
We particularly address two building blocks in the pipeline, namely heterogeneous graph representation learning and negative sampling.
We introduce a parameter-free negative sampling technique -- adaptive self-adversarial (ASA) negative sampling.
arXiv Detail & Related papers (2021-02-14T16:11:56Z)
- Contrastive Learning with Adversarial Perturbations for Conditional Text Generation [49.055659008469284]
We propose a principled method to generate positive and negative samples for contrastive learning of seq2seq models.
Specifically, we generate negative examples by adding small perturbations to the input sequence to minimize its conditional likelihood.
We empirically show that our proposed method significantly improves the generalization of seq2seq models on three text generation tasks.
arXiv Detail & Related papers (2020-12-14T06:20:27Z)
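A hedged sketch of the negative-generation step described above: the source embeddings are nudged a small step along the gradient of the conditional loss, so the gold target becomes less likely under the perturbed input, and the result can serve as a hard negative for contrastive learning. The Hugging-Face-style inputs_embeds/labels call and the step size epsilon are assumptions.

```python
# Illustrative likelihood-minimizing perturbation for a seq2seq model:
# moving along the gradient of the loss w.r.t. the input embeddings makes
# the gold target *less* likely, yielding a hard negative input.
import torch

def negative_perturbation(model, src_embeds: torch.Tensor, labels: torch.Tensor,
                          epsilon: float = 1e-2) -> torch.Tensor:
    """src_embeds: (batch, src_len, dim) input embeddings with requires_grad=True.
    labels: (batch, tgt_len) gold target token ids."""
    out = model(inputs_embeds=src_embeds, labels=labels)      # returns .loss (NLL of target)
    grad, = torch.autograd.grad(out.loss, src_embeds)
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    return (src_embeds + delta).detach()                      # perturbed, likelihood-reduced input
```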
- Contrastive Learning with Hard Negative Samples [80.12117639845678]
We develop a new family of unsupervised sampling methods for selecting hard negative samples.
A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible.
The proposed method improves downstream performance across multiple modalities, requires only few additional lines of code to implement, and introduces no computational overhead.
arXiv Detail & Related papers (2020-10-09T14:18:53Z)
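One simplified reading of the hard-negative sampling described above is an InfoNCE loss whose in-batch negatives are importance-weighted by their similarity to the anchor, so harder negatives dominate the denominator; the concentration parameter beta and the omission of the original method's debiasing correction are simplifications for this sketch.

```python
# Illustrative hard-negative reweighting for InfoNCE: negatives that are
# more similar to the anchor receive exponentially larger weight.
import torch
import torch.nn.functional as F

def hard_negative_infonce(z1: torch.Tensor, z2: torch.Tensor,
                          temperature: float = 0.1, beta: float = 1.0) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature                          # (batch, batch) similarities
    pos = sim.diag()                                         # positive pairs on the diagonal
    neg_mask = ~torch.eye(len(sim), dtype=torch.bool, device=sim.device)
    # Importance weights: more similar (harder) negatives count more.
    weights = torch.softmax(beta * sim.masked_fill(~neg_mask, float("-inf")), dim=1)
    neg_term = (weights * sim.exp() * neg_mask).sum(dim=1) * (len(sim) - 1)
    return (-pos + torch.log(pos.exp() + neg_term)).mean()
```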
- Simplify and Robustify Negative Sampling for Implicit Collaborative Filtering [42.832851785261894]
In this paper, we first provide a novel understanding of negative instances by empirically observing that only a few instances are potentially important for model learning.
We then tackle the untouched false negative problem by favouring high-variance samples stored in memory.
Empirical results on two synthetic datasets and three real-world datasets demonstrate both the robustness and the superiority of our negative sampling method.
arXiv Detail & Related papers (2020-09-07T19:08:26Z)
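A hedged sketch of the variance-favouring selection described above: the model's scores for candidate negative items are tracked in a small memory across recent steps, and candidates whose scores fluctuate the most are preferred, following the summary's point that favouring high-variance samples helps sidestep false negatives. The memory layout and selection rule are assumptions.

```python
# Illustrative variance-favouring negative selection: keep recent model
# scores for items in memory and pick candidates whose scores vary most.
import torch

class VarianceNegativeSelector:
    def __init__(self, num_items: int, history: int = 5):
        self.scores = torch.zeros(history, num_items)    # rolling score memory
        self.step = 0

    @torch.no_grad()
    def update(self, item_scores: torch.Tensor) -> None:
        """item_scores: (num_items,) current model scores for all items."""
        self.scores[self.step % self.scores.size(0)] = item_scores
        self.step += 1

    def select(self, candidates: torch.Tensor, k: int = 1) -> torch.Tensor:
        """candidates: (num_candidates,) item ids sampled uniformly beforehand."""
        variance = self.scores[:, candidates].var(dim=0)  # score variance per candidate
        return candidates[variance.topk(k).indices]
```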
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
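As a rough illustration of combining counterfactual pairs with gradient supervision, the auxiliary term below encourages the gradient of the task loss with respect to the input to point along the direction that separates an example from its counterfactual; the cosine-alignment formulation and the simple classifier setup are assumptions for the sketch, not the paper's exact objective.

```python
# Illustrative gradient-supervision auxiliary loss: align the input-gradient
# of the task loss with the vector from an example to its counterfactual
# (the minimally-different example carrying a different label).
import torch
import torch.nn.functional as F

def gradient_supervision_loss(model, x: torch.Tensor, x_cf: torch.Tensor,
                              y: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
    """x, x_cf: (batch, dim) an example and its counterfactual; y: labels for x."""
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(task_loss, x, create_graph=True)
    direction = x_cf - x                                    # where the label should flip
    alignment = F.cosine_similarity(grad, direction, dim=1)
    return task_loss + lam * (1.0 - alignment).mean()       # auxiliary alignment term
```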
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.