Plug and Play Counterfactual Text Generation for Model Robustness
- URL: http://arxiv.org/abs/2206.10429v1
- Date: Tue, 21 Jun 2022 14:25:21 GMT
- Title: Plug and Play Counterfactual Text Generation for Model Robustness
- Authors: Nishtha Madaan, Srikanta Bedathur, Diptikalyan Saha
- Abstract summary: We introduce CASPer, a plug-and-play counterfactual generation framework.
We show that CASPer effectively generates counterfactual text that follows the steering provided by an attribute model.
We also show that the generated counterfactuals can be used to augment the training data, thereby repairing the model under test and making it more robust.
- Score: 12.517365153658028
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating counterfactual test-cases is an important backbone for testing NLP
models and making them as robust and reliable as traditional software. In
generating the test-cases, a desired property is the ability to control the
test-case generation in a flexible manner to test for a large variety of
failure cases and to explain and repair them in a targeted manner. In this
direction, significant progress has been made in prior work by manually
writing rules for generating controlled counterfactuals. However, this approach
requires heavy manual supervision and lacks the flexibility to easily introduce
new controls. Motivated by the impressive flexibility of the plug-and-play
approach of PPLM, we propose bringing the plug-and-play framework to the
counterfactual test-case generation task. We introduce CASPer, a plug-and-play
counterfactual generation framework to generate test cases that satisfy goal
attributes on demand. Our plug-and-play model can steer the test case
generation process given any attribute model without requiring
attribute-specific training of the model. In experiments, we show that CASPer
effectively generates counterfactual text that follows the steering provided by
an attribute model while remaining fluent and diverse and preserving the original
content. We also show that the generated counterfactuals from CASPer can be
used to augment the training data, thereby repairing the model under test and
making it more robust.
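To make the plug-and-play steering concrete, below is a minimal illustrative sketch of PPLM-style attribute steering, which the abstract cites as the starting point for CASPer: before emitting each token, the language model's hidden state is nudged along the gradient of a plugged-in attribute model, so generation drifts toward the target attribute without attribute-specific training of either model. The toy GRU language model, the linear attribute head, and all hyperparameters here are illustrative placeholders, not CASPer's actual architecture or settings.

```python
# Hedged sketch of PPLM-style plug-and-play steering (toy models, not CASPer's).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, HIDDEN = 1000, 64

class ToyLM(nn.Module):
    """Toy autoregressive LM: embedding -> GRU cell -> vocabulary logits."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HIDDEN)
        self.cell = nn.GRUCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def step(self, token, h):
        h = self.cell(self.emb(token), h)
        return self.out(h), h

class AttributeModel(nn.Module):
    """Plug-in attribute model: scores p(attribute | hidden state)."""
    def __init__(self):
        super().__init__()
        self.head = nn.Linear(HIDDEN, 2)

    def forward(self, h):
        return self.head(h)

def steered_generate(lm, attr_model, prompt, target_attr=1,
                     max_new_tokens=20, step_size=0.05, n_inner=3):
    """Before emitting each token, perturb the hidden state a few gradient
    steps toward higher attribute probability, then sample from the
    perturbed state (PPLM-style steering)."""
    h = torch.zeros(1, HIDDEN)
    tokens = list(prompt)
    for t in prompt[:-1]:                         # consume all but the last prompt token
        _, h = lm.step(torch.tensor([t]), h)
    for _ in range(max_new_tokens):
        h_pert = h.detach().clone().requires_grad_(True)
        for _ in range(n_inner):                  # lower attribute cross-entropy,
            loss = F.cross_entropy(attr_model(h_pert),   # i.e. raise p(target_attr | h)
                                   torch.tensor([target_attr]))
            grad, = torch.autograd.grad(loss, h_pert)
            h_pert = (h_pert - step_size * grad).detach().requires_grad_(True)
        logits, h = lm.step(torch.tensor([tokens[-1]]), h_pert)
        probs = F.softmax(logits.detach(), dim=-1)
        tokens.append(torch.multinomial(probs, 1).item())
    return tokens

lm, attr_model = ToyLM(), AttributeModel()
print(steered_generate(lm, attr_model, prompt=[5, 17, 42]))
```

In a CASPer-like setting, the prompt would be an original test input and the steering would additionally be balanced against content-preservation and fluency objectives, which this sketch omits.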
Related papers
- CAR: Controllable Autoregressive Modeling for Visual Generation [100.33455832783416]
Controllable AutoRegressive Modeling (CAR) is a novel, plug-and-play framework that integrates conditional control into multi-scale latent variable modeling.
CAR progressively refines and captures control representations, which are injected into each autoregressive step of the pre-trained model to guide the generation process.
Our approach demonstrates excellent controllability across various types of conditions and delivers higher image quality compared to previous methods.
arXiv Detail & Related papers (2024-10-07T00:55:42Z) - SYNTHEVAL: Hybrid Behavioral Testing of NLP Models with Synthetic CheckLists [59.08999823652293]
We propose SYNTHEVAL to generate a wide range of test types for a comprehensive evaluation of NLP models.
In the last stage, human experts investigate the challenging examples, manually design templates, and identify the types of failures the task-specific models consistently exhibit.
We apply SYNTHEVAL to two classification tasks, sentiment analysis and toxic language detection, and show that our framework is effective in identifying weaknesses of strong models on these tasks.
arXiv Detail & Related papers (2024-08-30T17:41:30Z) - Automatic Generation of Behavioral Test Cases For Natural Language Processing Using Clustering and Prompting [6.938766764201549]
This paper introduces an automated approach to develop test cases by exploiting the power of large language models and statistical techniques.
We analyze the behavioral test profiles across four different classification algorithms and discuss the limitations and strengths of those models.
arXiv Detail & Related papers (2024-07-31T21:12:21Z) - Learn from the Learnt: Source-Free Active Domain Adaptation via Contrastive Sampling and Visual Persistence [60.37934652213881]
Domain Adaptation (DA) facilitates knowledge transfer from a source domain to a related target domain.
This paper investigates a practical DA paradigm, namely Source data-Free Active Domain Adaptation (SFADA), where source data becomes inaccessible during adaptation.
We present Learn From the Learnt (LFTL), a novel paradigm for SFADA that leverages the knowledge learnt from the source-pretrained model and actively iterated models without extra overhead.
arXiv Detail & Related papers (2024-07-26T17:51:58Z) - Test-Time Model Adaptation with Only Forward Passes [68.11784295706995]
Test-time adaptation has proven effective in adapting a given trained model to unseen test samples with potential distribution shifts.
We propose a test-time Forward-Optimization Adaptation (FOA) method.
FOA runs on a quantized 8-bit ViT, outperforms gradient-based TENT on a full-precision 32-bit ViT, and achieves up to a 24-fold memory reduction on ImageNet-C.
arXiv Detail & Related papers (2024-04-02T05:34:33Z) - Test Generation Strategies for Building Failure Models and Explaining
Spurious Failures [4.995172162560306]
Test inputs fail not only when the system under test is faulty but also when the inputs are invalid or unrealistic.
We propose to build failure models for inferring interpretable rules on test inputs that cause spurious failures.
We show that our proposed surrogate-assisted approach generates failure models with an average accuracy of 83%.
arXiv Detail & Related papers (2023-12-09T18:36:15Z) - Focused Prefix Tuning for Controllable Text Generation [19.88484696133778]
We propose focused prefix tuning (FPT) to mitigate this problem and to enable the control to focus on the desired attribute.
Experimental results show that FPT can achieve better control accuracy and text fluency than baseline models in single-attribute control tasks.
arXiv Detail & Related papers (2023-06-01T06:00:43Z) - Learning to Increase the Power of Conditional Randomization Tests [8.883733362171032]
The model-X conditional randomization test is a generic framework for conditional independence testing.
We introduce novel model-fitting schemes that are designed to explicitly improve the power of model-X tests.
arXiv Detail & Related papers (2022-07-03T12:29:25Z) - TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision [70.05605071885914]
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z) - MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test-time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test-time adaptation; however, each introduces additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
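The summary above leaves the mechanism implicit; MEMO's published recipe is to minimize the entropy of the model's marginal prediction over augmented copies of a single test input before predicting on it. Below is a minimal sketch of that idea under toy assumptions: a small MLP classifier and Gaussian-noise "augmentations" stand in for the real model and augmentation pipeline.

```python
# Hedged sketch of MEMO-style test-time adaptation (toy model and augmentations).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

def memo_adapt_and_predict(model, x, n_aug=8, lr=1e-3, steps=1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        # Stand-in augmentations: noisy copies of the single test input.
        augs = x.unsqueeze(0) + 0.1 * torch.randn(n_aug, *x.shape)
        probs = F.softmax(model(augs), dim=-1)
        marginal = probs.mean(dim=0)              # marginal prediction over augmentations
        entropy = -(marginal * marginal.clamp_min(1e-12).log()).sum()
        opt.zero_grad()
        entropy.backward()
        opt.step()                                # adapt parameters on this one test point
    with torch.no_grad():
        return model(x.unsqueeze(0)).argmax(dim=-1).item()

print(memo_adapt_and_predict(model, torch.randn(16)))
```

The requirement stated in the summary, that the model be probabilistic and adaptable, corresponds here to the softmax output and the gradient step on the model parameters.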
arXiv Detail & Related papers (2021-10-18T17:55:11Z) - CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial
Text Generation [20.27052525082402]
We present a Controlled Adversarial Text Generation (CAT-Gen) model that generates adversarial texts through controllable attributes.
Experiments on real-world NLP datasets demonstrate that our method can generate more diverse and fluent adversarial texts.
arXiv Detail & Related papers (2020-10-05T21:07:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.