AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses
- URL: http://arxiv.org/abs/2001.05467v1
- Date: Wed, 15 Jan 2020 18:32:06 GMT
- Title: AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses
- Authors: Tong Niu, Mohit Bansal
- Abstract summary: We build dialogue models that are dynamically aware of what utterances or tokens are dull without any feature-engineering.
The first model, MinAvgOut, directly maximizes the diversity score through the output distributions of each batch.
The second model, Label Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled by the diversity score to control the diversity level.
The third model, RL, adopts Reinforcement Learning and treats the diversity score as a reward signal.
- Score: 97.50616524350123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many sequence-to-sequence dialogue models tend to generate safe,
uninformative responses. There have been various useful efforts to eliminate
them. However, these approaches either improve decoding algorithms
during inference, rely on hand-crafted features, or employ complex models. In
our work, we build dialogue models that are dynamically aware of what
utterances or tokens are dull without any feature-engineering. Specifically, we
start with a simple yet effective automatic metric, AvgOut, which calculates
the average output probability distribution of all time steps on the decoder
side during training. This metric directly estimates which tokens are more
likely to be generated, thus making it a faithful evaluation of the model
diversity (i.e., for diverse models, the token probabilities should be more
evenly distributed rather than peaked at a few dull tokens). We then leverage
this novel metric to propose three models that promote diversity without losing
relevance. The first model, MinAvgOut, directly maximizes the diversity score
through the output distributions of each batch; the second model, Label
Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled
by the diversity score to control the diversity level; the third model, RL,
adopts Reinforcement Learning and treats the diversity score as a reward
signal. Moreover, we experiment with a hybrid model by combining the loss terms
of MinAvgOut and RL. All four models outperform their base LSTM-RNN model on
both diversity and relevance by a large margin, and are comparable to or better
than competitive baselines (also verified via human evaluation). Moreover, our
approaches are orthogonal to the base model, making them applicable as an
add-on to other emerging better dialogue models in the future.
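The AvgOut metric itself is simple enough to sketch. Below is a minimal, self-contained illustration: it averages the decoder's per-step output distributions over all time steps and scores diversity by how evenly the averaged probability mass is spread. The entropy-based score and the function names are assumptions for illustration only, not the authors' implementation — the paper defines its own diversity score.

```python
import math

def avg_out(step_dists):
    """AvgOut: average the decoder's output probability distributions
    over all time steps (each row is one step's distribution over the vocab)."""
    steps = len(step_dists)
    vocab = len(step_dists[0])
    return [sum(dist[v] for dist in step_dists) / steps for v in range(vocab)]

def diversity_score(avg_dist, eps=1e-12):
    """Entropy of the averaged distribution: higher when mass is spread
    evenly, lower when it peaks on a few dull tokens. (Entropy is an
    illustrative stand-in for the paper's actual diversity score.)"""
    return -sum(p * math.log(p + eps) for p in avg_dist)

# A "dull" model concentrates mass on the same few tokens at every step ...
dull = [[0.90, 0.05, 0.05]] * 4
# ... while a diverse model spreads mass more evenly across steps.
diverse = [[0.40, 0.30, 0.30], [0.30, 0.40, 0.30],
           [0.30, 0.30, 0.40], [0.34, 0.33, 0.33]]

assert diversity_score(avg_out(diverse)) > diversity_score(avg_out(dull))
```

In this framing, MinAvgOut would add a term to the training loss that pushes this score up per batch, LFT would map the score to a label token prepended to the source sequence, and RL would treat it as a reward signal.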
Related papers
- Towards a Generalist and Blind RGB-X Tracker [91.36268768952755]
We develop a single model tracker that can remain blind to any modality X during inference time.
Our training process is extremely simple, integrating multi-label classification loss with a routing function.
Our generalist and blind tracker can achieve competitive performance compared to well-established modal-specific models.
arXiv Detail & Related papers (2024-05-28T03:00:58Z)
- EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) shows outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z)
- AdaMerging: Adaptive Model Merging for Multi-Task Learning [68.75885518081357]
This paper introduces an innovative technique called Adaptive Model Merging (AdaMerging).
It aims to autonomously learn the coefficients for model merging, either in a task-wise or layer-wise manner, without relying on the original training data.
Compared to the current state-of-the-art task arithmetic merging scheme, AdaMerging showcases a remarkable 11% improvement in performance.
arXiv Detail & Related papers (2023-10-04T04:26:33Z) - Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
arXiv Detail & Related papers (2022-12-19T20:46:43Z) - MEGA: Model Stealing via Collaborative Generator-Substitute Networks [4.065949099860426]
Recent data-free model stealing methods are shown to be effective at extracting the knowledge of the target model without using real query examples.
We propose a data-free model stealing framework, MEGA, which is based on collaborative generator-substitute networks.
Our results show that the accuracy of our trained substitute model and the adversarial attack success rate over it can be up to 33% and 40% higher than state-of-the-art data-free black-box attacks.
arXiv Detail & Related papers (2022-01-31T09:34:28Z) - Identifying and Mitigating Spurious Correlations for Improving
Robustness in NLP Models [19.21465581259624]
Many problems can be attributed to models exploiting spurious correlations, or shortcuts between the training data and the task labels.
In this paper, we aim to automatically identify such spurious correlations in NLP models at scale.
We show that our proposed method can effectively and efficiently identify a scalable set of "shortcuts", and mitigating these leads to more robust models in multiple applications.
arXiv Detail & Related papers (2021-10-14T21:40:03Z) - Model-based micro-data reinforcement learning: what are the crucial
model properties and which model to choose? [0.2836066255205732]
We contribute to micro-data model-based reinforcement learning (MBRL) by rigorously comparing popular generative models.
We find that on an environment that requires multimodal posterior predictives, mixture density nets outperform all other models by a large margin.
We also find that deterministic models are on par with their probabilistic counterparts; in fact, they consistently (although not significantly) outperform them.
arXiv Detail & Related papers (2021-07-24T11:38:25Z) - On the Discrepancy between Density Estimation and Sequence Generation [92.70116082182076]
Log-likelihood is highly correlated with BLEU when we consider models within the same family.
We observe no correlation between rankings of models across different families.
arXiv Detail & Related papers (2020-02-17T20:13:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.