AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses
- URL: http://arxiv.org/abs/2001.05467v1
- Date: Wed, 15 Jan 2020 18:32:06 GMT
- Title: AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses
- Authors: Tong Niu, Mohit Bansal
- Abstract summary: We build dialogue models that are dynamically aware of which utterances or tokens are dull, without any feature engineering.
The first model, MinAvgOut, directly maximizes the diversity score through the output distributions of each batch.
The second model, Label Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled by the diversity score to control the diversity level.
The third model, RL, adopts Reinforcement Learning and treats the diversity score as a reward signal.
- Score: 97.50616524350123
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many sequence-to-sequence dialogue models tend to generate safe,
uninformative responses. Various useful efforts have been made to eliminate
them. However, these approaches either improve decoding algorithms during
inference, rely on hand-crafted features, or employ complex models. In our
work, we build dialogue models that are dynamically aware of which utterances
or tokens are dull, without any feature engineering. Specifically, we
start with a simple yet effective automatic metric, AvgOut, which calculates
the average output probability distribution of all time steps on the decoder
side during training. This metric directly estimates which tokens are more
likely to be generated, thus making it a faithful evaluation of the model
diversity (i.e., for diverse models, the token probabilities should be more
evenly distributed rather than peaked at a few dull tokens). We then leverage
this novel metric to propose three models that promote diversity without losing
relevance. The first model, MinAvgOut, directly maximizes the diversity score
through the output distributions of each batch; the second model, Label
Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled
by the diversity score to control the diversity level; the third model, RL,
adopts Reinforcement Learning and treats the diversity score as a reward
signal. Moreover, we experiment with a hybrid model by combining the loss terms
of MinAvgOut and RL. All four models outperform their base LSTM-RNN model on
both diversity and relevance by a large margin, and are comparable to or better
than competitive baselines (also verified via human evaluation). Moreover, our
approaches are orthogonal to the base model, making them applicable as an
add-on to other emerging better dialogue models in the future.
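As a rough illustration of the metric, AvgOut can be computed directly from decoder logits. The following is a minimal NumPy sketch; the dull-token diversity score shown here is an illustrative simplification of how the paper turns AvgOut into a scalar training signal, not the paper's exact formulation:

```python
import numpy as np

def avgout(logits):
    """AvgOut: average the decoder's per-step output distributions.

    logits: array of shape (batch, time, vocab) with raw decoder scores.
    Returns a (batch, vocab) distribution; heavy mass on a handful of
    tokens signals a dull (low-diversity) model.
    """
    # numerically stable softmax over the vocabulary axis
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    return probs.mean(axis=1)  # average over time steps

def diversity_score(avg_dist, dull_token_ids):
    """Illustrative score: 1 minus the AvgOut mass on known dull tokens.

    A hypothetical simplification for exposition; the paper derives its
    diversity score from AvgOut without a hand-picked dull-token list.
    """
    dull_mass = avg_dist[:, dull_token_ids].sum(axis=-1)
    return 1.0 - dull_mass
```

MinAvgOut-style training would then add a loss term that pushes this score up for each batch, while LFT would bucket the score into a label prepended to the source sequence.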
Related papers
- Exact Byte-Level Probabilities from Tokenized Language Models for FIM-Tasks and Model Ensembles [23.134664392314264]
Tokenization is associated with many poorly understood shortcomings in language models (LMs).
This work studies how tokenization impacts model performance by analyzing and comparing models with their byte-level counterparts.
We develop a next-byte sampling algorithm that eliminates tokenization bias without requiring further training or optimization.
arXiv Detail & Related papers (2024-10-11T23:30:42Z)
- HM3: Heterogeneous Multi-Class Model Merging [0.0]
We explore training-free model merging techniques to consolidate auxiliary guard-rail models into a single, multi-functional model.
We propose Heterogeneous Multi-Class Model Merging (HM3) as a simple technique for merging multi-class classifiers with heterogeneous label spaces.
We report promising results for merging BERT-based guard models, some of which attain an average F1-score higher than the source models while reducing the inference time by up to 44%.
arXiv Detail & Related papers (2024-09-27T22:42:45Z)
- Quantile Regression for Distributional Reward Models in RLHF [1.8130068086063336]
We introduce Quantile Reward Models (QRMs), a novel approach to reward modeling that learns a distribution over rewards instead of a single scalar value.
Our method uses quantile regression to estimate a full, potentially multimodal distribution over preferences, providing a more powerful and nuanced representation of preferences.
Our experimental results show that QRM outperforms comparable traditional point-estimate models on RewardBench.
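The quantile-regression objective behind QRMs can be sketched with the standard pinball loss, which is minimized when the prediction equals the target's tau-quantile. This is a minimal NumPy illustration of that loss only; QRM's reward-model architecture and training setup are more involved:

```python
import numpy as np

def pinball_loss(pred, target, tau):
    """Quantile (pinball) loss for quantile level tau in (0, 1).

    Under-predictions are weighted by tau, over-predictions by (1 - tau),
    so the expected loss is minimized at the tau-quantile of `target`.
    """
    diff = target - pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))
```

Training one head per quantile level yields a discrete approximation of the full, possibly multimodal reward distribution.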
arXiv Detail & Related papers (2024-09-16T10:54:04Z)
- Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines [74.42485647685272]
We focus on Generative Masked Language Models (GMLMs).
We train a model to fit conditional probabilities of the data distribution via masking, which are subsequently used as inputs to a Markov Chain to draw samples from the model.
We adapt the T5 model for iteratively-refined parallel decoding, achieving 2-3x speedup in machine translation with minimal sacrifice in quality.
arXiv Detail & Related papers (2024-07-22T18:00:00Z)
- Towards a Generalist and Blind RGB-X Tracker [91.36268768952755]
We develop a single model tracker that can remain blind to any modality X during inference time.
Our training process is extremely simple, integrating multi-label classification loss with a routing function.
Our generalist and blind tracker can achieve competitive performance compared to well-established modal-specific models.
arXiv Detail & Related papers (2024-05-28T03:00:58Z)
- EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) shows outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z)
- Dataless Knowledge Fusion by Merging Weights of Language Models [51.8162883997512]
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models.
This creates a barrier to fusing knowledge across individual models to yield a better single model.
We propose a dataless knowledge fusion method that merges models in their parameter space.
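The simplest form of parameter-space merging is a weighted average of the models' weights. The sketch below shows that baseline only; the paper's actual method (regression-based merging that weights parameters by activation statistics) is more refined than plain averaging:

```python
import numpy as np

def merge_weights(models, coeffs=None):
    """Merge models in parameter space via a weighted average.

    models: list of dicts mapping parameter name -> np.ndarray,
            all with identical shapes (same architecture).
    coeffs: optional per-model weights; defaults to a uniform average.
    """
    coeffs = coeffs or [1.0 / len(models)] * len(models)
    merged = {}
    for name in models[0]:
        merged[name] = sum(c * m[name] for c, m in zip(coeffs, models))
    return merged
```

No training data is needed, which is what makes the fusion "dataless": only the fine-tuned checkpoints are consumed.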
arXiv Detail & Related papers (2022-12-19T20:46:43Z)
- Identifying and Mitigating Spurious Correlations for Improving Robustness in NLP Models [19.21465581259624]
Many problems can be attributed to models exploiting spurious correlations, or shortcuts between the training data and the task labels.
In this paper, we aim to automatically identify such spurious correlations in NLP models at scale.
We show that our proposed method can effectively and efficiently identify a scalable set of "shortcuts", and mitigating these leads to more robust models in multiple applications.
arXiv Detail & Related papers (2021-10-14T21:40:03Z)
- On the Discrepancy between Density Estimation and Sequence Generation [92.70116082182076]
Log-likelihood is highly correlated with BLEU when we consider models within the same family.
We observe no correlation between rankings of models across different families.
arXiv Detail & Related papers (2020-02-17T20:13:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.