Expertise Style Transfer: A New Task Towards Better Communication
between Experts and Laymen
- URL: http://arxiv.org/abs/2005.00701v1
- Date: Sat, 2 May 2020 04:50:20 GMT
- Title: Expertise Style Transfer: A New Task Towards Better Communication
between Experts and Laymen
- Authors: Yixin Cao, Ruihao Shui, Liangming Pan, Min-Yen Kan, Zhiyuan Liu and
Tat-Seng Chua
- Abstract summary: We propose a new task of expertise style transfer and contribute a manually annotated dataset.
Solving this task not only simplifies professional language, but also improves the accuracy and expertise level of laymen's descriptions.
We establish the benchmark performance of five state-of-the-art models for style transfer and text simplification.
- Score: 88.30492014778943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The curse of knowledge can impede communication between experts and laymen.
We propose a new task of expertise style transfer and contribute a manually
annotated dataset with the goal of alleviating such cognitive biases. Solving
this task not only simplifies professional language, but also improves the
accuracy and expertise level of descriptions that laymen write using simple words. This is
a challenging task, unaddressed in previous work, as it requires the models to
have expert intelligence in order to modify text with a deep understanding of
domain knowledge and structures. We establish the benchmark performance of five
state-of-the-art models for style transfer and text simplification. The results
demonstrate a significant gap between machine and human performance. We also
discuss the challenges of automatic evaluation, to provide insights into future
research directions. The dataset is publicly available at
https://srhthu.github.io/expertise-style-transfer.
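As a rough illustration of the automatic-evaluation setup whose limitations the paper discusses, a corpus-level BLEU comparison of system outputs against layman references might look like the sketch below; the file names are hypothetical and the paper's exact metric suite may differ.

```python
# Minimal sketch of automatic evaluation for expertise style transfer:
# corpus-level BLEU of system outputs against layman references.
# File names are hypothetical and the paper's metric suite may differ.
import sacrebleu

with open("system_outputs.txt") as f:
    hypotheses = [line.strip() for line in f]
with open("layman_references.txt") as f:
    references = [line.strip() for line in f]

# corpus_bleu takes a list of hypotheses and a list of reference
# streams (one stream per reference set).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")
```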
Related papers
- Harnessing the Intrinsic Knowledge of Pretrained Language Models for Challenging Text Classification Settings [5.257719744958367]
This thesis explores three challenging settings in text classification by leveraging the intrinsic knowledge of pretrained language models (PLMs).
We develop models that utilize features based on contextualized word representations from PLMs, achieving performance that rivals or surpasses human accuracy.
Lastly, we tackle the sensitivity of large language models to in-context learning prompts by selecting effective demonstrations.
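One common recipe for selecting effective in-context demonstrations is similarity-based retrieval; the following minimal sketch assumes that recipe, with precomputed embeddings as placeholders, and is not necessarily the thesis's actual method.

```python
# Sketch of similarity-based demonstration selection for in-context
# learning: retrieve the k labeled examples closest to the test input.
# The precomputed embeddings are placeholders; the thesis's actual
# selection criterion may differ.
import numpy as np

def select_demonstrations(test_emb: np.ndarray,
                          pool_embs: np.ndarray,
                          pool_examples: list[str],
                          k: int = 4) -> list[str]:
    # Cosine similarity between the test input and every candidate.
    pool_norm = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    test_norm = test_emb / np.linalg.norm(test_emb)
    sims = pool_norm @ test_norm
    # Indices of the k most similar candidates, best first.
    top_k = np.argsort(-sims)[:k]
    return [pool_examples[i] for i in top_k]
```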
arXiv Detail & Related papers (2024-08-28T09:07:30Z)
- Blending Reward Functions via Few Expert Demonstrations for Faithful and Accurate Knowledge-Grounded Dialogue Generation [22.38338205905379]
We leverage reinforcement learning algorithms to overcome the challenges of faithfulness and accuracy by introducing a novel reward function.
Our reward function combines an accuracy metric and a faithfulness metric to provide a balanced quality judgment of generated responses.
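Read literally, such a reward can be sketched as a convex combination of the two metrics; the weight and the component scores below are assumptions, not the paper's exact formulation.

```python
# Sketch of a blended reward: a convex combination of an accuracy
# score and a faithfulness score for a generated response. The weight
# alpha and the component metrics are assumptions.
def blended_reward(accuracy: float, faithfulness: float,
                   alpha: float = 0.5) -> float:
    assert 0.0 <= alpha <= 1.0
    return alpha * accuracy + (1.0 - alpha) * faithfulness

# e.g. blended_reward(0.9, 0.6) == 0.75
```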
arXiv Detail & Related papers (2023-11-02T02:42:41Z)
- Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
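Contrastive frameworks of this kind typically build on an InfoNCE-style objective; a minimal sketch follows, though SciMult's actual tasks and losses may differ.

```python
# Sketch of an InfoNCE-style contrastive loss, the standard objective
# behind multi-task contrastive frameworks; positive pairs sit on the
# diagonal of the similarity matrix. SciMult's exact losses may differ.
import torch
import torch.nn.functional as F

def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    # anchor, positive: (batch, dim) embeddings of paired texts.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.T / temperature  # (batch, batch)
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)
```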
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
- Leveraging Natural Supervision for Language Representation Learning and Generation [8.083109555490475]
We describe three lines of work that seek to improve the training and evaluation of neural models using naturally-occurring supervision.
We first investigate self-supervised training losses to help enhance the performance of pretrained language models for various NLP tasks.
We propose a framework that uses paraphrase pairs to disentangle semantics and syntax in sentence representations.
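A toy version of the paraphrase signal is a loss that pulls the semantic embeddings of a pair together while leaving a separate syntactic embedding unconstrained; the encoders are placeholders, and the paper's training objectives are richer than this single term.

```python
# Toy version of the paraphrase signal: pull the *semantic* embeddings
# of a paraphrase pair together, leaving a separate *syntactic*
# embedding unconstrained. Encoders are placeholders; the paper's
# training objectives are richer than this single term.
import torch
import torch.nn.functional as F

def semantic_alignment_loss(sem_a: torch.Tensor,
                            sem_b: torch.Tensor) -> torch.Tensor:
    # sem_a, sem_b: (batch, dim) semantic embeddings of paraphrases.
    return (1.0 - F.cosine_similarity(sem_a, sem_b, dim=-1)).mean()
```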
arXiv Detail & Related papers (2022-07-21T17:26:03Z)
- Leveraging Visual Knowledge in Language Tasks: An Empirical Study on Intermediate Pre-training for Cross-modal Knowledge Transfer [61.34424171458634]
We study whether integrating visual knowledge into a language model can fill the gap.
Our experiments show that visual knowledge transfer can improve performance in both low-resource and fully supervised settings.
arXiv Detail & Related papers (2022-03-14T22:02:40Z)
- Adversarial Imitation Learning with Trajectorial Augmentation and Correction [61.924411952657756]
We introduce a novel augmentation method which preserves the success of the augmented trajectories.
We develop an adversarial data augmented imitation architecture to train an imitation agent using synthetic experts.
Experiments show that our data augmentation strategy can improve the accuracy and convergence time of adversarial imitation.
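One plausible reading of "preserves the success" is a perturb-then-verify loop: add noise to expert actions and keep the trajectory only if replaying it still succeeds. The sketch below assumes a classic gym-style environment and a hypothetical success flag, not the paper's exact procedure.

```python
# Sketch of success-preserving augmentation: perturb expert actions
# with noise and keep the trajectory only if replaying it still
# succeeds. The gym-style env interface and the "success" flag are
# assumptions, not the paper's exact procedure.
import numpy as np

def replay_succeeds(env, actions) -> bool:
    env.reset()
    info = {}
    for a in actions:
        _, _, done, info = env.step(a)  # classic 4-tuple gym API
        if done:
            break
    return bool(info.get("success", False))  # hypothetical env flag

def augment_trajectory(env, actions, noise_std=0.05, seed=0):
    rng = np.random.default_rng(seed)
    noisy = [a + rng.normal(0.0, noise_std, size=np.shape(a))
             for a in actions]
    return noisy if replay_succeeds(env, noisy) else None
```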
arXiv Detail & Related papers (2021-03-25T14:49:32Z)
- K-XLNet: A General Method for Combining Explicit Knowledge with Language Model Pretraining [5.178964604577459]
We focus on improving model pretraining by leveraging explicit knowledge.
Specifically, we first match knowledge facts from a knowledge graph (KG) and then add a knowledge injection layer directly to the transformer.
The experimental results show that simply adding external knowledge to the transformer can improve learning performance on many NLP tasks.
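A generic form of such a layer gates projected KG entity embeddings into the token hidden states; the wiring below is an assumption, not K-XLNet's exact layer.

```python
# Generic sketch of a knowledge injection layer: gate projected KG
# entity embeddings into the transformer's token hidden states.
# The wiring is an assumption; K-XLNet's actual layer may differ.
import torch
import torch.nn as nn

class KnowledgeInjection(nn.Module):
    def __init__(self, hidden_dim: int, entity_dim: int):
        super().__init__()
        self.proj = nn.Linear(entity_dim, hidden_dim)
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, hidden: torch.Tensor,
                entity: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq, hidden_dim); entity: (batch, seq,
        # entity_dim), zero where no KG fact matched the token.
        ent = self.proj(entity)
        g = torch.sigmoid(self.gate(torch.cat([hidden, ent], dim=-1)))
        return hidden + g * ent
```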
arXiv Detail & Related papers (2021-03-25T06:14:18Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
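The entity masking idea can be sketched as masking whole linked-entity spans, rather than independent random tokens, before computing the MLM loss; the mask id and span format below are placeholders.

```python
# Sketch of entity-level masking: mask every token of a linked entity
# span, rather than independent random tokens, before the MLM loss.
# The mask id and span format are placeholders.
import torch

MASK_ID = 103  # BERT's [MASK] id; an assumption here

def mask_entities(input_ids: torch.Tensor,
                  entity_spans: list[tuple[int, int]]) -> torch.Tensor:
    # entity_spans: (start, end) token offsets of entity mentions,
    # end exclusive.
    masked = input_ids.clone()
    for start, end in entity_spans:
        masked[start:end] = MASK_ID
    return masked
```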
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
- Knowledge Guided Metric Learning for Few-Shot Text Classification [22.832467388279873]
Inspired by human intelligence, we propose to introduce external knowledge into few-shot learning to imitate human knowledge.
We demonstrate that our method outperforms the state-of-the-art few-shot text classification models.
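Metric-based few-shot classification reduces to comparing query embeddings against class prototypes; the plain prototypical sketch below omits the paper's external-knowledge guidance.

```python
# Plain prototypical sketch of metric-based few-shot classification:
# label a query by its distance to class prototypes. The paper's
# external-knowledge guidance is not modeled here.
import torch

def prototypical_predict(support: torch.Tensor,
                         support_labels: torch.Tensor,
                         query: torch.Tensor,
                         n_classes: int) -> torch.Tensor:
    # support: (n_support, dim); query: (n_query, dim)
    protos = torch.stack([support[support_labels == c].mean(dim=0)
                          for c in range(n_classes)])  # (n_classes, dim)
    dists = torch.cdist(query, protos)                 # (n_query, n_classes)
    return dists.argmin(dim=-1)
```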
arXiv Detail & Related papers (2020-04-04T10:56:26Z)
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [64.22926988297685]
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
In this paper, we explore the landscape of introducing transfer learning techniques for NLP by a unified framework that converts all text-based language problems into a text-to-text format.
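The text-to-text recipe casts every problem as string-to-string with a task prefix; a minimal sketch of the input formatting follows, using prefixes from the paper's published examples.

```python
# Sketch of the text-to-text task casting: every problem becomes
# "task prefix + input" -> "target string". The prefixes follow the
# paper's published examples.
def to_text_to_text(task: str, text: str) -> str:
    prefixes = {
        "translate_en_de": "translate English to German: ",
        "summarize": "summarize: ",
        "cola": "cola sentence: ",
    }
    return prefixes[task] + text

# e.g. to_text_to_text("summarize", article) yields "summarize: <article>"
```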
arXiv Detail & Related papers (2019-10-23T17:37:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.