Generative Input: Towards Next-Generation Input Methods Paradigm
- URL: http://arxiv.org/abs/2311.01166v1
- Date: Thu, 2 Nov 2023 12:01:29 GMT
- Title: Generative Input: Towards Next-Generation Input Methods Paradigm
- Authors: Keyu Ding and Yongcan Wang and Zihang Xu and Zhenzhen Jia and Shijin
Wang and Cong Liu and Enhong Chen
- Abstract summary: We propose a novel Generative Input paradigm named GeneInput.
It uses prompts to handle all input scenarios and other intelligent auxiliary input functions, optimizing the model with user feedback to deliver personalized results.
The results demonstrate that we have achieved state-of-the-art performance for the first time on the Full-mode Key-sequence to Characters (FK2C) task.
- Score: 49.98958865125018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since the release of ChatGPT, generative models have achieved
tremendous success and become the de facto approach for various NLP tasks.
However, their application in the field of input methods remains
under-explored. Many neural network approaches have been applied to the
construction of Chinese input method engines (IMEs). Previous research often
assumed that the input pinyin was correct and focused on the
Pinyin-to-Character (P2C) task, which falls significantly short of meeting
users' demands. Moreover, previous research could not leverage user feedback
to optimize the model or provide personalized results. In this study, we
propose a novel Generative Input paradigm named GeneInput. It uses prompts to
handle all input scenarios and other intelligent auxiliary input functions,
optimizing the model with user feedback to deliver personalized results. The
results demonstrate that we have achieved state-of-the-art performance for
the first time on the Full-mode Key-sequence to Characters (FK2C) task. We
also propose a novel reward-model training method that eliminates the need
for additional manual annotations; its performance surpasses GPT-4 on tasks
involving intelligent association and conversational assistance. Compared to
traditional paradigms, GeneInput not only demonstrates superior performance
but also exhibits enhanced robustness, scalability, and online learning
capabilities.
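
To make the paradigm concrete, below is a minimal sketch of the two ideas the abstract highlights: a single prompt format covering every key-sequence mode, and candidate reranking by a reward signal mined from user feedback. The prompt template, function names, and toy reward are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch, not GeneInput itself: the prompt wording, the FK2C
# task tag, and the toy feedback-based reward below are all assumptions.
from typing import Callable, List

def build_fk2c_prompt(key_sequence: str) -> str:
    # One instruction-style prompt can cover every input mode
    # (full pinyin, abbreviations, 9-key layouts, noisy typing).
    return f"Task: FK2C\nKey sequence: {key_sequence}\nCharacters:"

def rerank_with_feedback(candidates: List[str],
                         reward: Callable[[str], float]) -> List[str]:
    # Personalization without manual annotation: order candidates by a
    # reward model trained from implicit feedback (which candidate the
    # user actually picked).
    return sorted(candidates, key=reward, reverse=True)

if __name__ == "__main__":
    print(build_fk2c_prompt("nihao"))
    history = {"你好": 5, "拟好": 1}  # toy per-user selection counts
    print(rerank_with_feedback(["拟好", "你好"], lambda c: history.get(c, 0)))
```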
Related papers
- MetaKP: On-Demand Keyphrase Generation [52.48698290354449]
We introduce on-demand keyphrase generation, a novel paradigm that requires keyphrases that conform to specific high-level goals or intents.
We present MetaKP, a large-scale benchmark comprising four datasets, 7500 documents, and 3760 goals across news and biomedical domains with human-annotated keyphrases.
We demonstrate the potential of our method to serve as a general NLP infrastructure, exemplified by its application in epidemic event detection from social media.
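As a rough illustration of the on-demand setting, a goal-conditioned prompt might look like the sketch below; the wording is an assumption, not MetaKP's actual template.

```python
# A sketch of goal-conditioned keyphrase prompting; phrasing is assumed.
def on_demand_keyphrase_prompt(document: str, goal: str) -> str:
    return (f"Goal: {goal}\n"
            f"Document: {document}\n"
            "List keyphrases that satisfy the goal, separated by semicolons:")

# e.g., the epidemic event detection use case mentioned above:
print(on_demand_keyphrase_prompt(
    "Officials reported a growing cluster of flu cases in the region.",
    "disease outbreaks"))
```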
arXiv Detail & Related papers (2024-06-28T19:02:59Z) - Contrastive Transformer Learning with Proximity Data Generation for
Text-Based Person Search [60.626459715780605]
Given a descriptive text query, text-based person search aims to retrieve the best-matched target person from an image gallery.
Such a cross-modal retrieval task is quite challenging due to the significant modality gap, fine-grained differences, and the scarcity of annotated data.
In this paper, we propose a simple yet effective dual Transformer model for text-based person search.
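A minimal dual-encoder contrastive sketch in PyTorch follows; the paper's actual dual Transformer design and its proximity data generation are not reproduced, and the temperature is an assumed value.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    # Score every image against every text in the batch; matched
    # image-text pairs sit on the diagonal of the logit matrix.
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature
    targets = torch.arange(logits.size(0))
    # Symmetric cross-entropy pulls matched pairs together and pushes
    # mismatched descriptions and person images apart.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```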
arXiv Detail & Related papers (2023-11-15T16:26:49Z) - Automating Human Tutor-Style Programming Feedback: Leveraging GPT-4 Tutor Model for Hint Generation and GPT-3.5 Student Model for Hint Validation [25.317788211120362]
We investigate the role of generative AI models in providing human tutor-style programming hints.
Recent works have benchmarked state-of-the-art models for various feedback generation scenarios.
We develop a novel technique, GPT4Hints-GPT3.5Val, to push the limits of generative AI models.
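The title suggests a generate-then-validate pipeline; a hedged sketch is below. The prompts and the `strong_llm`/`weak_llm` callables are hypothetical stand-ins, not a real API.

```python
from typing import Callable, Optional

def tutor_hint(buggy_code: str, task: str,
               strong_llm: Callable[[str], str],
               weak_llm: Callable[[str], str],
               passes_tests: Callable[[str], bool]) -> Optional[str]:
    # The stronger model drafts a tutor-style hint.
    hint = strong_llm(f"Task: {task}\nBuggy program:\n{buggy_code}\n"
                      "Give one tutoring hint without revealing the fix:")
    # The weaker model role-plays the student: can it repair the code
    # given only the hint? Only validated hints are surfaced.
    attempt = weak_llm(f"Task: {task}\nBuggy program:\n{buggy_code}\n"
                       f"Hint: {hint}\nRewrite the program using the hint:")
    return hint if passes_tests(attempt) else None
```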
arXiv Detail & Related papers (2023-10-05T17:02:59Z) - Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models [82.75572875007755]
We argue that one of the factors hindering the development of prompt-tuning on NLG tasks is unfamiliar inputs.
This motivates us to propose input-tuning, which fine-tunes both the continuous prompts and the input representations.
Our proposed input-tuning is conceptually simple and empirically powerful.
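A minimal sketch of the idea: a frozen backbone embedding table, with trainable continuous prompts plus a trainable re-mapping of the input embeddings. The sizes and the linear adapter are assumptions.

```python
import torch
import torch.nn as nn

class InputTuning(nn.Module):
    def __init__(self, embed: nn.Embedding, n_prompt: int = 20):
        super().__init__()
        self.embed = embed                       # frozen PLM embeddings
        self.embed.weight.requires_grad_(False)
        d = embed.embedding_dim
        self.prompt = nn.Parameter(torch.randn(n_prompt, d) * 0.02)
        self.adapter = nn.Linear(d, d)           # re-maps unfamiliar inputs

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        x = self.adapter(self.embed(input_ids))             # tuned inputs
        p = self.prompt.expand(input_ids.size(0), -1, -1)   # tuned prompts
        return torch.cat([p, x], dim=1)  # feed into the frozen model

m = InputTuning(nn.Embedding(30000, 768))
hidden = m(torch.randint(0, 30000, (2, 16)))  # shape (2, 36, 768)
```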
arXiv Detail & Related papers (2022-03-07T05:04:32Z) - RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching for an input that maximizes a black-box objective function, given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
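The basic proxy-and-search loop looks roughly like the sketch below; RoMA's actual robust-adaptation step is not reproduced, and the simple clamp toward the offline data is a stand-in for avoiding adversarially optimized inputs.

```python
import torch

def proxy_search(proxy: torch.nn.Module, x0: torch.Tensor,
                 steps: int = 50, lr: float = 0.05,
                 radius: float = 1.0) -> torch.Tensor:
    x = x0.clone().requires_grad_(True)
    for _ in range(steps):
        score = proxy(x).sum()                   # proxy for the black box
        (grad,) = torch.autograd.grad(score, x)
        with torch.no_grad():
            x += lr * grad                       # ascend the proxy
            # Crude guard: stay near the offline data so the proxy is
            # not queried where it is unreliable.
            x.copy_(torch.min(torch.max(x, x0 - radius), x0 + radius))
    return x.detach()

best = proxy_search(torch.nn.Linear(4, 1), torch.zeros(1, 4))
```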
arXiv Detail & Related papers (2021-10-27T05:37:12Z) - Generative Adversarial Networks for Annotated Data Augmentation in Data
Sparse NLU [0.76146285961466]
Data sparsity is one of the key challenges associated with model development in Natural Language Understanding.
We present our results on boosting NLU model performance through training-data augmentation using a sequential generative adversarial network (GAN).
Our experiments reveal that synthetic data generated by the sequential GAN provides significant performance boosts across multiple metrics.
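At the pipeline level, the augmentation step might look like the sketch below; the generator and discriminator are hypothetical callables, and the adversarial training itself is not shown.

```python
from typing import Callable, List, Tuple

def augment(real: List[Tuple[str, str]],        # (utterance, intent) pairs
            generate: Callable[[str], str],     # GAN generator, by intent
            realism: Callable[[str], float],    # discriminator score
            per_intent: int = 100,
            threshold: float = 0.5) -> List[Tuple[str, str]]:
    intents = {intent for _, intent in real}
    synthetic = [(generate(i), i)
                 for i in intents for _ in range(per_intent)]
    # Keep only samples the discriminator judges realistic, then mix
    # them into the sparse training set.
    kept = [(u, i) for u, i in synthetic if realism(u) >= threshold]
    return real + kept
```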
arXiv Detail & Related papers (2020-12-09T20:38:17Z) - Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
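A simplified sketch of the blocking idea during decoding follows; the real Dynamic Blocking algorithm (e.g., any probabilistic block lists) may differ, and `next_token` is a hypothetical decoding hook.

```python
from typing import Callable, Dict, List, Set

def decode_with_blocking(source: List[str],
                         next_token: Callable[[List[str], Set[str]], str],
                         max_len: int = 30) -> List[str]:
    # Map each source token to its successor in the source sentence.
    follows: Dict[str, str] = dict(zip(source, source[1:]))
    out: List[str] = []
    while len(out) < max_len:
        # If the last output token was copied from the source, ban the
        # token that follows it there, discouraging verbatim copying.
        banned = {follows[out[-1]]} if out and out[-1] in follows else set()
        tok = next_token(out, banned)
        if tok == "</s>":
            break
        out.append(tok)
    return out
```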
arXiv Detail & Related papers (2020-10-24T11:55:28Z)