When Large Language Models Meet Personalization: Perspectives of
Challenges and Opportunities
- URL: http://arxiv.org/abs/2307.16376v1
- Date: Mon, 31 Jul 2023 02:48:56 GMT
- Title: When Large Language Models Meet Personalization: Perspectives of
Challenges and Opportunities
- Authors: Jin Chen, Zheng Liu, Xu Huang, Chenwang Wu, Qi Liu, Gangwei Jiang,
Yuanhao Pu, Yuxuan Lei, Xiaolong Chen, Xingmei Wang, Defu Lian and Enhong
Chen
- Abstract summary: The capability of large language models has been dramatically improved.
Such a major leap forward in general AI capacity will change how personalization is conducted.
By leveraging large language models as a general-purpose interface, personalization systems may compile user requests into plans.
- Score: 60.5609416496429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advent of large language models marks a revolutionary breakthrough in
artificial intelligence. With the unprecedented scale of training and model
parameters, the capability of large language models has been dramatically
improved, leading to human-like performance in understanding, language
synthesis, and common-sense reasoning. Such a major leap forward in general AI
capacity will change how personalization is conducted. For one thing, it will
reform the way humans interact with personalization systems. Instead of being a
passive medium of information filtering, large language models provide the
foundation for active user engagement. On top of this new foundation, user
requests can be proactively explored, and the information users require can be
delivered in a natural and explainable way. For another, it will considerably
expand the scope of personalization, growing it from the sole function of
collecting personalized information to the compound function of providing
personalized services. By leveraging large language models as a general-purpose
interface, personalization systems may compile user requests into plans, call
the functions of external tools to execute those plans, and integrate the
tools' outputs to complete end-to-end personalization tasks. Today, large
language models are still under development, while their application to
personalization remains largely unexplored. We therefore consider this the
right time to review the challenges in personalization and the opportunities to
address them with LLMs. In particular, we dedicate this perspective paper to
discussing the following aspects: the development of and challenges for
existing personalization systems, the newly emerged capabilities of large
language models, and the potential ways of making use of large language models
for personalization.
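To make the interface pattern above concrete, the following is a minimal
Python sketch of the compile-execute-integrate loop the abstract describes.
It is an illustration under stated assumptions: the call_llm helper and the
tool registry are hypothetical placeholders, not an interface proposed by
the paper.

import json
from typing import Callable

# Hypothetical registry of external tools the planner may invoke.
TOOLS: dict[str, Callable[[str], str]] = {
    "search_catalog": lambda q: f"top items for '{q}'",
    "fetch_user_history": lambda uid: f"recent activity of user {uid}",
}

def call_llm(prompt: str) -> str:
    """Placeholder for a call to any LLM completion endpoint."""
    raise NotImplementedError("wire this up to a model of your choice")

def personalize(request: str, user_id: str) -> str:
    # 1. Compile: ask the model to turn the request into a JSON plan.
    plan_prompt = (
        "Compile this request into a JSON list of steps, each of the form "
        f'{{"tool": <name>, "arg": <string>}}. Available tools: {list(TOOLS)}.\n'
        f"Request: {request}\nUser: {user_id}"
    )
    plan = json.loads(call_llm(plan_prompt))

    # 2. Execute: run each planned step with the named external tool.
    outputs = [TOOLS[step["tool"]](step["arg"]) for step in plan]

    # 3. Integrate: combine tool outputs into a natural, explainable answer.
    return call_llm(
        f"Using these tool outputs {outputs}, answer the request "
        f"{request!r} for user {user_id} and explain the recommendation."
    )

In a real system the JSON plan would also need validation and error handling;
the sketch only shows the division of labor between the model and the tools.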
Related papers
- Personalized Visual Instruction Tuning [30.677058613937067]
Multimodal large language models (MLLMs) can engage in general conversations but fail to conduct personalized dialogues targeting specific individuals.
This deficiency hinders the application of MLLMs in personalized settings, such as tailored visual assistants on mobile devices.
We introduce Personalized Visual Instruction Tuning (PVIT), a novel data curation and training framework designed to enable MLLMs to identify target individuals within an image.
arXiv Detail & Related papers (2024-10-09T17:46:53Z) - Unsupervised Human Preference Learning [7.959043497459107]
Large language models demonstrate impressive reasoning abilities but struggle to provide personalized content.
Existing methods, such as in-context learning and parameter-efficient fine-tuning, fall short in capturing the complexity of human preferences.
We propose a novel approach utilizing small parameter models as preference agents to generate natural language rules that guide a larger, pre-trained model (see the sketch after this list).
arXiv Detail & Related papers (2024-09-30T17:51:01Z) - PEFT-U: Parameter-Efficient Fine-Tuning for User Personalization [9.594958534074074]
We introduce the PEFT-U Benchmark: a new dataset for building and evaluating NLP models for user personalization.
We explore the challenge of efficiently personalizing LLMs to accommodate user-specific preferences in the context of diverse user-centered tasks.
arXiv Detail & Related papers (2024-07-25T14:36:18Z) - LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models [50.259006481656094]
We present a novel interactive application aimed at understanding the internal mechanisms of large vision-language models.
Our interface is designed to enhance the interpretability of the image patches, which are instrumental in generating an answer.
We present a case study of how our application can aid in understanding failure mechanisms in a popular large multi-modal model: LLaVA.
arXiv Detail & Related papers (2024-04-03T23:57:34Z) - Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond [87.1712108247199]
Our goal is to establish a Unified paradigm for Multi-modal Personalization systems (UniMP).
We develop a generic and personalized generative framework that can handle a wide range of personalized needs.
Our methodology enhances the capabilities of foundational language models for personalized tasks.
arXiv Detail & Related papers (2024-03-15T20:21:31Z) - Diffusion Language Models Can Perform Many Tasks with Scaling and
Instruction-Finetuning [56.03057119008865]
We show that scaling diffusion language models can effectively make them strong language learners.
We build competent diffusion language models at scale by first acquiring knowledge from massive data.
Experiments show that scaling diffusion language models consistently improves performance across downstream language tasks.
arXiv Detail & Related papers (2023-08-23T16:01:12Z) - A Survey of Large Language Models [81.06947636926638]
Language modeling has been widely studied for language understanding and generation in the past two decades.
Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora.
To distinguish models by parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.
arXiv Detail & Related papers (2023-03-31T17:28:46Z) - Towards Zero-shot Language Modeling [90.80124496312274]
We construct a neural model that is inductively biased towards learning human languages.
We infer this distribution from a sample of typologically diverse training languages.
We harness additional language-specific side information as distant supervision for held-out languages.
arXiv Detail & Related papers (2021-08-06T23:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.