Modeling Proficiency with Implicit User Representations
- URL: http://arxiv.org/abs/2110.08011v1
- Date: Fri, 15 Oct 2021 11:15:17 GMT
- Title: Modeling Proficiency with Implicit User Representations
- Authors: Kim Breitwieser, Allison Lahnala, Charles Welch, Lucie Flek, Martin
Potthast
- Abstract summary: Given a user's posts on a social media platform, the task is to identify the subset of posts or topics for which the user has some level of proficiency.
This enables the filtering and ranking of social media posts on a given topic as per user proficiency.
We investigate five alternative approaches to model proficiency, ranging from basic ones to an advanced, tailored user modeling approach.
- Score: 18.4163404453651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the problem of proficiency modeling: Given a user's posts on a
social media platform, the task is to identify the subset of posts or topics
for which the user has some level of proficiency. This enables the filtering
and ranking of social media posts on a given topic as per user proficiency.
Unlike experts on a given topic, proficient users may not have received formal
training or possess years of practical experience; rather, they may be
autodidacts, hobbyists, and people with a sustained interest, enabling them to
make genuine and original contributions to discourse. While predicting whether a user is an
expert on a given topic imposes strong constraints on who is a true positive,
proficiency modeling implies a graded scoring, relaxing these constraints. Put
another way, many active social media users can be assumed to possess, or
eventually acquire, some level of proficiency on topics relevant to their
community. We tackle proficiency modeling in an unsupervised manner by
utilizing user embeddings to model engagement with a given topic, as indicated
by a user's preference for authoring related content. We investigate five
alternative approaches to model proficiency, ranging from basic ones to an
advanced, tailored user modeling approach, applied within two real-world
benchmarks for evaluation.
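The unsupervised scoring described in the abstract can be sketched as follows. The embedding dimensionality, the averaging of post vectors, and the cosine-similarity scoring are illustrative assumptions for this sketch, not the paper's exact user modeling approach.

```python
import numpy as np

def proficiency_score(post_vecs, topic_vec):
    """Graded proficiency sketch: cosine similarity between a user's
    mean post embedding and a topic embedding. Higher scores indicate
    stronger engagement with the topic (illustrative, not the paper's
    exact model)."""
    user_vec = np.mean(post_vecs, axis=0)
    num = float(user_vec @ topic_vec)
    denom = np.linalg.norm(user_vec) * np.linalg.norm(topic_vec) + 1e-12
    return num / denom

# Toy example: a user whose posts cluster around the topic direction
# scores higher than a user with unrelated posts.
rng = np.random.default_rng(0)
topic = rng.normal(size=50)
on_topic_posts = topic + 0.1 * rng.normal(size=(20, 50))
off_topic_posts = rng.normal(size=(20, 50))
assert proficiency_score(on_topic_posts, topic) > \
       proficiency_score(off_topic_posts, topic)
```

Because the score is graded rather than binary, posts on a topic can be ranked by their authors' proficiency, as the abstract proposes.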
Related papers
- Towards a unified user modeling language for engineering human centered AI systems [1.7450893625541586]
A new wave of intelligent user interfaces, such as AI-based conversational agents, has the potential to enable such personalization. This paper presents the concepts of a unified user modeling language, aimed at combining previous approaches in a single proposal. A proof of concept has been developed that leverages user profiles modeled using our language to automatically adapt a conversational agent.
arXiv Detail & Related papers (2025-05-30T15:20:15Z) - topicwizard -- a Modern, Model-agnostic Framework for Topic Model Visualization and Interpretation [0.0]
We introduce topicwizard, a framework for model-agnostic topic model interpretation. It helps users examine the complex semantic relations between documents, words and topics learned by topic models.
arXiv Detail & Related papers (2025-05-19T12:19:01Z) - User Modeling in Model-Driven Engineering: A Systematic Literature Review [1.7450893625541586]
We conduct a systematic literature review to analyze existing proposals for user modeling in model-driven engineering (MDE) approaches.
The results showcase that there is a lack of a unified and complete user modeling perspective.
This limits the implementation of richer user interfaces able to better support the user-specific needs.
arXiv Detail & Related papers (2024-12-20T13:19:57Z) - Establishing Knowledge Preference in Language Models [80.70632813935644]
Language models are known to encode a great amount of factual knowledge through pretraining.
Such knowledge might be insufficient to cater to user requests.
When answering questions about ongoing events, the model should use recent news articles to update its response.
When some facts are edited in the model, the updated facts should override all prior knowledge learned by the model.
arXiv Detail & Related papers (2024-07-17T23:16:11Z) - The Art of Saying No: Contextual Noncompliance in Language Models [123.383993700586]
We introduce a comprehensive taxonomy of contextual noncompliance describing when and how models should not comply with user requests.
Our taxonomy spans a wide range of categories including incomplete, unsupported, indeterminate, and humanizing requests.
To test noncompliance capabilities of language models, we use this taxonomy to develop a new evaluation suite of 1000 noncompliance prompts.
arXiv Detail & Related papers (2024-07-02T07:12:51Z) - Towards Personalized Evaluation of Large Language Models with An
Anonymous Crowd-Sourcing Platform [64.76104135495576]
We propose a novel anonymous crowd-sourcing evaluation platform, BingJian, for large language models.
Through this platform, users have the opportunity to submit their questions, testing the models on a personalized and potentially broader range of capabilities.
arXiv Detail & Related papers (2024-03-13T07:31:20Z) - A User-Centered, Interactive, Human-in-the-Loop Topic Modelling System [32.065158970382036]
Human-in-the-loop topic modelling incorporates users' knowledge into the modelling process, enabling them to refine the model iteratively.
Recent research has demonstrated the value of user feedback, but there are still issues to consider.
We developed a novel, interactive human-in-the-loop topic modeling system with a user-friendly interface.
arXiv Detail & Related papers (2023-04-04T13:05:10Z) - Modeling User Behaviour in Research Paper Recommendation System [8.980876474818153]
A user intention model is proposed based on deep sequential topic analysis.
The model predicts a user's intention in terms of the topic of interest.
The proposed approach introduces a new road map for modeling user activity, suitable for the design of a research paper recommendation system.
arXiv Detail & Related papers (2021-07-16T11:31:03Z) - Model Learning with Personalized Interpretability Estimation (ML-PIE) [2.862606936691229]
High-stakes applications require AI-generated models to be interpretable.
Current algorithms for the synthesis of potentially interpretable models rely on objectives or regularization terms.
We propose an approach for the synthesis of models that are tailored to the user.
arXiv Detail & Related papers (2021-04-13T09:47:48Z) - Can You be More Social? Injecting Politeness and Positivity into
Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z) - Topic Modeling on User Stories using Word Mover's Distance [4.378337862197529]
This paper focuses on topic modeling as a means to identify topics within a large set of crowd-generated user stories.
We evaluate the approaches on a publicly available set of 2,966 user stories written and categorized by crowd workers.
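Word Mover's Distance, on which the entry above builds, can be sketched as a small optimal-transport program: moving one document's normalized word weights onto another's, at a cost given by distances between word embeddings. The toy 2-D embeddings and the use of scipy's linear programming solver are assumptions of this sketch, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def word_movers_distance(emb1, w1, emb2, w2):
    """Word Mover's Distance as a linear program: find the cheapest
    transport plan T >= 0 with row sums w1 and column sums w2, where
    moving mass from word i to word j costs the Euclidean distance
    between their embeddings."""
    n, m = len(w1), len(w2)
    # Pairwise ground costs between the two vocabularies.
    cost = np.linalg.norm(emb1[:, None, :] - emb2[None, :, :], axis=-1)
    # Equality constraints: each row of T sums to w1[i],
    # each column sums to w2[j].
    A_eq = []
    for i in range(n):
        row = np.zeros((n, m)); row[i, :] = 1.0
        A_eq.append(row.ravel())
    for j in range(m):
        col = np.zeros((n, m)); col[:, j] = 1.0
        A_eq.append(col.ravel())
    res = linprog(cost.ravel(), A_eq=np.array(A_eq),
                  b_eq=np.concatenate([w1, w2]),
                  bounds=(0, None), method="highs")
    return res.fun

# Toy check: moving all mass from (0, 0) to (3, 4) costs the
# Euclidean distance between the two points.
d = word_movers_distance(np.array([[0.0, 0.0]]), np.array([1.0]),
                         np.array([[3.0, 4.0]]), np.array([1.0]))
print(round(d, 6))  # 5.0
```

Two user stories are then "close" when little total mass has to move far, which is what makes the distance useful for grouping crowd-generated stories into topics.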
arXiv Detail & Related papers (2020-07-10T11:05:42Z) - Towards Open-World Recommendation: An Inductive Model-based
Collaborative Filtering Approach [115.76667128325361]
Recommendation models can effectively estimate underlying user interests and predict users' future behaviors.
We propose an inductive collaborative filtering framework that contains two representation models.
Our model achieves promising results for recommendation on few-shot users with limited training ratings and new unseen users.
arXiv Detail & Related papers (2020-07-09T14:31:25Z) - Plausible Counterfactuals: Auditing Deep Learning Classifiers with
Realistic Adversarial Examples [84.8370546614042]
The black-box nature of deep learning models has raised unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to mount a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.