Sparsity-aware neural user behavior modeling in online interaction
platforms
- URL: http://arxiv.org/abs/2202.13491v1
- Date: Mon, 28 Feb 2022 00:27:11 GMT
- Title: Sparsity-aware neural user behavior modeling in online interaction
platforms
- Authors: Aravind Sankar
- Abstract summary: We develop generalizable neural representation learning frameworks for user behavior modeling.
Our problem settings span transductive and inductive learning scenarios.
We leverage different facets of information reflecting user behavior to enable personalized inference at scale.
- Score: 2.4036844268502766
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern online platforms offer users an opportunity to participate in a
variety of content-creation, social networking, and shopping activities. With
the rapid proliferation of such online services, learning data-driven user
behavior models is indispensable to enable personalized user experiences.
Recently, representation learning has emerged as an effective strategy for user
modeling, powered by neural networks trained over large volumes of interaction
data. Despite their enormous potential, such models face the challenge of data
sparsity for the vast majority of entities, e.g., sparsity in ground-truth
labels and in entity-level interactions (cold-start users, long-tail items,
and ephemeral groups).
In this dissertation, we develop generalizable neural representation learning
frameworks for user behavior modeling designed to address different sparsity
challenges across applications. Our problem settings span transductive and
inductive learning scenarios, where transductive learning models entities seen
during training and inductive learning targets entities that are only observed
during inference. We leverage different facets of information reflecting user
behavior (e.g., interconnectivity in social networks, temporal and attributed
interaction information) to enable personalized inference at scale. Our
proposed models are complementary to concurrent advances in neural
architectural choices and are adaptive to the rapid addition of new
applications in online platforms.
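The transductive/inductive distinction above can be made concrete with a small sketch. The snippet below is an illustrative example only, not code from the dissertation: a transductive model looks up an embedding learned for each user ID seen during training, while an inductive model composes a representation from the user's observed interactions, so it can also encode cold-start users that first appear at inference time. All class and variable names are hypothetical.

```python
import torch
import torch.nn as nn

class TransductiveUserModel(nn.Module):
    """Hypothetical sketch: one learned embedding per user seen during training."""
    def __init__(self, num_users: int, dim: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)  # covers only known user IDs

    def forward(self, user_ids: torch.Tensor) -> torch.Tensor:
        return self.user_emb(user_ids)  # cannot handle users unseen at training time

class InductiveUserModel(nn.Module):
    """Hypothetical sketch: build a user vector from interacted-item features,
    so brand-new (cold-start) users can still be encoded."""
    def __init__(self, item_feat_dim: int, dim: int = 64):
        super().__init__()
        self.item_proj = nn.Linear(item_feat_dim, dim)

    def forward(self, interaction_feats: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # interaction_feats: (batch, max_interactions, item_feat_dim)
        # mask: (batch, max_interactions), 1 for real interactions, 0 for padding
        h = torch.relu(self.item_proj(interaction_feats))
        denom = mask.sum(dim=1, keepdim=True).clamp(min=1)
        return (h * mask.unsqueeze(-1)).sum(dim=1) / denom  # masked mean pooling

if __name__ == "__main__":
    # The inductive encoder works for any user with a non-empty interaction history,
    # even if that user ID never appeared during training.
    feats = torch.randn(2, 5, 32)  # 2 users, up to 5 interactions, 32-dim item features
    mask = torch.tensor([[1, 1, 1, 0, 0],
                         [1, 0, 0, 0, 0]], dtype=torch.float32)
    user_vecs = InductiveUserModel(item_feat_dim=32)(feats, mask)
    print(user_vecs.shape)  # torch.Size([2, 64])
```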
Related papers
- InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation [18.793275018467163]
This paper presents InFiConD, a novel framework that leverages visual concepts to implement the knowledge distillation process.
We develop a novel knowledge distillation pipeline based on extracting text-aligned visual concepts from a concept corpus.
InFiConD's interface allows users to interactively fine-tune the student model by manipulating concept influences directly in the user interface.
arXiv Detail & Related papers (2024-06-25T16:56:45Z)
- Online Training of Large Language Models: Learn while chatting [23.995637621755083]
This paper introduces a novel interaction paradigm, 'Online Training using External Interactions', which merges the benefits of persistent, real-time model updates with the flexibility for individual customization.
arXiv Detail & Related papers (2024-03-04T10:00:55Z)
- Foundation Models for Decision Making: Problems, Methods, and Opportunities [124.79381732197649]
Foundation models pretrained on diverse data at scale have demonstrated extraordinary capabilities in a wide range of vision and language tasks.
New paradigms are emerging for training foundation models to interact with other agents and perform long-term reasoning.
Research at the intersection of foundation models and decision making holds tremendous promise for creating powerful new systems.
arXiv Detail & Related papers (2023-03-07T18:44:07Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Opportunistic Federated Learning: An Exploration of Egocentric Collaboration for Pervasive Computing Applications [20.61034787249924]
We define a new approach, opportunistic federated learning, in which individual devices belonging to different users seek to learn robust models.
In this paper, we explore the feasibility and limits of such an approach, culminating in a framework that supports encounter-based pairwise collaborative learning.
arXiv Detail & Related papers (2021-03-24T15:30:21Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- Human Trajectory Forecasting in Crowds: A Deep Learning Perspective [89.4600982169]
We present an in-depth analysis of existing deep learning-based methods for modelling social interactions.
We propose two knowledge-based data-driven methods to effectively capture these social interactions.
We develop a large scale interaction-centric benchmark TrajNet++, a significant yet missing component in the field of human trajectory forecasting.
arXiv Detail & Related papers (2020-07-07T17:19:56Z)
- I Know Where You Are Coming From: On the Impact of Social Media Sources on AI Model Performance [79.05613148641018]
We study the performance of different machine learning models trained on multi-modal data from different social networks.
Our initial experimental results reveal that the choice of social network impacts model performance.
arXiv Detail & Related papers (2020-02-05T11:10:44Z)