Modeling Dynamic User Interests: A Neural Matrix Factorization Approach
- URL: http://arxiv.org/abs/2102.06602v1
- Date: Fri, 12 Feb 2021 16:24:21 GMT
- Title: Modeling Dynamic User Interests: A Neural Matrix Factorization Approach
- Authors: Paramveer Dhillon and Sinan Aral
- Abstract summary: We propose a model that combines the simplicity of matrix factorization with the flexibility of neural networks.
Our model decomposes a user's content consumption journey into nonlinear user and content factors.
We use our model to understand the dynamic news consumption interests of Boston Globe readers over five years.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, there has been significant interest in understanding users'
online content consumption patterns. But, the unstructured, high-dimensional,
and dynamic nature of such data makes extracting valuable insights challenging.
Here we propose a model that combines the simplicity of matrix factorization
with the flexibility of neural networks to efficiently extract nonlinear
patterns from massive text data collections relevant to consumers' online
consumption patterns. Our model decomposes a user's content consumption journey
into nonlinear user and content factors that are used to model their dynamic
interests. This natural decomposition allows us to summarize each user's
content consumption journey with a dynamic probabilistic weighting over a set
of underlying content attributes. The model is fast to estimate, easy to
interpret and can harness external data sources as an empirical prior. These
advantages make our method well suited to the challenges posed by modern
datasets. We use our model to understand the dynamic news consumption interests
of Boston Globe readers over five years. Thorough qualitative studies,
including a crowdsourced evaluation, highlight our model's ability to
accurately identify nuanced and coherent consumption patterns. These results
are supported by our model's superior and robust predictive performance over
several competitive baseline methods.
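The decomposition described in the abstract lends itself to a compact illustration. Below is a minimal, hedged sketch (not the authors' released code or exact architecture) of the general neural matrix factorization idea in PyTorch: user and time embeddings pass through a nonlinear tower whose softmax output acts as a dynamic probabilistic weighting over latent content attributes, while content embeddings pass through a second tower into the same attribute space. The class name, layer sizes, attribute count, and the coarse monthly time index are illustrative assumptions.

```python
# Illustrative sketch only; all hyperparameters and names are assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn


class NeuralMF(nn.Module):
    def __init__(self, n_users: int, n_items: int, n_attributes: int = 25,
                 emb_dim: int = 64, n_periods: int = 60):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, emb_dim)    # user factors
        self.item_emb = nn.Embedding(n_items, emb_dim)    # content factors
        self.time_emb = nn.Embedding(n_periods, emb_dim)  # coarse time index (e.g., month)
        # Nonlinear user tower: maps (user, time) to a probability vector over
        # latent content attributes -- the user's "dynamic interests".
        self.user_tower = nn.Sequential(
            nn.Linear(2 * emb_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, n_attributes),
        )
        # Nonlinear content tower: maps an item into the same attribute space.
        self.item_tower = nn.Sequential(
            nn.Linear(emb_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, n_attributes),
        )

    def forward(self, users, items, periods):
        u = torch.cat([self.user_emb(users), self.time_emb(periods)], dim=-1)
        interests = torch.softmax(self.user_tower(u), dim=-1)  # dynamic probabilistic weighting
        attributes = self.item_tower(self.item_emb(items))     # item-attribute scores
        # Predicted consumption propensity: interest-weighted attribute match.
        return (interests * attributes).sum(dim=-1)


# Usage: score how likely user 3 is to consume item 17 in period 12.
model = NeuralMF(n_users=1000, n_items=5000)
score = model(torch.tensor([3]), torch.tensor([17]), torch.tensor([12]))
```

The softmax output makes each user's interests at a given period directly interpretable as a probability distribution over content attributes, which is the kind of per-user summary the abstract describes.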
Related papers
- Corpus Considerations for Annotator Modeling and Scaling [9.263562546969695]
We show that the commonly used user token model consistently outperforms more complex models.
Our findings shed light on the relationship between corpus statistics and annotator modeling performance.
arXiv Detail & Related papers (2024-04-02T22:27:24Z)
- IGANN Sparse: Bridging Sparsity and Interpretability with Non-linear Insight [4.010646933005848]
IGANN Sparse is a novel machine learning model from the family of generalized additive models.
It promotes sparsity through a non-linear feature selection process during training.
This ensures interpretability through improved model sparsity without sacrificing predictive performance.
arXiv Detail & Related papers (2024-03-17T22:44:36Z)
- Dynamic Model Switching for Improved Accuracy in Machine Learning [0.0]
We introduce an adaptive ensemble that intuitively transitions between CatBoost and XGBoost.
The user sets a benchmark, say 80% accuracy, and the system dynamically shifts to the other model only if it guarantees improved performance.
This dynamic model-switching mechanism aligns with the evolving nature of data in real-world scenarios.
arXiv Detail & Related papers (2024-01-31T00:13:02Z)
- CoDBench: A Critical Evaluation of Data-driven Models for Continuous Dynamical Systems [8.410938527671341]
We introduce CoDBench, an exhaustive benchmarking suite comprising 11 state-of-the-art data-driven models for solving differential equations.
Specifically, we evaluate 4 distinct categories of models, viz., feed-forward neural networks, deep operator regression models, frequency-based neural operators, and transformer architectures.
We conduct extensive experiments, assessing the operators' capabilities in learning, zero-shot super-resolution, data efficiency, robustness to noise, and computational efficiency.
arXiv Detail & Related papers (2023-10-02T21:27:54Z)
- StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data [129.92449761766025]
We propose a novel data collection methodology that synchronously synthesizes images and dialogues for visual instruction tuning.
This approach harnesses the power of generative models, marrying the abilities of ChatGPT and text-to-image generative models.
Our research includes comprehensive experiments conducted on various datasets.
arXiv Detail & Related papers (2023-08-20T12:43:52Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- A Graph-Enhanced Click Model for Web Search [67.27218481132185]
We propose a novel graph-enhanced click model (GraphCM) for web search.
We exploit both intra-session and inter-session information to address the sparsity and cold-start problems.
arXiv Detail & Related papers (2022-06-17T08:32:43Z)
- Modeling Dynamic Attributes for Next Basket Recommendation [60.72738829823519]
We argue that modeling such dynamic attributes can boost recommendation performance.
We propose a novel Attentive network to model Dynamic attributes (named AnDa).
arXiv Detail & Related papers (2021-09-23T21:31:17Z)
- Meta-learning using privileged information for dynamics [66.32254395574994]
We extend the Neural ODE Process model to use additional information within the Learning Using Privileged Information setting.
We validate our extension with experiments showing improved accuracy and calibration on simulated dynamics tasks.
arXiv Detail & Related papers (2021-04-29T12:18:02Z)
- Uncover Residential Energy Consumption Patterns Using Socioeconomic and Smart Meter Data [2.9340691207364786]
We analyze the real-world smart meter data and extract load patterns using K-Medoids clustering.
We develop an analytical framework with feature selection and deep learning models to estimate the relationship between load patterns and socioeconomic features.
arXiv Detail & Related papers (2021-04-12T01:57:14Z)
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix applicability is confirmed via different examples, showing how it can be used in practice to promote RF model interpretability.
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.